Sensors, Volume 20, Issue 21 (November-1 2020) – 437 articles

Cover Story (view full-size image): Imaging technologies are being deployed on cabled observatory networks worldwide to monitor the biological activity of deep-sea organisms on unprecedented temporal scales. We propose a new pipeline for extracting information on the activity status of the iconic conservation species of deep-sea bubblegum coral Paragorgia arborea, based on synchronous collection of image and oceanographic data, image enhancement, supervised tagging of coral areas, CNN-based automated attribution of the polyps' open/closed status, time-series analysis, and multivariate ANN modeling. We indicate a route for the development of online tools for the real-time processing of multiparametric biological and oceanographic data sets, which are still missing from the data management structures of most cabled observatories. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
15 pages, 12413 KiB  
Article
Deep Learning Correction Algorithm for the Active Optics System
by Wenxiang Li, Chao Kang, Hengrui Guan, Shen Huang, Jinbiao Zhao, Xiaojun Zhou and Jinpeng Li
Sensors 2020, 20(21), 6403; https://doi.org/10.3390/s20216403 - 9 Nov 2020
Cited by 2 | Viewed by 2992
Abstract
The correction of wavefront aberration plays a vital role in active optics. Traditional correction algorithms based on the deformation of the mirror cannot effectively deal with disturbances in a real system. In this study, a new algorithm called the deep learning correction algorithm (DLCA) is proposed to compensate for wavefront aberrations and improve the correction capability. The DLCA consists of an actor network and a strategy unit. The actor network is used to establish a mapping of active optics systems with disturbances and to provide a search basis for the strategy unit, which increases the search speed; the strategy unit is used to optimize the correction force, which improves the accuracy of the DLCA. Notably, a heuristic search algorithm is applied to reduce the search time in the strategy unit. The simulation results show that the DLCA can effectively improve the correction capability and has good adaptability. Compared with the least squares algorithm (LSA), the proposed algorithm has better performance, indicating that the DLCA is more accurate and can be used in active optics. Moreover, the proposed approach can provide a new idea for further research on active optics.
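The abstract names the two components but gives no pseudocode. Below is a minimal numpy sketch of the two-stage idea, a proposal by a trained network followed by heuristic refinement; `actor` (any trained regressor mapping a sampled wavefront to actuator forces) and `influence` (a linear influence-function matrix, e.g., from finite element analysis) are hypothetical stand-ins, and the (1+1) evolutionary loop only stands in for the paper's strategy-unit search.

```python
import numpy as np

def dlca_correct(wavefront, actor, influence, iters=200, sigma=0.05, seed=0):
    """Stage 1: a trained actor network proposes correction forces.
    Stage 2: a (1+1)-style evolutionary search refines them to minimise
    the residual surface RMS under a linear influence-function model."""
    rng = np.random.default_rng(seed)

    def residual_rms(forces):
        return np.sqrt(np.mean((wavefront - influence @ forces) ** 2))

    best = actor(wavefront)                 # network proposal (search basis)
    best_rms = residual_rms(best)
    for _ in range(iters):                  # heuristic local refinement
        candidate = best + sigma * rng.standard_normal(best.shape)
        r = residual_rms(candidate)
        if r < best_rms:
            best, best_rms = candidate, r
    return best, best_rms
```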
(This article belongs to the Section Optical Sensors)
Figure 1. The support system of the standard mirror and finite element analysis.
Figure 2. The difference in structure between the traditional correction algorithms and the deep learning correction algorithm (DLCA): (a) the structure of the traditional correction algorithm; (b) the structure of the DLCA.
Figure 3. The process of the actor network.
Figure 4. Flow chart for the correction force search with the evolutionary strategy algorithm.
Figure 5. The working procedure of the DLCA.
Figure 6. The search process of the strategy unit.
Figure 7. Mirror shape before and after correction. Initial RMS (root mean square) of the mirror: (a) 0.26λ; (b) 0.44λ; (c) 0.68λ; (d) 0.84λ; (e) 1.07λ.
Figure 8. The correction results of LSA and DLCA in Zernike mode. Initial RMS of the mirror: (a) 0.26λ; (b) 0.44λ; (c) 0.68λ; (d) 0.84λ; (e) 1.07λ.
28 pages, 7930 KiB  
Article
Design Optimization of Resource Allocation in OFDMA-Based Cognitive Radio-Enabled Internet of Vehicles (IoVs)
by Joy Eze, Sijing Zhang, Enjie Liu and Elias Eze
Sensors 2020, 20(21), 6402; https://doi.org/10.3390/s20216402 - 9 Nov 2020
Cited by 12 | Viewed by 2872
Abstract
Joint optimal subcarrier and transmit power allocation with QoS guarantees for enhanced packet transmission over Cognitive Radio (CR)-Internet of Vehicles (IoVs) is a challenge, and this open issue is considered in this paper. A novel SNBS-based wireless radio resource scheduling scheme for OFDMA CR-IoV network systems is proposed, termed the SNBS OFDMA-based overlay CR-Assisted Vehicular NETwork (SNO-CRAVNET) scheduling scheme. It is proposed for efficient joint transmit power and subcarrier allocation for dynamic spectral resource access in cellular OFDMA-based overlay CRAVNs in clusters. The objectives of the optimization model applied in this study are (1) to maximize the overall system throughput of the CR-IoV system, (2) to avoid harmful interference with transmissions of the shared channels' licensed owners (or primary users (PUs)), (3) to guarantee the proportional fairness and minimum data-rate requirement of each CR vehicular secondary user (CRV-SU), and (4) to ensure efficient transmit power allocation amongst CRV-SUs. Furthermore, a novel approach which uses characteristics of the Lambert-W function is introduced. Closed-form analytical solutions were obtained by applying a time-sharing variable transformation. Finally, a low-complexity algorithm was developed, which avoids the iterative processes associated with searching for the optimal solution numerically through iterative programming methods. Theoretical analysis and simulation results demonstrated that, under similar conditions, the proposed solutions outperformed the reference scheduler schemes. In comparison with other fairness-considerate scheduling schemes, the SNO-CRAVNET scheme achieved a significantly higher overall average throughput gain. Similarly, the proposed time-sharing SNO-CRAVNET allocation based on the reformulated convex optimization problem is shown to be capable of achieving up to 99.987% of the average total theoretical capacity.
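The closed form itself is not reproduced in the abstract, but the general pattern behind Lambert-W-based allocation can be illustrated: optimality conditions of the form a·p·e^(b·p) = c are solved in closed form as p = W(bc/a)/b, avoiding numerical iteration. A sketch using `scipy.special.lambertw`, where a, b and c are placeholders for the scheduler's per-subcarrier constants:

```python
import numpy as np
from scipy.special import lambertw

def closed_form_power(a, b, c):
    """Solve a*p*exp(b*p) = c for p via the principal Lambert-W branch:
    (b*p)*exp(b*p) = b*c/a  =>  p = W(b*c/a) / b."""
    return np.real(lambertw(b * c / a)) / b

# per-subcarrier sanity check: the closed form reproduces the constraint
a, b, c = 2.0, 0.5, 3.0
p = closed_form_power(a, b, c)
assert np.isclose(a * p * np.exp(b * p), c)
```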
Figure 1. Illustration of a typical cognitive cell (CC) service area.
Figure 2. Illustration of the phases of the research study.
Figure 3. Performance evaluation using the achieved system throughput measured against the overall supplied transmit power, with the number of Cognitive Radio vehicular secondary users (CRV-SUs) = 7.
Figure 4. Performance evaluation using the achieved system throughput measured against the overall supplied transmit power, with the number of CRV-SUs = 14.
Figure 5. Performance evaluation using the overall achieved average throughput gain measured against a varying number of CRV-SUs.
Figure 6. Performance evaluation using the total transmit power gain measured against a varying number of CRV-SUs.
Figure 7. Resource allocation fairness performance evaluation using Jain's fairness index (JFI) measured against a varying number of CRV-SUs.
Figure 8. Performance evaluation using optimal throughput measured against the optimal supplied transmit power for CRV-SU 1.
Figure 9. Performance evaluation using optimal throughput measured against the optimal supplied transmit power for CRV-SU 2.
21 pages, 10144 KiB  
Article
Analysis on the Possibility of Eliminating Interference from Paraseismic Vibration Signals Induced by the Detonation of Explosive Materials
by Józef Pyra, Maciej Kłaczyński and Rafał Burdzik
Sensors 2020, 20(21), 6401; https://doi.org/10.3390/s20216401 - 9 Nov 2020
Cited by 2 | Viewed by 2859
Abstract
This article presents the results of studies, carried out in a reverberation chamber, on the impact of acoustic waves on geophones and microphones used to measure airblasts. During the tests, a number of test signals were generated, of which two are presented in this article: frequency-modulated sine (sine sweep) waves in the 30–300 Hz range, and the result of detonating 3 g of pyrotechnic material inside the chamber. Then, based on the short-time Fourier transform, the spectral subtraction method was used to remove unwanted disturbances interfering with the recorded signal. Using MATLAB, a program was written that was calibrated and adapted to the specifics of the measuring equipment based on the collected test results. As a result, it was possible to clean the signals of interference and obtain the vibration signal propagated through the substrate. The results are based on signals recorded in the laboratory and in field conditions during the detonation of explosive materials.
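As an illustration of the STFT-based spectral subtraction step, here is a minimal sketch assuming a separately recorded noise-only segment; the window length and spectral floor are illustrative, whereas the authors' MATLAB program was calibrated to the specifics of their measuring equipment.

```python
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(x, noise, fs, nperseg=4096, floor=0.02):
    """Subtract the average magnitude spectrum of a noise-only recording
    from the STFT of the measured signal, keeping the original phase."""
    _, _, X = stft(x, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    noise_mag = np.abs(N).mean(axis=1, keepdims=True)   # mean noise spectrum
    mag = np.maximum(np.abs(X) - noise_mag, floor * np.abs(X))  # spectral floor
    _, x_clean = istft(mag * np.exp(1j * np.angle(X)), fs=fs, nperseg=nperseg)
    return x_clean
```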
Figure 1. Picture of sensors in the reverberation chamber.
Figure 2. Distribution of sensors in the reverberation chamber.
Figure 3. Time courses of the sine sweep, 30–300 Hz range: beginning of the signal (a), end of the signal (b).
Figure 4. FFT analysis of the sine sweep: 30–300 Hz; 3 s of signal; fs = 51,200 Hz; nfft = 4096 samples.
Figure 5. Test No. I: vibration seismogram (three directional components) and airblast pressure record with FFT analysis; sine sweep in the range of 30–300 Hz; stands 3 and 4.
Figure 6. Test No. I: vibration seismogram (three directional components) and airblast pressure record with FFT analysis; sine sweep in the range of 30–300 Hz; stands 1 and 2.
Figure 7. Test No. I: vibration seismogram (three directional components) and airblast pressure record with FFT analysis; sine sweep in the range of 30–300 Hz; stands 1, 2 and 4.
Figure 8. Test No. I: sound pressure level waveform, sine sweep in the range of 30–300 Hz (ZZ17); stand 5.
Figure 9. Test No. I: spectrum of averaged sound pressure levels in 1/3-octave bands; sine sweep in the range of 30–300 Hz and background noise (Leq_background); stand 5.
Figure 10. Test No. II: vibration seismogram (three directional components) and airblast pressure record together with FFT analysis; pyrotechnic material explosion; stands 1, 2 and 4.
Figure 11. Test No. II: sound pressure level, pyrotechnic material explosion; stand 5.
Figure 12. Test No. II: spectrum of maximum sound pressure level in 1/3-octave bands; pyrotechnic material explosion and background noise (Leq_background); stand 5.
Figure 13. The result of filtration applied to the data from the recording shown in Figure 5 (test No. I) for the vertical component of the sensor placed on the mat.
Figure 14. The result of filtration applied to the data from the recording shown in Figure 6 (test No. I) for the vertical component of a sensor suspended on the structure.
Figure 15. The result of filtration applied to the data from the recording shown in Figure 7 (test No. I) for the vertical component from a sensor located directly on the floor.
Figure 16. The result of filtration for a vertical component from a sensor located outside the building.
Figure 17. The result of filtration for the vertical component from a sensor placed inside the building—an internal microphone.
Figure 18. The result of filtering for the vertical component from a sensor located inside the building—an external microphone.
Figure 19. The result of filtration for the vertical component from a sensor located outside the building—detonation of dynamite material placed in a short blast hole.
Figure 20. The result of filtration for the vertical component from a sensor placed inside the building—detonation of dynamite material placed in a short blast hole.
Figure 21. The result of filtration for the vertical component—testing the fall of a structure to the ground.
Figure 22. The result of filtration for the vertical component—firing a series of long-hole blasts.
15 pages, 10773 KiB  
Article
A Novel Routing Algorithm for the Acceleration of Flow Scheduling in Time-Sensitive Networks
by Jheng-Yu Huang, Ming-Hung Hsu and Chung-An Shen
Sensors 2020, 20(21), 6400; https://doi.org/10.3390/s20216400 - 9 Nov 2020
Cited by 5 | Viewed by 2979
Abstract
The IEEE Time-Sensitive Networking (TSN) Task Group specifies a series of standards, such as 802.1Qbv, for enhancing the management of time-critical flows in real-time networks. Under the IEEE 802.1Qbv standard, a scheduling algorithm is employed to determine the times at which a specific gate in the network entities is opened or closed, so that the real-time requirements of the flows are guaranteed. The computation time of this scheduling algorithm is critical for systems where dynamic network configurations and settings are required. In addition, the network routing, where the paths of the flows are determined, has a significant impact on the computation time of the network scheduling. This paper presents a novel scheduling-aware routing algorithm to minimize the computation time of the scheduling algorithm in network management. The proposed routing algorithm determines the path for each time-triggered flow while taking the period of the flow into consideration. This decreases the occurrence of path conflicts during the network scheduling stage. A detailed outline of the proposed algorithm is presented in this paper. The experimental results show that the proposed routing algorithm reduces the computation time of network scheduling by up to 30% and improves the schedulability of time-triggered flows in the network.
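A rough sketch of what a period-aware routing cost can look like: edges that already carry flows whose hyperperiod with the new flow is small are penalised, since such flows conflict more often during 802.1Qbv schedule synthesis. This illustrates the general idea only, not the paper's exact cost function; `adj` (adjacency lists) and `usage` (periods of flows already routed over each edge) are hypothetical structures.

```python
import heapq
from math import gcd

def route_flow(adj, src, dst, period, usage, alpha=100.0):
    """Dijkstra where an edge's weight grows for every flow it already
    carries whose hyperperiod lcm(period, p) with the new flow is small,
    i.e. flows that would collide often in the 802.1Qbv schedule."""
    def weight(u, v):
        flows = usage.get((u, v), ())
        # alpha * gcd / (period * p) == alpha / lcm(period, p)
        return 1.0 + sum(alpha * gcd(period, p) / (period * p) for p in flows)

    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + weight(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst != src and dst not in prev:
        return None                          # unreachable
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```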
(This article belongs to the Section Communications)
Figure 1. An example of the conflict-free edge.
Figure 2. An example of two different routing schemes, shown in (a) and (b), respectively.
Figure 3. The practical experiment for the example of two different routing schemes, shown in (a) and (b), respectively.
Figure 4. The overall procedure and the procedure for the routing stage of the proposed algorithm.
Figure 5. An example that illustrates the operation of the proposed algorithm. (a) The mesh topology with 9 switches and 6 hosts. (b) The shortest path of flow F1. (c) The path with the minimum cost of flow F2. (d) The result of routing flow F3 with visited nodes, the source node, and the destination node. (e) The result of updating the weight of each edge along the path of flow F3. (f) The result of routing flow F4 with visited nodes, the source node, and the destination node. (g) The result of computing the weight of each edge that flow F4 passes through. (h) The result of rerouting flow F4 with all nodes in the topology.
Figure 6. The timelines along which flows are transmitted on the routed paths for the example shown in Figure 5.
Figure 7. The frame offsets with thresholds and different numbers of flows.
Figure 8. The computation time in routing with thresholds and numbers of flows.
Figure 9. The computation time in scheduling with thresholds and numbers of flows.
Figure 10. The overall edge weight with thresholds and different numbers of flows.
Figure 11. Computation time in (a) routing and (b) scheduling.
Figure 12. The overall computation time, including routing and scheduling.
25 pages, 12871 KiB  
Article
Physically Plausible Spectral Reconstruction
by Yi-Tun Lin and Graham D. Finlayson
Sensors 2020, 20(21), 6399; https://doi.org/10.3390/s20216399 - 9 Nov 2020
Cited by 16 | Viewed by 4631
Abstract
Spectral reconstruction algorithms recover spectra from RGB sensor responses. Recent methods—with the very best algorithms using deep learning—can already solve this problem with good spectral accuracy. However, the recovered spectra are physically incorrect in that they do not induce the RGBs from which they are recovered. Moreover, if the exposure of the RGB image changes, then the recovery performance often degrades significantly—i.e., most contemporary methods only work for a fixed exposure. In this paper, we develop a physically accurate recovery method: the spectra we recover provably induce the same RGBs. Key to our approach is the idea that the set of spectra that integrate to the same RGB can be expressed as the sum of a unique fundamental metamer (spanned by the camera's spectral sensitivities and linearly related to the RGB) and a linear combination of a vector space of metameric blacks (orthogonal to the spectral sensitivities). Physically plausible spectral recovery amounts to finding a spectrum that adheres to the fundamental metamer plus metameric black decomposition. To further ensure spectral recovery that is robust to changes in exposure, we incorporate exposure changes in the training stage of the developed method. In experiments we evaluate how well the methods recover spectra and predict the actual RGBs and the RGBs under different viewing conditions (changing illuminations and/or cameras). The results show that our method generally improves the state-of-the-art spectral recovery (with more stable performance when exposure varies) and provides zero colorimetric error. Moreover, our method significantly improves the color fidelity under different viewing conditions, with up to a 60% error reduction in some cases.
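The decomposition is easy to state concretely: with spectral sensitivities S (one column per colour channel), the fundamental metamer is the least-norm spectrum reproducing the RGB, and adding the metameric-black component of any estimate leaves the induced RGB unchanged. A minimal numpy illustration of that decomposition (the paper builds its full method and exposure handling on top of it):

```python
import numpy as np

def plausible_spectrum(rgb, S, r_est):
    """Project an estimated spectrum r_est onto the set of spectra that
    exactly integrate to rgb under the sensitivities S (n_bands x 3):
    fundamental metamer + metameric-black component of r_est."""
    pinv = S @ np.linalg.inv(S.T @ S)          # n_bands x 3
    r_fund = pinv @ rgb                        # unique fundamental metamer
    P_black = np.eye(S.shape[0]) - pinv @ S.T  # projector onto black space
    return r_fund + P_black @ r_est            # provably reproduces rgb

# check: the recovered spectrum induces exactly the input RGB
S = np.random.rand(31, 3); rgb = np.random.rand(3); r0 = np.random.rand(31)
assert np.allclose(S.T @ plausible_spectrum(rgb, S, r0), rgb)
```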
(This article belongs to the Special Issue Color & Spectral Sensors)
Figure 1. Our physical plausibility (color fidelity) test for SR.
Figure 2. The color errors introduced by polynomial regression SR [34] (left) and HSCNN-R [46] (right). The color errors are measured in CIE ΔE 2000 (ΔE00) [54].
Figure 3. Spectral reconstruction under varying exposure by linear regression [33] and HSCNN-R [46]. The spectral errors are calculated in mean relative absolute error (MRAE) [47,48].
Figure 4. The scene-relighting color fidelity of one example hyperspectral image recovered by the RBFN algorithm [36] and by our physically plausible modification of RBFN. The results are shown as error maps of CIE ΔE 2000 color differences (ΔE00) [54].
Figure 5. The HSCNN-R architecture [46]. "C" means 3×3 convolution and "R" refers to the ReLU activation.
Figure 6. Physically implausible (left) and physically plausible spectral reconstruction (right).
Figure 7. The standard SR scheme (top) versus our physically plausible SR scheme (bottom).
Figure 8. The comparison between drawing the scaling factor k from the straightforward uniform distribution (left) and from our proposed distribution (right).
Figure 9. Example scenes from the ICVL hyperspectral image database [45].
Figure 10. Visualizing the performance and generalizability (in mean MRAE) with respect to the different β factors chosen.
Figure 11. Target illuminants for scene relighting: CIE Illuminants A (left), E (middle) and D65 (right).
Figure 12. The spectral sensitivities of the ground-truth RGBs used for training (CIE 1964 color matching functions) and for testing (SONY IMX135, NIKON D810 and CANON 5DSR).
Figure 13. The reconstruction error maps of an example scene in terms of spectral accuracy (left; in MRAE), color fidelity (middle; in ΔE00) and color fidelity under CIE Illuminant A (right; in ΔE00).
11 pages, 2504 KiB  
Letter
Bacterial Respiration Used as a Proxy to Evaluate the Bacterial Load in Cooling Towers
by Stepan Toman, Bruno Kiilerich, Ian P.G. Marshall and Klaus Koren
Sensors 2020, 20(21), 6398; https://doi.org/10.3390/s20216398 - 9 Nov 2020
Viewed by 3575
Abstract
Evaporative cooling towers used to dissipate excess process heat are essential installations in a variety of industries. The constantly moist environment enables substantial microbial growth, causing both operative challenges (e.g., biocorrosion) and health risks due to the potential aerosolization of pathogens. Currently, bacterial levels are monitored using rather slow and infrequent sampling and cultivation approaches. In this study, we describe the use of metabolic activity, namely oxygen respiration, as an alternative measure of the bacterial load within cooling tower waters. This method is based on optical oxygen sensors that enable an accurate measurement of oxygen consumption within a closed volume. We show that oxygen consumption correlates with currently used cultivation-based methods (R² = 0.9648). The limit of detection (LOD) for respiration-based bacterial quantification was found to be 1.16 × 10⁴ colony forming units (CFU)/mL. Contrary to the cultivation method, this approach enables a faster assessment of the bacterial load, with a measurement time of just 30 min compared to the 48 h needed for cultivation-based measurements. Furthermore, this approach has the potential to be integrated and automated. Therefore, this method could contribute to more robust and reliable monitoring of bacterial contamination within cooling towers and subsequently increase operational stability and reduce health risks.
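The core measurement reduces to fitting the slope of the O2 decline in the closed vial. A minimal sketch with illustrative numbers (units, noise level and the mapping to CFU are placeholders; the paper's calibration and LOD come from its own plate-count data):

```python
import numpy as np

def respiration_rate(t_min, o2):
    """Oxygen consumption rate as the negative slope of a linear fit to
    the O2 trace recorded in the closed vial (concentration vs. minutes)."""
    slope, _ = np.polyfit(t_min, o2, 1)
    return -slope

# illustrative 30-min trace: linear O2 decline plus optode noise
rng = np.random.default_rng(1)
t = np.linspace(0, 30, 61)
o2 = 250.0 - 0.8 * t + rng.normal(0, 0.5, t.size)
rate = respiration_rate(t, o2)   # ~0.8 concentration units per minute
```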
(This article belongs to the Special Issue Optical Sensors for Water Monitoring)
Figure 1. Sketch of an evaporative cooling tower with a water reservoir, fill material and heat exchanger. Microbial communities can thrive within the humid environment, both in planktonic form as well as in biofilms on all surfaces.
Figure 2. (A) Picture of the measurement vial with the glass magnet (1), rubber stopper (2), thin glass capillary (3), temperature probe (4) and O2 sensor spot with the respective connector (5). (B) Three samples and one control measured at the same time within a water bath kept at 22 °C.
Figure 3. (A) Principal coordinates analysis based on Bray–Curtis distances between ASV relative abundances. The two most significant principal components are shown, with axes annotated with the percentage of variation explained. (B) Heatmap showing mean percentage abundances for all sample types for the 21 most abundant genera in the dataset. Rows labeled with names ending in "_ASVXX" show ASVs with undefined genera (only defined up to the family level) which are nonetheless more abundant than all genera in the top 21. vE6 is an uncultured family-level group within the Chlamydiales.
Figure 4. Comparison of the decrease in O2 concentration over time of a pure sample compared to a sample with added glucose and LB medium. The respective respiration rates were determined via linear regression and are displayed within the graph.
Figure 5. Examples of measured oxygen respiration of an undiluted sample, a 1 + 1 diluted sample and a 1 + 2 diluted sample compared to the negative control.
Figure 6. Correlation between the numbers of colony forming units (CFUs) in samples measured by plate counts and the oxygen consumption rates measured by the optode system. Data points represent means with the respective standard deviations for each sample (n = 5 for the undiluted sample, 2 for the 1 + 1 and 1 + 2 dilutions and 3 for the negative control). The linear regression shown by the blue dotted line has an R² = 0.9648.
19 pages, 3306 KiB  
Article
Time-Domain Blind ICI Compensation in Coherent Optical FBMC/OQAM System
by Binqi Wu, Jin Lu, Mingyi Gao, Hongliang Ren, Zichun Le, Yali Qin, Shuqin Guo and Weisheng Hu
Sensors 2020, 20(21), 6397; https://doi.org/10.3390/s20216397 - 9 Nov 2020
Cited by 3 | Viewed by 3643
Abstract
A blind discrete-cosine-transform-based phase noise compensation (BD-PNC) is proposed to compensate for the inter-carrier interference (ICI) in coherent optical offset-quadrature-amplitude-modulation-based filter-bank multicarrier (CO-FBMC/OQAM) transmission systems. Since the phase noise samples can be approximated by a discrete cosine transform (DCT) expansion in the time domain, a time-domain compensation model is built for the transmission system. According to the model, phase noise compensation (PNC) depends only on its DCT coefficients. Common phase error (CPE) compensation is first performed on the received signal. After that, a pre-decision is made on the subset of compensated signals with low decision error probability, and the pre-decision results are used as estimates of the transmitted signals to calculate the DCT coefficients. Such a partial pre-decision process reduces not only the decision error but also the complexity of the BD-PNC method, while keeping almost the same performance as in the case of a pre-decision over all compensated signals. Numerical simulations are performed to evaluate the performance of the proposed scheme for a 30 GBaud CO-FBMC/OQAM system. The simulation results show that its bit error rate (BER) performance is improved by more than one order of magnitude through the mitigation of the ICI in comparison with the traditional blind PNC scheme aiming only for CPE compensation.
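A heavily simplified sketch of the DCT-expansion step follows, ignoring the OQAM subsymbol structure and the preceding CPE stage; `rx` (received samples of one block), `pre_dec` (pre-decided symbol values at the reliable positions) and `idx` (those positions) are hypothetical inputs. The phase over one block is fitted with the first L DCT basis functions using only the reliable samples:

```python
import numpy as np

def bdpnc_phase_estimate(rx, pre_dec, idx, N, L=8):
    """Fit the time-domain phase noise of one block with the first L DCT
    basis functions, using only the samples idx where a pre-decision is
    reliable; returns the smoothed phase estimate for all N samples."""
    n = np.arange(N)
    # DCT-II basis: cos(pi * l * (2n + 1) / (2N)), columns l = 0..L-1
    basis = np.cos(np.pi * np.outer(2 * n + 1, np.arange(L)) / (2 * N))
    theta = np.angle(rx[idx] * np.conj(pre_dec))        # raw per-sample phase
    coeffs, *_ = np.linalg.lstsq(basis[idx], theta, rcond=None)
    return basis @ coeffs
```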
Figure 1. Block diagram of the CO-FBMC/OQAM BTB system.
Figure 2. Block diagram of the proposed blind phase noise compensation scheme.
Figure 3. Taking 16-OQAM as an example: signals with high decision error probability, falling in the three shaded regions â′n,m ∈ [−2 − δ/2, −2 + δ/2] ∪ [−δ/2, δ/2] ∪ [2 − δ/2, 2 + δ/2], are not used to perform the pre-decision.
Figure 4. Time-domain FBMC/OQAM transmitted blocks with overlapped structure.
Figure 5. (a) BER versus the width δ of the shaded rectangle in a high decision error probability region. (b) BER versus the length of the DCT coefficient, L. (c) BER versus the number of subcarriers, M.
Figure 6. (a) One realization of the real phase noise and its estimations after the M-BPS and BD-PNC methods at Δν·TS = 1.5 × 10⁻² and OSNR = 20 dB for 30 GBaud CO-FBMC 16-QAM BTB transmission systems with 1024 subcarriers. (b) Illustration of constellations before and after PNC employing M-BPS and BD-PNC at Δν·TS = 5 × 10⁻³ and OSNR = 27 dB for 30 GBaud CO-FBMC 64-QAM BTB systems with 1024 subcarriers.
Figure 7. OSNR penalty versus Δν·TS at a BER of 3.8 × 10⁻³ using M-BPS and the proposed schemes for CO-FBMC 4/16/64-QAM systems with 256 subcarriers (a), 512 subcarriers (b), and 1024 subcarriers (c).
46 pages, 2708 KiB  
Review
The Importance of Respiratory Rate Monitoring: From Healthcare to Sport and Exercise
by Andrea Nicolò, Carlo Massaroni, Emiliano Schena and Massimo Sacchetti
Sensors 2020, 20(21), 6396; https://doi.org/10.3390/s20216396 - 9 Nov 2020
Cited by 254 | Viewed by 35274
Abstract
Respiratory rate is a fundamental vital sign that is sensitive to different pathological conditions (e.g., adverse cardiac events, pneumonia, and clinical deterioration) and stressors, including emotional stress, cognitive load, heat, cold, physical effort, and exercise-induced fatigue. The sensitivity of respiratory rate to these conditions is superior compared to that of most other vital signs, and the abundance of suitable technological solutions for measuring respiratory rate has important implications for healthcare, occupational settings, and sport. However, respiratory rate is still too often not routinely monitored in these fields of use. This review presents a multidisciplinary approach to respiratory monitoring, with the aim of improving the development and efficacy of respiratory monitoring services. We have identified thirteen monitoring goals where use of the respiratory rate is invaluable, and for each of them we have described suitable sensors and techniques to monitor respiratory rate in specific measurement scenarios. We have also provided a physiological rationale corroborating the importance of respiratory rate monitoring and an original multidisciplinary framework for the development of respiratory monitoring services. This review is expected to advance the field of respiratory monitoring and favor synergies between different disciplines to accomplish this goal.
Graphical abstract
Figure 1. Schematic representation of the monitoring goals described in this review and related examples of specific measurement scenarios.
Figure 2. Schematic representation of the interactions between respiratory physiology, applied sciences, and technological development. The figure shows how fruitful synergies between different disciplines are essential for the development of respiratory monitoring services.
Figure 3. Schematic representation of a simple model of ventilatory control (see Nicolò and Sacchetti [22] for further information). While respiratory rate (the behavioral component of minute ventilation) is substantially influenced by non-metabolic stressors, VT (the metabolic component of minute ventilation) satisfies the metabolic requirements of the human body. As such, VT is fine-tuned according to the levels of respiratory rate and the magnitude of metabolic inputs, while fR is influenced by VT to a lesser extent. This model explains why fR is more sensitive than VT to a variety of non-metabolic stressors and corroborates the importance of fR monitoring in different fields of use.
Figure 4. A conceptual framework for the development of respiratory monitoring services. The framework is composed of ten steps that are numbered and listed on the left-hand side of the figure. Each of the ten steps is accompanied by a graphical example reported on the right-hand side of the figure. Panel (A) reports the thirteen monitoring goals described in this review. The graph in panel (B) is reproduced from Massaroni et al. [5]. The graph in panel (C) is reproduced from Naranjo-Hernández et al. [157]. Panel (D) provides an example of the output of some of the sensors used to detect apnea events in sleep laboratories. The graph in panel (E) is slightly modified from Massaroni et al. [24]. The graph in panel (F) is slightly modified from Lo Presti et al. [290]. The graph in panel (G) is reproduced from Tomasic et al. [293]. Panel (H) provides an example of data transmission performance evaluation. The graph in panel (I) is slightly modified from Quinten et al. [116]. The graph in panel (J) is slightly modified from Gerry et al. [110].
Figure 5. Schematic representation of how respiratory rate (values expressed both in breaths/min and in Hz) may change in response to different stressors. The range of respiratory rate values reported for each stressor has been defined according to the cited references (numbers in square brackets), but these values should only be considered as plausible examples. fR values refer to adults if not otherwise stated. * It is not unusual to observe fR values higher than 65 breaths/min. Mos, months.
11 pages, 3089 KiB  
Letter
GeSi Nanocrystals Photo-Sensors for Optical Detection of Slippery Road Conditions Combining Two Classification Algorithms
by Catalin Palade, Ionel Stavarache, Toma Stoica and Magdalena Lidia Ciurea
Sensors 2020, 20(21), 6395; https://doi.org/10.3390/s20216395 - 9 Nov 2020
Cited by 10 | Viewed by 3138
Abstract
One of the key elements in assessing traffic safety on roads is the detection of asphalt conditions. In this paper, we propose an optical sensor based on GeSi nanocrystals embedded in a SiO2 matrix that discriminates between different slippery road conditions (wet and icy asphalt and asphalt covered with dirty ice) with respect to dry asphalt. The sensor is fabricated by magnetron sputtering deposition followed by rapid thermal annealing. The photodetector has spectral sensitivity in the 360–1350 nm range and a signal-to-noise ratio of 10²–10³. The working principle of the sensor setup for the detection of road conditions is based on the photoresponse (photocurrent) of the sensor under illumination with the light reflected from the asphalt, which has different reflection coefficients for dry, wet and icy asphalt and dirty ice coatings. For this, the asphalt is illuminated sequentially with 980 and 1064 nm laser diodes. A database of these photocurrents is obtained for the different road conditions. We show that the use of both k-nearest neighbor and artificial neural network classification algorithms enables a more accurate recognition of the class corresponding to a specific road state than in the case of using only one algorithm. This is achieved by comparing the new output sensor data with previously classified data for each algorithm and then performing an intersection of the algorithms' results.
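A sketch of the intersection rule using scikit-learn, where the features are the photocurrents measured under the 980 and 1064 nm diodes and all hyperparameters are illustrative:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

def classify_road(train_X, train_y, photocurrents):
    """Accept a road-state label only when the KNN and ANN classifiers
    agree; disagreements are returned as 'unknown' for re-measurement.
    train_X: (n_samples, 2) photocurrents; train_y: road-state labels."""
    knn = KNeighborsClassifier(n_neighbors=5).fit(train_X, train_y)
    ann = MLPClassifier(hidden_layer_sizes=(16,),
                        max_iter=2000).fit(train_X, train_y)
    k_pred = knn.predict(photocurrents)
    a_pred = ann.predict(photocurrents)
    return np.where(k_pred == a_pred, k_pred, "unknown")
```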
(This article belongs to the Section Optical Sensors)
Graphical abstract
Figure 1. The workflow for obtaining the GeSi NCs:SiO2/SiO2/n-Si photodetector.
Figure 2. Low-magnification XTEM image of the GeSi NCs:SiO2/SiO2/n-Si structure. Inset: an HRTEM image of a spherical GeSi NC.
Figure 3. Spectral dependence of the photocurrent measured on the GeSi NCs:SiO2 photodetector.
Figure 4. (a) The working principle and (b) the workflow of the sensor setup.
Figure 5. The electric circuit of the laser diode power supply.
Figure 6. Experimental results obtained by multiple measurements of the photocurrent under the two laser diode illuminations in the case of dry, wet and icy asphalt and dirty ice (frozen monolith of mixed asphalt powder, dust and water).
Figure 7. K-nearest neighbor (KNN) algorithm: (a) the working principle and (b) the KNN algorithm applied to array data.
Figure 8. Artificial neural network (ANN) algorithm: (a) the working principle and (b) the ANN algorithm applied to array data.
Figure 9. KNN and ANN classification intersection: (a) a schematic example of the intersection of the classification results of the KNN and ANN algorithms for the dirty ice state and (b) results of the KNN and ANN intersection applied to the array data.
13 pages, 576 KiB  
Letter
Use of Functional Linear Models to Detect Associations between Characteristics of Walking and Continuous Responses Using Accelerometry Data
by William F. Fadel, Jacek K. Urbanek, Nancy W. Glynn and Jaroslaw Harezlak
Sensors 2020, 20(21), 6394; https://doi.org/10.3390/s20216394 - 9 Nov 2020
Cited by 2 | Viewed by 2701
Abstract
Various methods exist to measure physical activity. Subjective methods, such as diaries and surveys, are relatively inexpensive ways of measuring one's physical activity; however, they are prone to measurement error and bias due to self-reporting. Wearable accelerometers offer a non-invasive and objective measure of one's physical activity and are now widely used in observational studies. Accelerometers record high-frequency data, each producing an unlabeled time series at the sub-second level. An important activity to identify from the data collected is walking, since it is often the only form of activity for certain populations. Currently, most methods use an activity summary which ignores the nuances of walking data. We propose methodology to model specific continuous responses with a functional linear model utilizing spectra obtained from the local fast Fourier transform (FFT) of walking as a predictor. Utilizing prior knowledge of the mechanics of walking, we incorporate this as additional information for the structure of our transformed walking spectra. The methods were applied to the in-the-laboratory data obtained from the Developmental Epidemiologic Cohort Study (DECOS).
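A generic scalar-on-function regression sketch: each participant's realigned walking spectrum is the functional predictor, and the coefficient function β is expanded in a basis. The paper uses walking-specific basis structure informed by gait mechanics; a plain Fourier basis is used here purely for illustration.

```python
import numpy as np

def fit_flm(spectra, y, n_basis=8):
    """Scalar-on-function regression y_i = integral of X_i(s) * beta(s) ds,
    with beta expanded in a Fourier basis over the rescaled domain [0, 1].
    spectra: (n_subjects, n_grid) realigned walking spectra; y: responses."""
    n, p = spectra.shape
    s = np.linspace(0, 1, p)
    B = np.column_stack(
        [np.ones(p)]
        + [f(np.pi * (k + 1) * s) for k in range(n_basis)
           for f in (np.sin, np.cos)])
    Z = spectra @ B / p                    # numerical integration X_i * basis
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return B @ coef                        # estimated beta on the grid s
```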
Figure 1. Triaxial accelerometer data from the 400 m walk for a single individual (top left) and a zoomed 10 s window (top right). Vector magnitude from the 400 m walk for the same individual (bottom left) and a zoomed 10 s window (bottom right).
Figure 2. Pre-processing data. Observed FFT spectra for one participant as described in step 4 of Algorithm 1 (top left). Observed spectra realigned into the order domain for the same participant as described in step 6 of Algorithm 1 (top right). Average realigned spectra for all participants as described in step 7 of Algorithm 1 (bottom left). Scaled average spectra for all participants as described in step 9 of Algorithm 1 (bottom right).
Figure 3. Pre-processed walking spectra (top) and basis functions used for modeling (bottom). The x-axis represents multiples of the frequency of the cadence.
Figure 4. Estimates of the coefficient function β̃ (with 95% point-wise confidence band) for the association of walking with age and BMI, as described in Section 4. The x-axis represents multiples of the frequency of the cadence.
19 pages, 3099 KiB  
Article
Hybrid Routing, Modulation, Spectrum and Core Allocation Based on Mapping Scheme
by Edson Rodrigues, Eduardo Cerqueira, Denis Rosário and Helder Oliveira
Sensors 2020, 20(21), 6393; https://doi.org/10.3390/s20216393 - 9 Nov 2020
Cited by 4 | Viewed by 2427
Abstract
With the persistently growing popularity of internet traffic, telecom operators are forced to provide high-capacity, cost-efficient, and performance-adaptive connectivity solutions to fulfill the requirements and increase their returns. However, the optical networks that make up the core of the Internet have gradually reached their physical transmission limits. Among the new solutions that have emerged, the Space-Division Multiplexing Elastic Optical Network is one of the best ways to deal with network depletion. However, it is necessary to use routing, modulation, spectrum, and core allocation (RMSCA) algorithms to establish lightpaths and set up connections in these networks. This article proposes a crosstalk-aware RMSCA algorithm that uses a multi-path and mapping scheme to improve resource allocation. The results show that the proposed algorithm decreases the blocking ratio by up to four orders of magnitude compared with other RMSCA algorithms in the literature.
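The end product of the mapping scheme is a single occupancy matrix per candidate path over which core and spectrum can be allocated. A first-fit sketch under that assumption, with the per-link matrices OR-combined along the path and crosstalk awareness omitted:

```python
import numpy as np

def first_fit(link_matrices, demand):
    """link_matrices: boolean (cores x slots) arrays, one per link of a
    candidate path, True = slot occupied. Map them onto a single matrix
    and return the first (core, start_slot) offering `demand` contiguous
    free slots on the same core along the whole path, or None."""
    mapped = np.logical_or.reduce(link_matrices)   # the path's mapped matrix
    cores, slots = mapped.shape
    for core in range(cores):
        run = 0
        for s in range(slots):
            run = 0 if mapped[core, s] else run + 1
            if run == demand:
                return core, s - demand + 1
    return None
```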
Figure 1. Spectrum Matrix and Mapped Matrix.
Figure 2. Mapped Graph.
Figure 3. USA Topology.
Figure 4. NSF Topology.
Figure 5. Bandwidth Blocking for the USA topology.
Figure 6. Bandwidth Blocking for the NSF topology.
Figure 7. Energy Efficiency for the USA topology.
Figure 8. Energy Efficiency for the NSF topology.
Figure 9. Fragmentation Ratio for the NSF topology.
Figure 10. Fragmentation Ratio for the USA topology.
Figure 11. Crosstalk per Slot for the NSF topology.
Figure 12. Crosstalk per Slot for the USA topology.
Figure 13. Modulation Format Percentage for the USA topology.
Figure 14. Modulation Format Percentage for the NSF topology.
16 pages, 5013 KiB  
Article
Capacitive-Coupling Impedance Spectroscopy Using a Non-Sinusoidal Oscillator and Discrete-Time Fourier Transform: An Introductory Study
by Tomiharu Yamaguchi and Akinori Ueno
Sensors 2020, 20(21), 6392; https://doi.org/10.3390/s20216392 - 9 Nov 2020
Cited by 3 | Viewed by 4278
Abstract
In this study, we propose a new short-time impedance spectroscopy method with the following three features: (1) a frequency spectrum of the complex impedance of the measured object can be obtained even when the measuring electrodes are capacitively coupled with the object and the precise capacitance of the coupling is unknown; (2) the spectrum can be obtained from only one cycle of the non-sinusoidal oscillation waveform without sweeping the oscillation frequency; and (3) a front-end measuring circuit can be built simply and cheaply, without the need for a digital-to-analog (D-A) converter to synthesize elaborate waveforms comprising multiple frequencies. We built a measurement circuit using the proposed method and then measured the complex impedance spectra of 18 resistive elements connected in series with one of three respective capacitive couplings. With this method, each element's resistance and each coupling's capacitance were estimated independently and compared with their nominal values. When the coupling capacitance was set to 10 nF or 1.0 nF, the estimation errors for resistive elements in the range of 2.0–10.0 kΩ were less than 5%.
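As a rough numerical illustration of the estimation step: DFT one segment of both waveforms, evaluate the impedance at the odd harmonics (2m − 1)f0, then fit RX from the real part and CX from the imaginary part of the series-RC impedance Z = R − j/(ωC). The actual current sensing is defined by the oscillator circuit and by Equations (22) and (23) of the paper; the reference resistance `r_ref` used below to turn v1 into a current is a hypothetical stand-in.

```python
import numpy as np

def estimate_rc(v12, v1, fs, f0, r_ref, m_max=20):
    """Estimate R_X and C_X from sampled waveforms: DFT both, keep the odd
    harmonics (2m - 1)*f0, form Z = V12 / (V1 / r_ref), then fit
    Re(Z) = R_X and Im(Z) = -1 / (2*pi*f*C_X)."""
    n = len(v12)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    V12, V1 = np.fft.rfft(v12), np.fft.rfft(v1)
    harm = np.array([np.argmin(np.abs(freqs - (2 * m - 1) * f0))
                     for m in range(1, m_max + 1)])
    Z = V12[harm] / (V1[harm] / r_ref)     # impedance at odd harmonics
    w = 2 * np.pi * freqs[harm]
    R_x = np.mean(Z.real)                  # flat real part
    C_x = np.mean(-1 / (w * Z.imag))       # from the capacitive reactance
    return R_x, C_x
```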
(This article belongs to the Section Intelligent Sensors)
Figure 1. Schematic model of electrodes capacitively coupled to a conductive material via a thin insulator: (a) cross-sectional diagram; (b) equivalent circuit.
Figure 2. Circuit diagrams illustrating the measurement of the impedance shown in Figure 1: (a) with a sinusoidal voltage source; and (b) in a non-sinusoidal oscillator.
Figure 3. Example of the oscillation waveforms of v12(t) and v1(t) measured in the circuit of Figure 2b. RX and CX were set to 4.0 kΩ and 10 nF, respectively. The segment of time with a colored background corresponds to two cycles of the oscillation and was used for the subsequent discrete Fourier transform (DFT) analysis.
Figure 4. Frequency spectra of: (a) amplitude |V12|; (b) phase θV12; (c) amplitude |V1|; and (d) phase θV1 at (2m − 1)f0 (m = 1, 2, 3, ..., 20) Hz. The spectra were obtained by applying the DFT to the two-cycle segment of v12(t) and v1(t) in Figure 3. RX and CX were set to 4.0 kΩ and 10 nF, respectively.
Figure 5. Frequency spectra of: (a) absolute impedance |ZA|; (b) phase θZA; (c) absolute impedance |ZX|; and (d) phase θZX at (2m − 1)f0 (m = 1, 2, 3, ..., 20) Hz.
Figure 6. Three-dimensional perspective plots of: (a) impedance ZX; and (b) admittance YX. The three-dimensional DFT data are projected onto each plane. The solid and dashed lines are the theoretical curves of ZX and YX.
Figure 7. Frequency spectra of: (a) the real part Re(ZX); and (b) the imaginary part Im(ZX) of the impedance ZX. The DFT data were fitted to Equations (22) and (23) for determining RX and CX (dashed lines).
Figure 8. RX and CX estimated using oscillation waveforms and the DFT: (a) estimated RX; (b) absolute error of estimated RX; (c) estimated CX; (d) relative error of estimated CX.
Full article ">Figure 9
<p>Equivalent circuit modeling of <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>Z</mi> <mo>˙</mo> </mover> <mi mathvariant="normal">A</mi> </msub> </mrow> </semantics></math>: (<b>a</b>) Cole–Cole plot for <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi mathvariant="normal">X</mi> </msub> <mo>=</mo> </mrow> </semantics></math> 0.10 nF and <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi mathvariant="normal">X</mi> </msub> <mo>=</mo> </mrow> </semantics></math> 0 Ω; (<b>b</b>) equivalent circuit with a stray resistance <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mrow> <mi>AS</mi> </mrow> </msub> </mrow> </semantics></math> and stray inductance <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mi mathvariant="normal">A</mi> </msub> </mrow> </semantics></math>. The symbol <math display="inline"><semantics> <mi>n</mi> </semantics></math> represents the harmonic number of <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>Z</mi> <mo>˙</mo> </mover> <mi mathvariant="normal">A</mi> </msub> <mfenced> <mrow> <mi>n</mi> <msub> <mi>f</mi> <mn>0</mn> </msub> </mrow> </mfenced> </mrow> </semantics></math>. We used pyZwx software to fit the DFT data to the equivalent circuit model [<a href="#B59-sensors-20-06392" class="html-bibr">59</a>].</p>
Full article ">Figure 10
<p><math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi mathvariant="normal">X</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi mathvariant="normal">X</mi> </msub> </mrow> </semantics></math> estimated using optimized <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>Z</mi> <mo>˙</mo> </mover> <mi mathvariant="normal">A</mi> </msub> </mrow> </semantics></math>: (<b>a</b>) estimated <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi mathvariant="normal">X</mi> </msub> </mrow> </semantics></math>; (<b>b</b>) absolute error of estimated <math display="inline"><semantics> <mrow> <msub> <mi>R</mi> <mi mathvariant="normal">X</mi> </msub> </mrow> </semantics></math>; (<b>c</b>) relative error of estimated <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi mathvariant="normal">X</mi> </msub> </mrow> </semantics></math>.</p>
Full article ">
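The captions above describe estimating $R_X$ and $C_X$ by fitting the real and imaginary parts of the DFT-derived impedance spectrum. The sketch below illustrates such a fit in Python under the assumption of a parallel-RC model for the device under test; the model form, the fundamental frequency, and the synthetic data are assumptions for illustration, not the paper's Equations (22) and (23).

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed parallel-RC model for the device under test:
#   Z(f) = R / (1 + j*2*pi*f*R*C)
# (a stand-in for the paper's Equations (22) and (23), not shown here)
def re_z(f, r_kohm, c_nf):
    w_rc = 2 * np.pi * f * r_kohm * 1e3 * c_nf * 1e-9
    return r_kohm * 1e3 / (1 + w_rc ** 2)

def im_z(f, r_kohm, c_nf):
    w_rc = 2 * np.pi * f * r_kohm * 1e3 * c_nf * 1e-9
    return -w_rc * r_kohm * 1e3 / (1 + w_rc ** 2)

def both_parts(f2, r_kohm, c_nf):
    # Joint fit vector: real part over the first half, imaginary over the second
    n = f2.size // 2
    return np.concatenate([re_z(f2[:n], r_kohm, c_nf),
                           im_z(f2[n:], r_kohm, c_nf)])

# Odd harmonics (2m-1)*f0 at which the DFT spectra are evaluated;
# f0 here is a hypothetical fundamental frequency, not the paper's value.
f0 = 1e3
f = (2 * np.arange(1, 21) - 1) * f0

# Synthetic "measured" spectra for the nominal RX = 4.0 kOhm, CX = 10 nF
rng = np.random.default_rng(0)
z_re = re_z(f, 4.0, 10.0) * (1 + 0.01 * rng.standard_normal(f.size))
z_im = im_z(f, 4.0, 10.0) * (1 + 0.01 * rng.standard_normal(f.size))

popt, _ = curve_fit(both_parts, np.concatenate([f, f]),
                    np.concatenate([z_re, z_im]), p0=[1.0, 1.0])
print(f"estimated RX ~ {popt[0]:.2f} kOhm, CX ~ {popt[1]:.2f} nF")
```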
29 pages, 536 KiB  
Article
Enhancements and Challenges in CoAP—A Survey
by Muhammad Ashar Tariq, Murad Khan, Muhammad Toaha Raza Khan and Dongkyun Kim
Sensors 2020, 20(21), 6391; https://doi.org/10.3390/s20216391 - 9 Nov 2020
Cited by 36 | Viewed by 8452
Abstract
The Internet Engineering Task Force (IETF) developed a lightweight application protocol, the Constrained Application Protocol (CoAP), for constrained IoT devices operating in lossy environments. Based on UDP, CoAP is a lightweight and efficient protocol compared to other IoT protocols such as HTTP, MQTT, [...] Read more.
The Internet Engineering Task Force (IETF) developed a lightweight application protocol, the Constrained Application Protocol (CoAP), for constrained IoT devices operating in lossy environments. Based on UDP, CoAP is a lightweight and efficient protocol compared to other IoT protocols such as HTTP, MQTT, etc. CoAP also provides reliable communication among nodes in wireless sensor networks, in addition to features such as resource observation, resource discovery, congestion control, etc. These capabilities of CoAP have enabled its implementation in various domains ranging from home automation to health management systems. The use of CoAP has highlighted its shortcomings over time. To overcome them, numerous enhancements have been made to the basic CoAP architecture. This survey highlights the shortcomings of the basic CoAP architecture and the enhancements made to it over time. Furthermore, existing challenges and issues in the current CoAP architecture are also discussed. Finally, some applications with CoAP implementations are mentioned in order to demonstrate the viability of CoAP in real-world use cases. Full article
(This article belongs to the Special Issue Internet of Underwater Things)
Show Figures

Figure 1

Figure 1
<p>An overview of CoAP architecture.</p>
Full article ">Figure 2
<p>CoAP Message Header.</p>
Full article ">Figure 3
<p>Examples of Confirmable and Non-confirmable CoAP messages.</p>
Full article ">Figure 4
<p>CoAP Default Congestion Control.</p>
Full article ">Figure 5
<p>CoAP Resource Observation.</p>
Full article ">
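Figure 4 above illustrates CoAP's default congestion control, which the survey discusses at length. As a concrete illustration, the sketch below computes the binary exponential back-off schedule applied to an unacknowledged confirmable message, using the default transmission parameters of RFC 7252; it is a minimal sketch of the mechanism, not code from the surveyed works.

```python
import random

# Default CoAP transmission parameters (RFC 7252, Section 4.8)
ACK_TIMEOUT = 2.0        # seconds
ACK_RANDOM_FACTOR = 1.5
MAX_RETRANSMIT = 4

def retransmission_delays(rng=random.Random()):
    """Back-off delays a sender waits between (re)transmissions of a
    confirmable message that never receives an ACK."""
    # The first timeout is drawn from [ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR]
    timeout = rng.uniform(ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR)
    delays = []
    for _ in range(MAX_RETRANSMIT + 1):
        delays.append(round(timeout, 2))
        timeout *= 2     # binary exponential back-off on each retransmission
    return delays

# Five waits in total: the initial one plus MAX_RETRANSMIT retransmissions,
# after which the sender gives up and reports a timeout.
print(retransmission_delays())
```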
17 pages, 9686 KiB  
Article
Artificial Intelligence-Based Optimal Grasping Control
by Dongeon Kim, Jonghak Lee, Wan-Young Chung and Jangmyung Lee
Sensors 2020, 20(21), 6390; https://doi.org/10.3390/s20216390 - 9 Nov 2020
Cited by 18 | Viewed by 3579
Abstract
A new tactile sensing module was proposed to sense the contact force and location of an object on a robot hand; the module is attached to the robot finger. Three air pressure sensors are installed at the tip of the finger to detect the [...] Read more.
A new tactile sensing module was proposed to sense the contact force and location of an object on a robot hand; the module is attached to the robot finger. Three air pressure sensors are installed at the tip of the finger to detect the contacting force at the points. To obtain a nominal contact force at the finger from the data of the three air pressure sensors, a force estimator was developed based on a deep neural network. The data from the three air pressure sensors were utilized as inputs to estimate the contact force at the finger. In the tactile module, the arrival time of the air pressure sensor data was utilized to recognize the contact point of the robot finger against an object. Using the three air pressure sensors and the arrival time, the finger location can be divided into 3 × 3 block locations. The resolution of the contact point recognition was improved to 6 × 4 block locations on the finger using an artificial neural network. The accuracy and effectiveness of the tactile module were verified using real grasping experiments. With this stable grasping, an optimal grasping force was estimated empirically with fuzzy rules for a given object. Full article
(This article belongs to the Special Issue Smart Sensors for Robotic Systems)
Show Figures

Figure 1

Figure 1
<p>Configuration of tactile sensing module: (<b>a</b>) silicone base of the module, (<b>b</b>) air pressure sensor, (<b>c</b>) robot hand, (<b>d</b>) tactile sensing module, (<b>e</b>) module applied to robot hand.</p>
Full article ">Figure 2
<p>Configuration of finger skin.</p>
Full article ">Figure 3
<p>Object weight detection experiment.</p>
Full article ">Figure 4
<p>Weight sensing training with deep neural network.</p>
Full article ">Figure 5
<p>Parameters of arrival of time (AoT) algorithm.</p>
Full article ">Figure 6
<p>Artificial neural network for contact area training.</p>
Full article ">Figure 7
<p>Range setting according to contact area: (<b>a</b>) size 1, (<b>b</b>) size 4–1, (<b>c</b>) size 9, (<b>d</b>) size 16, (<b>e</b>) size 12, (<b>f</b>) size 4–2.</p>
Full article ">Figure 8
<p>Structure for conversion of contact.</p>
Full article ">Figure 9
<p>Experiment measured by converting contact area and force.</p>
Full article ">Figure 10
<p>Results of contact area prediction through MLP training.</p>
Full article ">Figure 11
<p>Sensing expression according to touch point of tactile sensing module.</p>
Full article ">Figure 12
<p>Optimal grasping controller.</p>
Full article ">Figure 13
<p>Fuzzy control system.</p>
Full article ">Figure 14
<p>Fuzzy control system.</p>
Full article ">Figure 15
<p>Contact pressure error membership functions (MFs).</p>
Full article ">Figure 16
<p>Contact pressure derivative MFs.</p>
Full article ">Figure 17
<p>Optimal torque MFs.</p>
Full article ">Figure 18
<p>Surface of fuzzy controller.</p>
Full article ">Figure 19
<p>Control system configuration.</p>
Full article ">Figure 20
<p>Robot hand control module.</p>
Full article ">Figure 21
<p>Bottom layer of Robot hand control module.</p>
Full article ">Figure 22
<p>Grasping of various objects (<b>a</b>–<b>f</b>).</p>
Full article ">Figure 23
<p>Current value of cylinder grasping: (<b>a</b>) torque min, (<b>b</b>) torque max.</p>
Full article ">Figure 24
<p>Grasp angle of the fuzzy proportional-integral-derivative (PID) controller.</p>
Full article ">
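The abstract describes estimating a nominal contact force from three air-pressure readings with a deep neural network. The sketch below shows a minimal version of that mapping in PyTorch; the layer sizes, optimizer, and the synthetic training data are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Minimal sketch of the force-estimation idea: map the three air-pressure
# readings to a single contact-force value with a small feed-forward net.
# Layer sizes and training details are assumptions, not the paper's values.
class ForceEstimator(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, pressures):          # pressures: (batch, 3)
        return self.net(pressures)         # estimated force: (batch, 1)

model = ForceEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One hypothetical training step on synthetic data
x = torch.rand(64, 3)                      # three pressure-sensor readings
y = x.sum(dim=1, keepdim=True)             # placeholder ground-truth force
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```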
16 pages, 1677 KiB  
Article
Improving the Accuracy of Low-Cost Sensor Measurements for Freezer Automation
by Kyriakos Koritsoglou, Vasileios Christou, Georgios Ntritsos, Georgios Tsoumanis, Markos G. Tsipouras, Nikolaos Giannakeas and Alexandros T. Tzallas
Sensors 2020, 20(21), 6389; https://doi.org/10.3390/s20216389 - 9 Nov 2020
Cited by 17 | Viewed by 4091
Abstract
In this work, a regression method is implemented on a low-cost digital temperature sensor to improve the sensor’s accuracy, thus following the EN12830 European standard. This standard defines that the maximum acceptable error of temperature monitoring devices should not exceed 1 °C for [...] Read more.
In this work, a regression method is implemented on a low-cost digital temperature sensor to improve the sensor’s accuracy, thus following the EN12830 European standard. This standard defines that the maximum acceptable error of temperature monitoring devices should not exceed 1 °C for the refrigeration and freezer areas. The purpose of the proposed method is to improve the accuracy of a low-cost digital temperature sensor by correcting its nonlinear response using simple linear regression (SLR). In the experimental part of this study, the proposed method’s outcome (on a custom-created dataset containing values taken from a refrigerator) is compared against the values taken from a sensor complying with the EN12830 standard. The experimental results confirmed that the proposed method reduced the mean absolute error (MAE) by 82% for the refrigeration area and 69% for the freezer area, resulting in improved accuracy for the low-cost digital temperature sensor. Moreover, it achieved a lower generalization error on the test set when compared to three other machine learning algorithms (SVM, B-ELM, and OS-ELM). Full article
(This article belongs to the Special Issue Human-Robot Interaction Applications in Internet of Things (IoT) Era)
Show Figures

Figure 1

Figure 1
<p>The SLR-DS18B20 System Architecture. This diagram depicts the proposed system’s architecture. The Raspberry Pi Zero W (Device <b>1</b>) is connected to the series of DS18B20 sensors (the sensors are given the number <b>2</b>) using pins 7, 17, and 25. Pin 7 is responsible for the communication, pin 17 provides 3.3 voltage to the device, while pin 25 is responsible for grounding the circuit. Then, these sensors are inserted into a commercial fridge (device <b>3</b>).</p>
Full article ">Figure 2
<p>Submerge procedure of the reference sensor and the sensor that is going to be calibrated. Initially, we gently stir and submerge the reference sensor, as seen in the left image. Then, we stir and submerge the sensor that is going to be calibrated (right image).</p>
Full article ">Figure 3
<p>The AE regarding sampled values for both temperature areas. The AE for each sampled value is depicted with blue dots. The <span class="html-italic">x</span>-axis shows the temperatures, while on the <span class="html-italic">y</span>-axis, the AE calculated using Formula (7). The left diagram shows the 1st temperature zone’s measurement results where the AE ranged from 0 °C to 2.07 °C. The right graph shows the 2nd temperature zone’s measurement results, where the AE went from 0 °C to 2.07 °C.</p>
Full article ">Figure 4
<p>The AE regarding predicted values for both temperature areas. The AE for each sampled value is depicted with blue dots. The <span class="html-italic">x</span>-axis shows the temperatures, while on the <span class="html-italic">y</span>-axis, the AE, which was calculated using Formula (7). The left diagram shows the 1st temperature zone’s measurement results where the AE ranged from 0 °C to 0.75 °C. The right diagram shows the 2nd temperature zone’s measurement results, where the AE went from 0 °C to 0.4 °C.</p>
Full article ">Figure 5
<p>Comparison between actual and predicted values for the 1st temperature zone. This diagram compares the $\bar{R}^{2}$ for the actual and predicted values. The 95% CI for each category can be seen with a red vertical line. Each CI’s limits are depicted with magenta (actual values) and blue (predicted values) dashed lines. The two CIs do not overlap, which is a strong indication that the error reduction achieved after the application of linear regression is statistically significant.</p>
Full article ">Figure 6
<p>Comparison between actual and predicted values for the 2nd temperature zone. This diagram compares the $\bar{R}^{2}$ for the actual and predicted values. The 95% CI for each category can be seen with a red vertical line. The limits for each CI are depicted with magenta (actual values) and blue (predicted values) dashed lines. The two CIs do not overlap, which is a strong indication that the error reduction achieved after the application of linear regression is statistically significant.</p>
Full article ">
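A minimal sketch of the paper's core idea: fit a simple linear regression from the low-cost sensor's readings to a reference sensor and compare the mean absolute error before and after correction. The data below are synthetic stand-ins, not the study's refrigerator dataset.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in data: reference-sensor temperatures and the low-cost
# sensor's slightly biased, noisy readings (values are illustrative).
rng = np.random.default_rng(1)
t_ref = np.linspace(-25, 10, 200)                     # reference sensor, degC
t_raw = 1.06 * t_ref + 0.8 + 0.15 * rng.standard_normal(t_ref.size)

# Simple linear regression: correct the raw reading toward the reference
model = LinearRegression().fit(t_raw.reshape(-1, 1), t_ref)
t_cal = model.predict(t_raw.reshape(-1, 1))

print(f"MAE before calibration: {mean_absolute_error(t_ref, t_raw):.3f} degC")
print(f"MAE after calibration:  {mean_absolute_error(t_ref, t_cal):.3f} degC")
```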
12 pages, 1966 KiB  
Article
A Random Forest Machine Learning Framework to Reduce Running Injuries in Young Triathletes
by Javier Martínez-Gramage, Juan Pardo Albiach, Iván Nacher Moltó, Juan José Amer-Cuenca, Vanessa Huesa Moreno and Eva Segura-Ortí
Sensors 2020, 20(21), 6388; https://doi.org/10.3390/s20216388 - 9 Nov 2020
Cited by 10 | Viewed by 8413
Abstract
Background: The running segment of a triathlon produces 70% of the lower limb injuries. Previous research has shown a clear association between kinematic patterns and specific injuries during running. Methods: After completing a seven-month gait retraining program, a questionnaire was used to assess [...] Read more.
Background: The running segment of a triathlon produces 70% of the lower limb injuries. Previous research has shown a clear association between kinematic patterns and specific injuries during running. Methods: After completing a seven-month gait retraining program, a questionnaire was used to assess 19 triathletes for the incidence of injuries. They were also biomechanically analyzed at the beginning and end of the program while running at a speed of 90% of their maximum aerobic speed (MAS) using surface sensor dynamic electromyography and kinematic analysis. We used classification tree (random forest) techniques from the field of artificial intelligence to identify linear and non-linear relationships between different biomechanical patterns and injuries to identify which styles best prevent injuries. Results: Fewer injuries occurred after completing the program, with athletes showing less pelvic fall and greater activation in gluteus medius during the first phase of the float phase, with increased trunk extension, knee flexion, and decreased ankle dorsiflexion during the initial contact with the ground. Conclusions: The triathletes who had suffered the most injuries ran with increased pelvic drop and less activation in gluteus medius during the first phase of the float phase. Contralateral pelvic drop seems to be an important variable in the incidence of injuries in young triathletes. Full article
(This article belongs to the Special Issue Wearable Sensors & Gait)
Show Figures

Figure 1

Figure 1
<p>Visual real-time biofeedback during the retraining protocol.</p>
Full article ">Figure 2
<p>Importance of the variables, scaled according to the “varImp” method in the caret R library for the complete data set.</p>
Full article ">Figure 3
<p>Density plots showing the differences between pre- and post-values obtained before and after the retraining phase. Higher density values on the ordinate axis point out which are the most probable values on the abscissa axis. The difference in pelvic obliquity in the right and left limb (<b>A</b>), ankle dorsiflexion in the initial contact (<b>B</b>), contralateral pelvic drop (<b>C</b>,<b>D</b>), and gluteus medius activation during the first phase of flight (<b>E</b>,<b>F</b>).</p>
Full article ">Figure 4
<p>Density plots comparing the differences between injured and non-injured triathletes in terms of the degree of pelvic obliquity between the (<b>A</b>) pre-retraining; (<b>B</b>) and post-retraining phase values. Higher density values on the ordinate axis point out which are the most probable values on the abscissa axis.</p>
Full article ">Figure 5
<p>Gluteus medius (right) sEMG plot pre and post.</p>
Full article ">Figure 6
<p>Pelvis kinematics before and after retraining protocol.</p>
Full article ">
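The study used classification-tree (random forest) techniques and ranked variable importance (Figure 2) with the caret R library; the sketch below shows the analogous workflow in Python with scikit-learn. The feature names are taken from the abstract, while the data and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative feature names drawn from the abstract; data are synthetic.
features = ["pelvic_drop", "gluteus_medius_activation",
            "trunk_extension", "knee_flexion", "ankle_dorsiflexion"]
rng = np.random.default_rng(2)
X = rng.normal(size=(19, len(features)))       # 19 athletes, as in the study
# Hypothetical injury labels correlated with pelvic drop
y = (X[:, 0] + 0.3 * rng.normal(size=19) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
for name, imp in sorted(zip(features, clf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:28s} {imp:.3f}")
```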
18 pages, 1887 KiB  
Article
LiDAR Point Cloud Recognition and Visualization with Deep Learning for Overhead Contact Inspection
by Xiaohan Tu, Cheng Xu, Siping Liu, Shuai Lin, Lipei Chen, Guoqi Xie and Renfa Li
Sensors 2020, 20(21), 6387; https://doi.org/10.3390/s20216387 - 9 Nov 2020
Cited by 20 | Viewed by 4171
Abstract
As overhead contact (OC) is an essential part of power supply systems in high-speed railways, it is necessary to regularly inspect and repair abnormal OC components. Relative to manual inspection, applying LiDAR (light detection and ranging) to OC inspection can improve efficiency, accuracy, [...] Read more.
As overhead contact (OC) is an essential part of power supply systems in high-speed railways, it is necessary to regularly inspect and repair abnormal OC components. Relative to manual inspection, applying LiDAR (light detection and ranging) to OC inspection can improve efficiency, accuracy, and safety, but it faces challenges in efficiently and effectively segmenting LiDAR point cloud data and identifying catenary components. Recent deep learning-based recognition methods are rarely employed to recognize OC components, because they have high computational complexity, while their accuracy needs to be improved. To tackle these problems, we first propose a lightweight model, RobotNet, with depthwise and pointwise convolutions and an attention module to recognize the point cloud. Second, we optimize RobotNet to accelerate its recognition speed on embedded devices using an existing compilation tool. Third, we design software to facilitate the visualization of point cloud data. Our software can not only display a large amount of point cloud data, but also visualize the details of OC components. Extensive experiments demonstrate that RobotNet recognizes OC components more accurately and efficiently than other methods. The inference speed of the optimized RobotNet increases by an order of magnitude. RobotNet also has lower computational complexity than the models in other studies. The visualization results further show that our recognition method is effective. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Overhead catenary components.</p>
Full article ">Figure 2
<p>The proposed RobotNet model.</p>
Full article ">Figure 3
<p>The proposed attention module.</p>
Full article ">Figure 4
<p>TVM tuning through the RPC (remote procedure call) tracker.</p>
Full article ">Figure 5
<p>Intelligent inspection robots.</p>
Full article ">Figure 6
<p>Constructing visualization software for point cloud recognition.</p>
Full article ">Figure 7
<p>LiDAR scanning angle and point cloud recognition.</p>
Full article ">Figure 8
<p>Comparison of the number of MACs (multiply-and-accumulate operations) and parameters.</p>
Full article ">Figure 9
<p>Runtime comparison.</p>
Full article ">Figure 10
<p>Inference runtime of our model through tuning.</p>
Full article ">Figure 11
<p>(<b>a</b>) Comparison of the visualized results; (<b>b</b>) Comparison of the visualized results.</p>
Figure 11">
Full article ">
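RobotNet's backbone is described as built from depthwise and pointwise convolutions. The block below is a generic depthwise-separable convolution in PyTorch of the kind that wording implies; the channel sizes, normalization, and 2D layout are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Sketch of a depthwise-separable convolution block of the kind the
# abstract attributes to RobotNet; channel sizes are assumptions.
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes channels cheaply
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(64, 128)
x = torch.randn(1, 64, 32, 32)    # e.g., a projected point-cloud feature map
print(block(x).shape)             # torch.Size([1, 128, 32, 32])
```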
19 pages, 2813 KiB  
Article
Attention-Deficit/Hyperactivity Disorder (ADHD): Integrating the MOXO-dCPT with an Eye Tracker Enhances Diagnostic Precision
by Tomer Elbaum, Yoram Braw, Astar Lev and Yuri Rassovsky
Sensors 2020, 20(21), 6386; https://doi.org/10.3390/s20216386 - 9 Nov 2020
Cited by 11 | Viewed by 7051
Abstract
Clinical decision-making may be enhanced when combining psychophysiological sensors with computerized neuropsychological tests. The current study explored the utility of integrating an eye tracker with a commercially available continuous performance test (CPT), the MOXO-dCPT. As part of the study, the performance of adult [...] Read more.
Clinical decision-making may be enhanced when combining psychophysiological sensors with computerized neuropsychological tests. The current study explored the utility of integrating an eye tracker with a commercially available continuous performance test (CPT), the MOXO-dCPT. As part of the study, the performance of adult attention-deficit/hyperactivity disorder (ADHD) patients and healthy controls (n = 43, n = 42, respectively) was compared in the integrated system. More specifically, the MOXO-dCPT has four stages, which differ in their combinations of ecological visual and auditory dynamic distractors. By exploring the participants’ performance in each of the stages, we were able to show that: (a) ADHD patients spend significantly more time gazing at irrelevant areas of interest (AOIs) compared to healthy controls; (b) visual distractors are particularly effective in impacting ADHD patients’ eye movements, suggesting their enhanced utility in diagnostic procedures; (c) combining gaze direction data and conventional CPT indices enhances group prediction, compared to the sole use of conventional indices. Overall, the findings indicate the utility of eye tracker-integrated CPTs and their enhanced diagnostic precision. They also suggest that the use of attention-grabbing visual distractors may be a promising path for the evolution of existing CPTs by shortening their duration and enhancing diagnostic precision. Full article
(This article belongs to the Section Biomedical Sensors)
Show Figures

Figure 1

Figure 1
<p>Participant field of view (FOV) while performing the MOXO-dCPT, divided into distractibility (gridded) and target areas of interest (AOIs).</p>
Full article ">Figure 2
<p>Receiver operating characteristic (ROC) curves for the eye movement distractibility scale and the MOXO-dCPT indices: (<b>a</b>) All stages; (<b>b</b>) visual distractors stage (eye movement distractibility scale: AUC = 0.78, attention: AUC = 0.51, timeliness index: AUC = 0.63, hyperactivity: AUC = 0.59, impulsivity: AUC = 0.60). Additional information regarding the visual distractors stage is presented later (see “<a href="#sec2dot4dot3-sensors-20-06386" class="html-sec">Section 2.4.3</a>. Exploratory Analysis”).</p>
Full article ">Figure 3
<p>Eye movement distractibility line plot, of the four distractor-type stages. Error bars represent standard error (SE) of the repeated-measures ANOVA analysis (see <a href="#sensors-20-06386-t004" class="html-table">Table 4</a>).</p>
Full article ">Figure 4
<p>Heat maps of the eye distractibility scale, divided according to group (ADHD/controls) and MOXO-dCPT stages (visual/auditory).</p>
Full article ">Figure 5
<p>Timeline plot comparing the eye movement distractibility scale of ADHD patients and controls. The four distractor stages (none, visual, auditory, and combined) are presented at the top section of the chart, subdivided into blocks one to eight. Each stage includes two consecutive blocks, except for the no-distractors stage (blocks 1 and 8).</p>
Full article ">
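The reported gain comes from combining gaze-direction data with conventional CPT indices for group prediction, evaluated with ROC curves (Figure 2). The sketch below mirrors that comparison with scikit-learn on synthetic data; the feature construction and effect sizes are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins: one eye-movement distractibility score and four
# conventional CPT indices per participant (names follow the abstract).
rng = np.random.default_rng(3)
n = 85                                  # 43 patients + 42 controls
y = np.r_[np.ones(43), np.zeros(42)]    # 1 = ADHD, 0 = control
eye = y * 0.8 + rng.normal(size=n)      # gaze time on irrelevant AOIs
cpt = rng.normal(size=(n, 4)) + y[:, None] * 0.2   # attention, timeliness, ...

auc_cpt = roc_auc_score(y, LogisticRegression().fit(cpt, y)
                        .predict_proba(cpt)[:, 1])
combined = np.column_stack([eye, cpt])
auc_all = roc_auc_score(y, LogisticRegression().fit(combined, y)
                        .predict_proba(combined)[:, 1])
print(f"AUC, CPT indices only: {auc_cpt:.2f}")
print(f"AUC, indices + eye tracking: {auc_all:.2f}")
```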
18 pages, 5125 KiB  
Article
A New Method of Secure Authentication Based on Electromagnetic Signatures of Chipless RFID Tags and Machine Learning Approaches
by Dragoș Nastasiu, Răzvan Scripcaru, Angela Digulescu, Cornel Ioana, Raymundo De Amorim, Jr., Nicolas Barbot, Romain Siragusa, Etienne Perret and Florin Popescu
Sensors 2020, 20(21), 6385; https://doi.org/10.3390/s20216385 - 9 Nov 2020
Cited by 14 | Viewed by 3896
Abstract
In this study, we present the implementation of a neural network model capable of classifying radio frequency identification (RFID) tags based on their electromagnetic (EM) signature for authentication applications. One important application of chipless RFID addresses the counterfeiting threat for manufacturers. The [...] Read more.
In this study, we present the implementation of a neural network model capable of classifying radio frequency identification (RFID) tags based on their electromagnetic (EM) signature for authentication applications. One important application of chipless RFID addresses the counterfeiting threat for manufacturers. The goal is to design and implement chipless RFID tags that possess a unique and unclonable fingerprint to authenticate objects. As EM characteristics are employed, these fingerprints cannot be easily spoofed. A set of 18 tags operating in V band (65–72 GHz) was designed and measured. V band is more sensitive to dimensional variations than the lower-frequency bands used in other applications, making it suitable for highlighting the differences between the EM signatures. Machine learning (ML) approaches are used to characterize and classify the 18 EM responses in order to validate the authentication method. The proposed supervised method reached a maximum recognition rate of 100%, surpassing most RFID fingerprinting-related work in terms of accuracy. To determine the best network configuration, we used a random search algorithm. Further tuning was conducted by comparing the results of different learning algorithms in terms of accuracy and loss. Full article
(This article belongs to the Special Issue Intelligent and Adaptive Security in Internet of Things)
Show Figures

Figure 1

Figure 1
<p>Basic chipless radio frequency identification (RFID) system. Each tag has a different electromagnetic (EM) characteristic related to manufacturing randomness.</p>
Full article ">Figure 2
<p>Dimensional inhomogeneities caused by a manufacturing process.</p>
Full article ">Figure 3
<p>E-shaped chipless resonator designed to different bands (V-band and X-band), all dimensions are in millimeter.</p>
Full article ">Figure 4
<p>E-Shaped X-band backscattered EM field, (<b>a</b>) S<sub>21</sub> magnitude and (<b>b</b>) phase, considering the geometrical variations (17.5 μm and 35 μm).</p>
Full article ">Figure 5
<p>E-Shaped V-band backscattered EM field, (<b>a</b>) S<sub>21</sub> magnitude and (<b>b</b>) phase, where E<sub>V</sub> represents the backscattered signal from the V-band E-shaped resonator.</p>
Full article ">Figure 6
<p>Fabricated chipless tags sharing the same substrate (all dimensions are in millimeters).</p>
Full article ">Figure 7
<p>Simulated radar cross section (RCS) versus frequency of different group resonators, where E<sub>a×b</sub> concerns line <span class="html-italic">a</span> and column <span class="html-italic">b</span> of the tag, respectively.</p>
Full article ">Figure 8
<p>Setup for V-band measurements in office environment.</p>
Full article ">Figure 9
<p>Neural network with two dense layers with ReLU activations, one dropout layer, and SoftMax loss function.</p>
Full article ">Figure 10
<p>The result of extending and normalizing EM signatures. Each color is representative for a tag: (<b>a</b>) initial 180 EM signatures measured for 18 different tags; (<b>b</b>) The extended database with noised and normalized EM signatures.</p>
Full article ">Figure 11
<p>Metrics used in the evaluation of the neural network: (<b>a</b>) training and validation accuracy; (<b>b</b>) training and validation loss.</p>
Full article ">Figure 12
<p>Confusion matrix for a separate test dataset.</p>
Full article ">Figure 13
<p>Comparison between Stochastic Gradient Descent (SGD), Adagrad, RMSprop, and Adam: (<b>a</b>) validation accuracy and (<b>b</b>) validation loss.</p>
Full article ">Figure A1
<p>Confusion matrix described in abstract terms.</p>
Full article ">
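Figure 9 describes the classifier as two dense layers with ReLU activations, a dropout layer, and a softmax output over the 18 tag classes. A minimal PyTorch version of such a network is sketched below; the layer widths, dropout rate, input length, and training data are assumptions, not the tuned configuration found by the authors' random search.

```python
import torch
import torch.nn as nn

# Sketch of the classifier in Figure 9: two dense layers with ReLU,
# dropout, and a softmax over the 18 tag classes. Layer widths and the
# input length (frequency bins of the EM signature) are assumptions.
num_tags, num_bins = 18, 256
model = nn.Sequential(
    nn.Linear(num_bins, 128), nn.ReLU(),
    nn.Dropout(0.3),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, num_tags),   # logits; softmax is folded into the loss
)
loss_fn = nn.CrossEntropyLoss()    # applies log-softmax internally
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical normalized EM signatures: 10 measurements per tag
X = torch.rand(180, num_bins)
y = torch.arange(num_tags).repeat_interleave(10)

for _ in range(20):                # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.3f}")
```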
22 pages, 6390 KiB  
Article
Cooperative UAV–UGV Autonomous Power Pylon Inspection: An Investigation of Cooperative Outdoor Vehicle Positioning Architecture
by Alvaro Cantieri, Matheus Ferraz, Guido Szekir, Marco Antônio Teixeira, José Lima, André Schneider Oliveira and Marco Aurélio Wehrmeister
Sensors 2020, 20(21), 6384; https://doi.org/10.3390/s20216384 - 9 Nov 2020
Cited by 32 | Viewed by 5561
Abstract
Realizing autonomous inspection, such as that of power distribution lines, through unmanned aerial vehicle (UAV) systems is a key research domain in robotics. In particular, the use of autonomous and semi-autonomous vehicles to execute the tasks of an inspection process can enhance the [...] Read more.
Realizing autonomous inspection, such as that of power distribution lines, through unmanned aerial vehicle (UAV) systems is a key research domain in robotics. In particular, the use of autonomous and semi-autonomous vehicles to execute the tasks of an inspection process can enhance the efficacy and safety of the operation; however, many technical problems, such as those pertaining to the precise positioning and path following of the vehicles, robust obstacle detection, and intelligent control, must be addressed. In this study, an innovative architecture involving an unmanned aerial vehicle (UAV) and an unmanned ground vehicle (UGV) was examined for detailed inspections of power lines. In the proposed strategy, each vehicle provides its position information to the other, which ensures a safe inspection process. The results of real-world experiments indicate a satisfactory performance, thereby demonstrating the feasibility of the proposed approach. Full article
(This article belongs to the Special Issue UAV-Based Smart Sensor Systems and Applications)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Overview of the architecture.</p>
Full article ">Figure 2
<p>Architecture components. (<b>a</b>) Inspection site: base station, RTK-GPS, and UAV. (<b>b</b>) Detail of the Bebop drone with RTK-GPS module and Pioneer P3 with the augmented reality tag (AR-Tag).</p>
Full article ">Figure 3
<p>Autonomous flight limits for safe operation.</p>
Full article ">Figure 4
<p>Reference corrections in the x–y plane.</p>
Full article ">Figure 5
<p>Absolute horizontal accuracy estimation for different camera resolution.</p>
Full article ">Figure 6
<p>Horizontal orientation accuracy estimation for different camera resolutions.</p>
Full article ">Figure 7
<p>Height accuracy estimation for different camera resolutions.</p>
Full article ">Figure 8
<p>Snapshots of a flight round of the Bebop drone capturing the AR-Tag positioning. (<b>1</b>) Image frame transmitted with error, causing loss of accuracy; (<b>2</b>) Regular operation; (<b>3</b>) Tag near the detection limit; (<b>4</b>) Partial shadowing of the tag, causing loss of accuracy.</p>
Full article ">Figure 9
<p>UAV flight path comparing the AR-Tag error with the Bebop odometry error.</p>
Full article ">Figure 10
<p>Absolute horizontal error for the AR-Tag and Bebop odometry with the mean and standard deviation values. (<b>a</b>) AR-Tag absolute error statistics; (<b>b</b>) Bebop odometry error statistics.</p>
Full article ">Figure 11
<p>Horizontal orientation estimation. (<b>a</b>) AR-Tag vs. Bebop odometry orientation. (<b>b</b>) Absolute Bebop vs. AR-Tag orientation error.</p>
Full article ">Figure 12
<p>Height evaluation. (<b>a</b>) Bebop odometry height error; (<b>b</b>) AR-Tag height error; (<b>c</b>) height data from the AR-Tag, Bebop, and LIDAR-Lite compared.</p>
Full article ">Figure 13
<p>Position error of the unmanned ground vehicle (UGV) displacements: Position error of indoor experiment.</p>
Full article ">Figure 14
<p>Snapshots of the UGV displacement control using AR-Tag position data at an outdoor site. (<b>1</b>) Start of the UGV displacement; (<b>2</b>)–(<b>7</b>) The UGV follows the UAV; (<b>8</b>) The UGV reaches the position below the UAV.</p>
Full article ">Figure 15
<p>Position error of the UGV displacements: position error of the outdoor experiment.</p>
Full article ">
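The architecture derives the relative position between the vehicles from an AR-Tag seen by the onboard camera. The sketch below shows the same idea with OpenCV's aruco module as a stand-in for the paper's AR-Tag pipeline; the camera intrinsics, marker size, and file name are hypothetical, and the exact aruco API varies across OpenCV versions (the detector class below requires OpenCV 4.7 or later).

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics and marker geometry (not the paper's values)
camera_matrix = np.array([[920.0, 0, 640], [0, 920.0, 360], [0, 0, 1]])
dist_coeffs = np.zeros(5)          # assume an undistorted camera
marker_side = 0.20                 # marker edge length in meters (assumed)

# 3D corners of the marker in its own frame (z = 0 plane), clockwise
half = marker_side / 2
object_points = np.array([[-half,  half, 0], [ half,  half, 0],
                          [ half, -half, 0], [-half, -half, 0]])

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

frame = cv2.imread("uav_camera_frame.png")   # hypothetical camera image
corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    # Solve the tag pose from its four image corners
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  corners[0].reshape(4, 2).astype(np.float64),
                                  camera_matrix, dist_coeffs)
    print("tag position in the camera frame [m]:", tvec.ravel())
```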
21 pages, 713 KiB  
Article
Machine Learning Improvements to Human Motion Tracking with IMUs
by Pedro Manuel Santos Ribeiro, Ana Clara Matos, Pedro Henrique Santos and Jaime S. Cardoso
Sensors 2020, 20(21), 6383; https://doi.org/10.3390/s20216383 - 9 Nov 2020
Cited by 19 | Viewed by 8465
Abstract
Inertial Measurement Units (IMUs) have become a popular solution for tracking human motion. The main problem of using IMU data for deriving the position of different body segments throughout time is related to the accumulation of errors in the inertial data. The [...] Read more.
Inertial Measurement Units (IMUs) have become a popular solution for tracking human motion. The main problem of using IMU data for deriving the position of different body segments throughout time is related to the accumulation of errors in the inertial data. A solution to this problem is necessary to improve the use of IMUs for position tracking. In this work, we present several Machine Learning (ML) methods to improve the position tracking of various body segments when performing different movements. Firstly, classifiers were used to identify the periods in which the IMUs were stopped (zero-velocity detection). The Random Forest and Support Vector Machine (SVM) models and neural networks based on Long Short-Term Memory (LSTM) layers were capable of identifying those periods independently of the motion and body segment, with substantially higher performance than the traditional fixed-threshold zero-velocity detectors. Afterwards, these techniques were combined with ML regression models based on LSTMs capable of estimating the displacement of the sensors during periods of movement. These models did not show significant improvements when compared with the more straightforward double integration of the linear acceleration data with drift removal for translational motion estimation. Finally, we present a model based on LSTMs that simultaneously combines zero-velocity detection with estimation of the sensors’ translational motion. This model revealed a lower average error for position tracking than the combination of the previously described methodologies. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Xsens Avatar with indication of the six body placements in which both inertial data and ground truth 3D position are provided, extracted from [<a href="#B33-sensors-20-06383" class="html-bibr">33</a>].</p>
Full article ">Figure 2
<p>Train and test split (RSHO, Right Shoulder; LSHO, Left Shoulder; RUPA, Right Upper Arm; LUPA, Left Upper Arm; RTOE, Right Toe; LTOE, Left Toe).</p>
Full article ">Figure 3
<p>“Smart” double integration by a two-layer LSTM (Acceleration[i], linear acceleration at instant i; ΔVelocity[i], velocity change at instant i; ΔPosition, displacement at instant i; LSTM, Long Short-Term Memory network layer).</p>
Full article ">Figure 4
<p>Representation of the warm-start with “integrative” weights methodology—example for the x-axis.</p>
Full article ">Figure 5
<p>Representation of the warm-start with “pre-trained” weights methodology—example for the x-axis.</p>
Full article ">Figure 6
<p>Neural network architecture with no type of warm start for three-axis displacement estimate.</p>
Full article ">Figure 7
<p>Stacked neural network for simultaneous classification and regression.</p>
Full article ">Figure 8
<p>Accuracy of the different used classifiers (SHOE, Stance Hypothesis Optimal Estimation; ARED, Angular Rate Energy; Log-R, Logistic Regression; SVM, Support Vector Machine; RF, Random Forest; Dense, Densely Connected Neural Network; LSTM, Single Layer LSTM; CNN + LSTM, Convolutional Layers + LSTM Layers).</p>
Full article ">Figure 9
<p>Example of a trial classification using the Random Forest classifier with and without median filtering (blue background, true positives (correctly identified stopped periods); gray background, true negatives (correctly identified periods of movement), purple background, false positives (wrongly labeled as stopped periods); red background, false negatives (wrongly labeled as moving periods)).</p>
Full article ">
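The baseline the paper's regression models are compared against is double integration of linear acceleration with drift handling, gated by zero-velocity detection. The sketch below shows that baseline on a synthetic one-dimensional trace; the sampling rate, bias value, and the perfect detector output are assumptions for illustration.

```python
import numpy as np

# Minimal zero-velocity-aided double integration: integrate acceleration
# to velocity, clamp velocity to zero where a detector flags a stationary
# period, then integrate to position. The detector output is synthetic;
# the paper compares ML detectors (RF, SVM, LSTM) with fixed-threshold
# ones such as SHOE and ARED.
dt = 0.01                                  # 100 Hz sampling (assumed)
t = np.arange(0, 4, dt)
acc_true = np.where((t > 1) & (t <= 1.5), 0.5,
                    np.where((t > 1.5) & (t <= 2), -0.5, 0.0))
acc_meas = acc_true + 0.02                 # constant accelerometer bias
stationary = (t <= 1) | (t > 2)            # stand-in detector output

vel = np.cumsum(acc_meas) * dt
vel[stationary] = 0.0                      # zero-velocity update (ZUPT)
pos = np.cumsum(vel) * dt

pos_naive = np.cumsum(np.cumsum(acc_meas) * dt) * dt
print(f"ZUPT-aided displacement:  {pos[-1]:.3f} m")
print(f"naive double integration: {pos_naive[-1]:.3f} m")
```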
21 pages, 12075 KiB  
Communication
Damage Proxy Map of the Beirut Explosion on 4th of August 2020 as Observed from the Copernicus Sensors
by Athos Agapiou
Sensors 2020, 20(21), 6382; https://doi.org/10.3390/s20216382 - 9 Nov 2020
Cited by 15 | Viewed by 5978
Abstract
On the 4th of August 2020, a massive explosion occurred in the harbor area of Beirut, Lebanon, killing more than 100 people and damaging numerous buildings in its proximity. The current article aims to showcase how open access and freely distributed satellite data, [...] Read more.
On the 4th of August 2020, a massive explosion occurred in the harbor area of Beirut, Lebanon, killing more than 100 people and damaging numerous buildings in its proximity. The current article aims to showcase how open access and freely distributed satellite data, such as those of the Copernicus radar and optical sensors, can deliver a damage proxy map of this devastating event. Sentinel-1 radar images acquired just prior to (the 24th of July 2020) and after the event (5th of August 2020) were processed and analyzed, indicating areas with significant changes of the VV (vertical transmit, vertical receive) and VH (vertical transmit, horizontal receive) backscattering signal. In addition, an Interferometric Synthetic Aperture Radar (InSAR) analysis was performed for both descending (31st of July 2020 and 6th of August 2020) and ascending (29th of July 2020 and 10th of August 2020) orbits of Sentinel-1 images, indicating relatively small ground displacements in the area near the harbor. Moreover, low coherence for these images is mapped around the blast zone. The current study uses the Hybrid Pluggable Processing Pipeline (HyP3) cloud-based system provided by the Alaska Satellite Facility (ASF) for the processing of the radar datasets. In addition, medium-resolution Sentinel-2 optical data were used to support thorough visual inspection and Principal Component Analysis (PCA) of the damage in the area. While the overall findings are well aligned with other official reports found on the World Wide Web, which were mainly delivered by international space agencies, those reports were generated after the processing of either optical or radar datasets. In contrast, the current communication showcases how both optical and radar satellite data can be used in parallel to map such devastating events. The use of open access and freely distributed Sentinel mission data was found very promising for delivering damage proxy maps after devastating events worldwide. Full article
Show Figures

Figure 1

Figure 1
<p>Top: (<b>a</b>) WorldView-2 high-resolution optical image over the Beirut harbor area before the explosion and (<b>b</b>) after the event. Bottom: (<b>c</b>) WorldView-2 high-resolution optical image over the broader area of the Beirut harbor before the explosion and (<b>d</b>) after the event (copyrights European Space Imaging [<a href="#B9-sensors-20-06382" class="html-bibr">9</a>]).</p>
Full article ">Figure 2
<p>Study area indicating the harbor area (blast site) with a yellow star and the various zones created for further consideration covering distances from 0 to 3000 m away from the blast site.</p>
Full article ">Figure 3
<p>Change detection results using the log difference of the VV and the VH backscattering amplitude. Significant changes of the pair of images are presented with dark red and blue colors (&gt;−0.25 or &gt;0.25 differences), while other minor changes in the range of −0.25 to −0.15 and 0.15 to 0.25 are also presented in light red and blue colors, respectively. The four zones under study (Zone A to Zone D) are also given. The location of the blast site is shown with the yellow star at the center of the figure. The white dashed rectangle around the blast site indicates the zoomed-in area presented in <a href="#sensors-20-06382-f004" class="html-fig">Figure 4</a>.</p>
Full article ">Figure 4
<p>Change detection results—as in <a href="#sensors-20-06382-f003" class="html-fig">Figure 3</a>, see before—for the area around the blast site near the harbor of Beirut (indicated with a white dashed line in <a href="#sensors-20-06382-f003" class="html-fig">Figure 3</a>).</p>
Full article ">Figure 5
<p>Wrapped interferogram showing the deformation fringes as a result of the Beirut explosion as derived from the Sentinel-1 SAR images. The color shows a full 2π cycle of phase change.</p>
Full article ">Figure 6
<p>(<b>a</b>) Unwrapped interferogram as derived from the Sentinel-1 SAR images in descending orbit (see <a href="#sensors-20-06382-t002" class="html-table">Table 2</a>). (<b>b</b>) Unwrapped interferogram as derived from the Sentinel-1 SAR images in ascending orbit (see <a href="#sensors-20-06382-t003" class="html-table">Table 3</a>), and (<b>c</b>) VV and VH log difference polarizations of the same area (as in <a href="#sensors-20-06382-f003" class="html-fig">Figure 3</a>). Harbor area is indicated in a yellow square.</p>
Full article ">Figure 7
<p>(<b>a</b>) Line of sight (LOS) displacements as derived from the Sentinel-1 SAR images (descending orbit). Harbor area, as indicated in a yellow square. (<b>b</b>) Line of sight (LOS) displacements as derived from the Sentinel-1 SAR images (ascending orbit). Harbor area, as indicated in a yellow square.</p>
Full article ">Figure 8
<p>(<b>a</b>) Coherence map generated from the descending orbit images (see <a href="#sensors-20-06382-t002" class="html-table">Table 2</a>). (<b>b</b>) Coherence map generated from the ascending orbit images (see <a href="#sensors-20-06382-t003" class="html-table">Table 3</a>). (<b>c</b>) High-resolution WorldView-2 image.</p>
Full article ">Figure 9
<p>(<b>a</b>) Sentinel-2 optical image taken over the area of interest on the 8th of August 2020 (RGB composite). (<b>b</b>) The NIR-R-G pseudo color composite of the same image and (<b>c</b>) the change detection results from the Sentinel-1 image analysis. Black arrows indicate destroyed areas from image interpretation of the Sentinel-2 image, while yellow arrows indicate the destroyed areas not detectable by this analysis. The location of the blast site is shown with the yellow star.</p>
Full article ">Figure 10
<p>(<b>a</b>) The first principal component (PC1) results over the area around the harbor after the Principal Component Analysis (PCA) of the integrated Sentinel-2 optical images of 24th of July 2020 and 8th of August. (<b>b</b>) The PC1–PC3 pseudo color composite of the same integrated dataset and (<b>c</b>) the high-resolution WorldView-2 image.</p>
Full article ">Figure 11
<p>(<b>a</b>) Change detection results as generated from the VV and VH log differences of the Sentinel-1 images, while (<b>b</b>) indicates the Damage Proxy Map generated by the Advanced Rapid Imaging and Analysis (ARIA) team [<a href="#B8-sensors-20-06382" class="html-bibr">8</a>]. The location of the blast site is shown with the yellow star.</p>
Full article ">Figure 12
<p>Change detection results as generated from the VV and VH log differences of the Sentinel-1 images around the blast site, while orange polygons indicate the buildings that have been damaged as reported by the MapAction platform [<a href="#B21-sensors-20-06382" class="html-bibr">21</a>] (digitized by the author).</p>
Full article ">Figure 13
<p>(<b>a</b>) High-resolution WorldView-2 image taken some hours after the explosion over the harbor of Beirut. The blast site is indicated with a yellow star. (<b>b</b>) Change detection as generated from the VV and VH log differences of the Sentinel-1 image and (<b>c</b>) the InSAR analysis (LOS of the descending orbit images) over the area. The location of the blast site is shown with the yellow star.</p>
Full article ">Figure A1
<p>Change detection results using the log difference of the VH backscattering amplitude. Significant changes of the pair of images are presented with dark red and blue colors (&gt;−0.25 or &gt;0.25 differences), while other minor changes in the range of −0.25 to −0.15 and 0.15 to 0.25 are also presented in light red and blue colors, respectively. The four zones under study (Zone A to Zone D) are also given. The location of the blast site is shown with the yellow star. The white dashed rectangle around the blast site indicates the zoomed-in area presented in <a href="#sensors-20-06382-f0A3" class="html-fig">Figure A3</a>.</p>
Full article ">Figure A2
<p>Change detection results using the log difference of the VV backscattering amplitude. Significant changes of the pair of images are presented with dark red and blue colors (&gt;−0.25 or &gt;0.25 differences), while other minor changes in the range of −0.25 to −0.15 and 0.15 to 0.25 are also presented in light red and blue colors, respectively. The four zones under study (Zone A to Zone D) are also given. The location of the blast site is shown with the yellow star. The white dashed rectangle around the blast site indicates the zoomed-in area presented in <a href="#sensors-20-06382-f0A4" class="html-fig">Figure A4</a>.</p>
Full article ">Figure A3
<p>Change detection results as shown in <a href="#sensors-20-06382-f0A1" class="html-fig">Figure A1</a> (see before) for the area around the blast site near the harbor of Beirut (indicated with a white dashed line in <a href="#sensors-20-06382-f0A1" class="html-fig">Figure A1</a>). The location of the blast site is shown with the yellow star.</p>
Full article ">Figure A4
<p>Change detection results—as in <a href="#sensors-20-06382-f0A2" class="html-fig">Figure A2</a>, see before—for the area around the blast site near the harbor of Beirut (indicated with a white dashed line in <a href="#sensors-20-06382-f0A2" class="html-fig">Figure A2</a>). The location of the blast site is shown with the yellow star.</p>
Full article ">
Full article ">
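The damage proxy map is built from the log difference of pre- and post-event backscattering amplitude, thresholded at ±0.15 and ±0.25 as in the captions above. The sketch below reproduces that computation on synthetic arrays; the image size, amplitude statistics, and damaged block are placeholders.

```python
import numpy as np

# Log-difference change detection on pre-/post-event backscatter amplitude,
# using the thresholds quoted in the captions. The log10 units and the
# synthetic gamma-distributed amplitudes are assumptions for illustration;
# the band could be VV or VH.
rng = np.random.default_rng(5)
pre = rng.gamma(shape=4.0, scale=0.05, size=(512, 512))
post = pre.copy()
post[200:260, 200:260] *= 0.4      # a "damaged" block loses backscatter

log_diff = np.log10(post) - np.log10(pre)

major_change = np.abs(log_diff) > 0.25
minor_change = (np.abs(log_diff) > 0.15) & ~major_change
print(f"major-change pixels: {major_change.sum()}")
print(f"minor-change pixels: {minor_change.sum()}")
```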
22 pages, 707 KiB  
Article
Analysis of Copernicus’ ERA5 Climate Reanalysis Data as a Replacement for Weather Station Temperature Measurements in Machine Learning Models for Olive Phenology Phase Prediction
by Noelia Oses, Izar Azpiroz, Susanna Marchi, Diego Guidotti, Marco Quartulli and Igor G. Olaizola
Sensors 2020, 20(21), 6381; https://doi.org/10.3390/s20216381 - 9 Nov 2020
Cited by 39 | Viewed by 6409
Abstract
Knowledge of phenological events and their variability can help to determine final yield, plan the management approach, tackle climate change, and model crop development. The timing of phenological stages and phases is known to be highly correlated with temperature, which is therefore an essential [...] Read more.
Knowledge of phenological events and their variability can help to determine final yield, plan the management approach, tackle climate change, and model crop development. The timing of phenological stages and phases is known to be highly correlated with temperature, which is therefore an essential component for building phenological models. Satellite data and, particularly, Copernicus’ ERA5 climate reanalysis data are easily available. Weather stations, on the other hand, provide scattered temperature data, with fragmentary spatial coverage and accessibility, and are thus scarcely efficacious as the sole source of information for the implementation of predictive models. However, as ERA5 reanalysis data are not real temperature measurements but reanalysis products, it is necessary to verify whether these data can be used as a replacement for weather station temperature measurements. The aims of this study were: (i) to assess the validity of ERA5 data as a substitute for weather station temperature measurements, (ii) to test different machine learning models for the prediction of phenological phases while using different sets of features, and (iii) to optimize the base temperature of the olive tree phenological model. The predictive capability of the machine learning models and the performance of different feature subsets were assessed by comparing the recorded temperature data, ERA5 data, and a simple growing degree day phenological model as a benchmark. Data on olive tree phenology observations, collected in Tuscany for three years, provided the phenological phases to be used as target variables. The results show that ERA5 climate reanalysis data can be used for modelling phenological phases and that these models provide better predictions than the models trained with weather station temperature measurements. Full article
(This article belongs to the Special Issue Selected Papers from the Global IoT Summit GIoTS 2020)
Show Figures

Figure 1

Figure 1
<p>Dataset size by location and year.</p>
Full article ">Figure 2
<p><span class="html-italic">GDD Tavg calculation</span>.</p>
Full article ">Figure 3
<p><span class="html-italic">GDD Allen calculation</span>.</p>
Full article ">Figure 4
<p>Random forest model performance comparison using different predictors.</p>
Full article ">Figure 5
<p>Combined metric mean and median values for random forest model performance comparison using different predictors.</p>
Full article ">Figure 6
<p>Performance metrics for different ML models trained and tested under the scenarios specified in <a href="#sensors-20-06381-t001" class="html-table">Table 1</a>.</p>
Full article ">Figure 7
<p>Model selection for the scenarios specified in <a href="#sensors-20-06381-t001" class="html-table">Table 1</a>.</p>
Full article ">Figure 8
<p>Combined metric mean and median values for the scenarios specified in <a href="#sensors-20-06381-t001" class="html-table">Table 1</a>.</p>
Full article ">Figure 9
<p>Metrics’ comparison for the models in the scenarios described in <a href="#sensors-20-06381-t003" class="html-table">Table 3</a>.</p>
Full article ">Figure 10
<p>Residual histogram for the models in the scenarios described in <a href="#sensors-20-06381-t003" class="html-table">Table 3</a>.</p>
Full article ">Figure 11
<p>Comparison of the residuals by DOY for the different models’ in the scenarios described in <a href="#sensors-20-06381-t003" class="html-table">Table 3</a>.</p>
Full article ">Figure 12
<p>Comparison of the residuals by target output for the different models’ in the scenarios described in <a href="#sensors-20-06381-t003" class="html-table">Table 3</a>.</p>
Full article ">Figure 13
<p>Comparison of the residuals by location for the different models’ in the scenarios described in <a href="#sensors-20-06381-t003" class="html-table">Table 3</a>.</p>
Full article ">Figure 14
<p>Base temperature optimisation for the scenarios described in <a href="#sensors-20-06381-t003" class="html-table">Table 3</a>: confidence intervals for Accuracy, RMSE, and combined metric.</p>
Full article ">
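The benchmark phenological model accumulates growing degree days (GDD) from daily temperature, and the paper optimizes the base temperature (Figure 14). The sketch below shows the simple averaging variant of the GDD calculation; the base temperature and the daily series are placeholder values, and the Allen variant used in Figure 3 differs.

```python
import numpy as np

def growing_degree_days(t_min, t_max, t_base=10.0):
    """Accumulate growing degree days from daily min/max temperature.

    Simple average method: GDD_day = max(0, (Tmin + Tmax) / 2 - Tbase).
    The 10 degC base is a placeholder; the paper optimizes this value.
    """
    t_avg = (np.asarray(t_min) + np.asarray(t_max)) / 2.0
    return np.cumsum(np.maximum(0.0, t_avg - t_base))

# Hypothetical daily series (e.g., extracted from ERA5 2 m temperature)
t_min = [6.0, 8.5, 9.0, 11.0, 12.5]
t_max = [15.0, 18.0, 19.5, 21.0, 23.0]
print(growing_degree_days(t_min, t_max))   # cumulative GDD per day
```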
25 pages, 1125 KiB  
Article
Spatio-Temporal Scale Coded Bag-of-Words
by Divina Govender and Jules-Raymond Tapamo
Sensors 2020, 20(21), 6380; https://doi.org/10.3390/s20216380 - 9 Nov 2020
Cited by 2 | Viewed by 3084
Abstract
The Bag-of-Words (BoW) framework has been widely used in action recognition tasks due to its compact and efficient feature representation. Various modifications have been made to this framework to increase its classification power. This often results in an increased complexity and reduced efficiency. [...] Read more.
The Bag-of-Words (BoW) framework has been widely used in action recognition tasks due to its compact and efficient feature representation. Various modifications have been made to this framework to increase its classification power, often resulting in increased complexity and reduced efficiency. Inspired by the success of image-based scale coded BoW representations, we propose a spatio-temporal scale coded BoW (SC-BoW) for video-based recognition. This involves encoding extracted multi-scale information into BoW representations by partitioning spatio-temporal features into sub-groups based on the spatial scale from which they were extracted. We evaluate SC-BoW in two experimental setups. We first present a general pipeline to perform real-time action recognition with SC-BoW. Secondly, we apply SC-BoW to the popular Dense Trajectory feature set. Results showed that SC-BoW representations successfully improve performance by 2–7% with low added computational cost. Notably, SC-BoW on Dense Trajectories outperformed more complex deep learning approaches. Thus, scale coding is a low-cost and low-level encoding scheme that increases the classification power of the standard BoW without compromising efficiency. Full article
(This article belongs to the Special Issue Data Processing of Intelligent Sensors)
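The scale-coding step lends itself to a compact illustration. The minimal sketch below (Python; it assumes each local descriptor carries the spatial scale it was extracted at, and the function name, k-means codebook, and partitioning scheme are illustrative, not the authors' exact implementation) partitions descriptors by spatial scale, builds one visual-word histogram per partition, and concatenates the results into an SC-BoW vector:

```python
# A minimal sketch of SC-BoW encoding, assuming per-descriptor scale labels
# and a pre-trained k-means codebook. Names are illustrative.
import numpy as np
from sklearn.cluster import KMeans  # codebook = KMeans(n_clusters=K).fit(train_descs)

def scale_coded_bow(descriptors, scales, codebook, scale_edges):
    """descriptors: (N, D); scales: (N,); scale_edges: partition boundaries."""
    groups = np.digitize(scales, scale_edges)        # scale partition per descriptor
    k = codebook.n_clusters
    histograms = []
    for g in range(len(scale_edges) + 1):            # one BoW histogram per partition
        members = descriptors[groups == g]
        hist = np.zeros(k)
        if len(members):
            words = codebook.predict(members)        # quantize to visual words
            hist = np.bincount(words, minlength=k).astype(float)
            hist /= hist.sum()                       # L1-normalize each partition
        histograms.append(hist)
    return np.concatenate(histograms)                # final SC-BoW video descriptor
```

Since scale coding only concatenates per-partition histograms, the descriptor grows linearly with the number of partitions while the quantization cost is unchanged, which is consistent with the low overhead reported above.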
Show Figures

Figure 1: The pipeline for video-based action recognition using the Bag-of-Words (BoW) framework.
Figure 2: Formation of SC-BoW representations.
Figure 3: Scaled Spatio-Temporal Pyramids: (a) the first representation computes an SC-BoW for each cell; (b) the second representation adds scale as a fourth dimension and computes a standard BoW for each cell.
Figure 4: The general pipeline for video-based action recognition with scale-coded BoW.
Figure 5: Generation of HOG features: for each block, the HOG feature vectors for the highlighted k × k cell are sum-pooled and divided by the ℓ2-norm to form a normalized HOG feature histogram.
Figure 6: Temporal pyramid structure for SC-BoW.
Figure 7: Scaled Spatio-Temporal Pyramids to add structure to SC-BoW representations.
Figure 8: Plot comparing the class accuracies obtained on the KTH dataset for dense trajectories and scale-coded dense trajectories.
Figure 9: Plot comparing the class accuracies obtained on the reduced HMDB51 dataset for dense trajectories and scale-coded dense trajectories.
Figure 10: The effect of the number of scale partitions on accuracy for the KTH and HMDB51 (reduced) datasets.
21 pages, 5088 KiB  
Article
Functional Evaluation of a Force Sensor-Controlled Upper-Limb Power-Assisted Exoskeleton with High Backdrivability
by Chang Liu, Hongbo Liang, Naoya Ueda, Peirang Li, Yasutaka Fujimoto and Chi Zhu
Sensors 2020, 20(21), 6379; https://doi.org/10.3390/s20216379 - 9 Nov 2020
Cited by 16 | Viewed by 4436
Abstract
A power-assisted exoskeleton should reduce the burden on the wearer's body or make his or her work easier and more efficient. More specifically, the exoskeleton should be easy to wear, simple to use, and provide power assistance without hindering the wearer's movement. It is therefore necessary to evaluate the backdrivability, range of motion, and power-assist capability of such an exoskeleton. This evaluation identifies the pros and cons of the exoskeleton and serves as the basis for its subsequent development. In this study, a lightweight upper-limb power-assisted exoskeleton with high backdrivability was developed. A motion capture system was adopted to measure and analyze the workspace of the wearer's upper limb while wearing the exoskeleton, and the results were used to evaluate the exoskeleton's ability to support the wearer's movement. Furthermore, a small and compact three-axis force sensor was used for power assistance, and the effect of the assistance was evaluated by measuring the wearer's surface electromyography (sEMG), force, and joint angle signals. Overall, the study showed that the exoskeleton could provide power assistance without affecting the wearer's movements.
(This article belongs to the Section Wearables)
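While the paper's controller details are not reproduced here, the basic idea of force-sensor-driven power assistance can be sketched as a short control loop. In the Python sketch below, read_force, joint_angles, jacobian, and send_torque are hypothetical hardware interfaces, and the Jacobian-transpose mapping with a fixed assist gain is an illustrative choice rather than the authors' control law:

```python
# A minimal sketch of force-sensor-based assistance, assuming a three-axis
# force sensor at the hand and torque-controlled joints. All interfaces and
# the gain value are illustrative assumptions.
import numpy as np

ASSIST_GAIN = 0.6  # fraction of the sensed interaction force the exoskeleton offloads

def assist_step(read_force, joint_angles, jacobian, send_torque):
    f = read_force()               # (3,) interaction force at the hand [N]
    q = joint_angles()             # (n,) current joint configuration [rad]
    J = jacobian(q)                # (3, n) hand Jacobian of the exoskeleton
    tau = ASSIST_GAIN * (J.T @ f)  # joint torques assisting the wearer's motion
    send_torque(tau)               # run this at the control loop rate
```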
Show Figures

Figure 1: Mechanical structure of the exoskeleton.
Figure 2: Schematic of the active joint.
Figure 3: Measured backdriving torques of three different forms of gear reducers used for the active joint: (1) the gear reducer only; (2) the gear reducer and the timing-belt pulley; (3) the gear reducer, timing-belt pulley, and motor, that is, the entire active joint.
Figure 4: (a) Experimental setup and overview of the motion capture; (b) marker locations; (c) measurement of the shoulder joint workspace; (d) measurement of the elbow joint workspace.
Figure 5: Boundary area of the shoulder joint (a) and the elbow joint (b), constructed from the workspace measurements of subject A while wearing the exoskeleton.
Figure 6: Subject A's workspace in daily life, before, and after wearing the exoskeleton, for the shoulder joint (a) and elbow joint (b). The workspace before wearing the exoskeleton (blue) contains the workspace after wearing it (red), which in turn contains the space commonly used in daily life (black).
Figure 7: Position and measurement method of the force sensor.
Figure 8: Block diagram of the control system for the exoskeleton.
Figure 9: The experimental task: carrying a load from the ground to different heights (upper row: 1.2 m; bottom row: 1.8 m).
Figure 10: sEMG signals of the biceps and deltoid for subject A while carrying a load to a height of 1.2 m, (a) without and (b) with the exoskeleton.
Figure 11: Resultant force signals for subject A while carrying a load to a height of 1.2 m.
Figure 12: Joint angle signals of the elbow and shoulder for subject A while carrying a load to a height of 1.2 m, (a) without and (b) with the exoskeleton.
Figure 13: sEMG signals of the biceps and deltoid for subject A while carrying a load to a height of 1.8 m, (a) without and (b) with the exoskeleton.
Figure 14: Resultant force signals for subject A while carrying a load to a height of 1.8 m.
Figure 15: Joint angle signals of the elbow and shoulder for subject A while carrying a load to a height of 1.8 m, (a) without and (b) with the exoskeleton.
21 pages, 3409 KiB  
Article
A Novel Framework Using Deep Auto-Encoders Based Linear Model for Data Classification
by Ahmad M. Karim, Hilal Kaya, Mehmet Serdar Güzel, Mehmet R. Tolun, Fatih V. Çelebi and Alok Mishra
Sensors 2020, 20(21), 6378; https://doi.org/10.3390/s20216378 - 9 Nov 2020
Cited by 20 | Viewed by 3784
Abstract
This paper proposes a novel data classification framework combining sparse auto-encoders (SAEs) and a post-processing system consisting of a linear model whose parameters are estimated by the Particle Swarm Optimization (PSO) algorithm. The sensitive, high-level features are extracted by the first auto-encoder, which is wired to the second auto-encoder, followed by a Softmax layer that classifies the features extracted by the second layer. The two auto-encoders and the Softmax classifier are stacked and trained in a supervised manner using the well-known backpropagation algorithm to enhance the performance of the neural network. Afterwards, the linear model transforms the output of the deep stacked sparse auto-encoder to a value close to the anticipated output; this simple transformation increases the overall classification performance of the stacked sparse auto-encoder architecture, with the PSO algorithm estimating the parameters of the linear model in a metaheuristic fashion. The proposed framework is validated on three public datasets, with promising results compared with the current literature. Furthermore, the framework can be applied to any data classification problem with minor updates, such as altering parameters including the input features, hidden neurons, and output classes.
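The post-processing stage can be illustrated with a small sketch: a hand-rolled PSO searches for the parameters (a, b) of a linear map y ≈ a·s + b that pulls the stacked auto-encoder's output s toward the target labels. The two-parameter model, swarm settings, and function names below are illustrative assumptions, not the paper's exact configuration:

```python
# A minimal PSO sketch for fitting a two-parameter linear post-processing
# model to auto-encoder outputs. Hyperparameters are illustrative.
import numpy as np

def pso_fit_linear(s, y, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-2, 2, (n_particles, 2))         # each particle is (a, b)
    vel = np.zeros_like(pos)
    mse = lambda p: np.mean((p[0] * s + p[1] - y) ** 2)
    pbest = pos.copy()
    pbest_f = np.array([mse(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        f = np.array([mse(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest                                        # best (a, b) found

# usage: a, b = pso_fit_linear(sae_outputs, targets); refined = a * sae_outputs + b
```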
Show Figures

Figure 1: The deep learning framework based on a linear model and a metaheuristic algorithm (PSO).
Figure 2: Training flowchart for the proposed framework.
Figure 3: Datasets for normal and abnormal cases.
Figure 4: The MSE of the linear system for the epilepsy dataset.
Figure 5: The MSE of the linear system for the SPECTF dataset.
Figure 6: The MSE of the linear system for the diagnosis of cardiac arrhythmia.
Figure 7: Graphical representation of performance criteria for epileptic seizure detection.
Figure 8: Graphical representation of performance criteria for SPECTF classification.
Figure 9: Graphical representation of performance criteria for the diagnosis of cardiac arrhythmia.
Figure A1: The model of a Stacked Sparse Auto-encoder (SSAE) with two hidden layers and a classifier (Softmax).
12 pages, 2667 KiB  
Letter
Classification of Aggressive Movements Using Smartwatches
by Franck Tchuente, Natalie Baddour and Edward D. Lemaire
Sensors 2020, 20(21), 6377; https://doi.org/10.3390/s20216377 - 9 Nov 2020
Cited by 8 | Viewed by 2998
Abstract
Recognizing aggressive movements is a challenging task in human activity recognition. Wearable smartwatch technology with machine learning may be a viable approach for classifying human aggressive behavior. This research identified a viable classification model and feature selector (CM-FS) combination for separating aggressive from non-aggressive movements using smartwatch data and determined whether one smartwatch is sufficient for this task. A ranking method was used to select relevant CM-FS models across accuracy, sensitivity, specificity, precision, F-score, and Matthews correlation coefficient (MCC). The Waikato Environment for Knowledge Analysis (WEKA) was used to run six machine learning classifiers (random forest, k-nearest neighbors (kNN), multilayer perceptron neural network (MP), support vector machine, naïve Bayes, decision tree) coupled with three feature selectors (ReliefF, InfoGain, Correlation). Microsoft Band 2 accelerometer and gyroscope data were collected during an activity circuit that included aggressive (punching, shoving, slapping, shaking) and non-aggressive (clapping hands, waving, handshaking, opening/closing a door, typing on a keyboard) tasks. A combination of kNN and ReliefF was the best CM-FS model for separating aggressive from non-aggressive actions, with 99.6% accuracy, 98.4% sensitivity, 99.8% specificity, 98.9% precision, 0.987 F-score, and 0.984 MCC. kNN and random forest classifiers, combined with any of the feature selectors, generated the top models. Models with naïve Bayes or support vector machines performed poorly on sensitivity, F-score, and MCC. Wearing the smartwatch on the dominant wrist produced the best single-watch results. The kNN and ReliefF combination demonstrated that this smartwatch-based approach is a viable solution for identifying aggressive behavior. This wrist-based wearable sensor approach could be used by care providers in settings where people suffer from dementia or mental health disorders and where random aggressive behaviors often occur.
(This article belongs to the Section Intelligent Sensors)
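The windowed-feature pipeline behind such models is straightforward to sketch. The snippet below uses Python/scikit-learn rather than the paper's WEKA setup; the window length, hop, the four statistics, and the omission of the ReliefF selection step are all simplifying assumptions:

```python
# A minimal sketch of the windowed-feature + kNN approach, assuming six-channel
# accelerometer/gyroscope streams (3 accel axes + 3 gyro axes).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def window_features(stream, win=64, hop=32):
    """stream: (T, 6) raw samples -> (n_windows, 24) feature matrix."""
    feats = []
    for start in range(0, len(stream) - win + 1, hop):
        w = stream[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# X, y: stacked window features and their aggressive/non-aggressive labels
# knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
# predictions = knn.predict(window_features(new_stream))
```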
Show Figures

Figure 1: (a) Microsoft Band 2 (MSB2) accelerometer and gyroscope axes orientation; (b) participant punching the body opponent bag.
Figure 2: Accelerometer linear acceleration (x-axis).
Figure 3: Tri-axial linear acceleration of participant 1.
Figure 4: Tri-axial angular acceleration of participant 1.
Figure 5: Extracting the mean feature from raw-data sliding windows.
17 pages, 10375 KiB  
Article
Driver Characteristics Oriented Autonomous Longitudinal Driving System in Car-Following Situation
by Haksu Kim, Kyunghan Min and Myoungho Sunwoo
Sensors 2020, 20(21), 6376; https://doi.org/10.3390/s20216376 - 9 Nov 2020
Cited by 10 | Viewed by 3075
Abstract
Advanced driver assistance systems such as adaptive cruise control, traffic jam assistance, and collision warning have been developed to reduce the driving burden and increase driving comfort in car-following situations. These systems provide automated longitudinal driving designed to ensure safety and driving performance for unspecified individuals. However, drivers can feel a sense of heterogeneity when autonomous longitudinal control is performed by a generic speed planning algorithm. To resolve this heterogeneity, a speed planning algorithm that reflects individual driving behavior is required to guarantee harmony with the driver's intention. In this paper, we propose a personalized longitudinal driving system for car-following situations, which mimics personal driving behavior. The system is structured as a multi-layer framework composed of a speed planner and a driver parameter manager. The speed planner generates an optimal speed profile from a parametric cost function and constraints that encode driver characteristics, while the driver parameters are determined by the driver parameter manager according to individual driving behavior extracted from real driving data. The proposed algorithm was validated through driving simulation. The results show that it mimics the driving style of an actual driver while maintaining safety against collisions with the preceding vehicle.
(This article belongs to the Section Intelligent Sensors)
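A personalized speed planner of this kind can be sketched as a small optimization problem: track the driver's preferred spacing gap while penalizing jerk outside the driver's observed bounds. The cost weights, horizon, constant-speed lead-vehicle model, and function names below are illustrative assumptions, not the paper's formulation:

```python
# A minimal sketch of a driver-parameterized speed planner. gap_pref and
# jerk_max would come from a driver parameter manager; all values illustrative.
import numpy as np
from scipy.optimize import minimize

DT, H = 0.1, 30  # time step [s], horizon length

def plan_speed(v0, gap0, v_lead, gap_pref, jerk_max, w_gap=1.0, w_jerk=0.2):
    def cost(v):                                          # v: ego speed profile (H,)
        gap = gap0 + np.cumsum((v_lead - v) * DT)         # predicted spacing gap
        acc = np.diff(v, prepend=v0) / DT
        jerk = np.diff(acc, prepend=acc[0]) / DT
        overshoot = np.maximum(np.abs(jerk) - jerk_max, 0.0)
        return w_gap * np.sum((gap - gap_pref) ** 2) + w_jerk * np.sum(overshoot ** 2)
    res = minimize(cost, np.full(H, v0), method="L-BFGS-B",
                   bounds=[(0.0, 40.0)] * H)
    return res.x                                          # personalized speed profile
```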
Show Figures

Figure 1: Vehicle configuration.
Figure 2: Driving data acquisition site.
Figure 3: Kernel density estimation (KDE) of driver features: spacing gap, maximum jerk, and minimum jerk.
Figure 4: Driving feature distributions according to ego-vehicle velocity: distributions of spacing gap, maximum jerk, and minimum jerk.
Figure 5: The overall framework of driver characteristics oriented adaptive cruise control (DCO-ACC).
Figure 6: Parameter vector of the spacing gap depending on the ego-vehicle velocity: (a) initial parameter vector; (b) updated parameter vector.
Figure 7: Parameter activation of the driver parameter manager.
Figure 8: Gaussian value and effective likelihood according to the acceleration indicator value: (a) Gaussian value; (b) effective likelihood.
Figure 9: Optimization problem definition of the speed planner.
Figure 10: Comparison of the first driver's real driving data and the simulation results of the proposed algorithm with the first driver model.
Figure 11: Driving simulation results of the proposed algorithm for the first driver over the entire route.
Figure 12: Driving simulation results of the proposed algorithm for the second driver over the entire route.
Figure 13: Driving simulation results of the proposed algorithm for the third driver over the entire route.
Figure 14: Driving behaviors of each driver model for identical preceding-vehicle behavior.
Figure 15: Spacing gap distributions of the driving data and the proposed algorithm's results for each driver.
8 pages, 1042 KiB  
Letter
Beryllium-Ion-Selective PEDOT Solid Contact Electrode Based on 9,10-Dinitrobenzo-9-Crown-3-Ether
by Junghwan Kim, Dae Hee Kim, Jin Cheol Yang, Jae Sang Kim, Ji Ha Lee and Sung Ho Jung
Sensors 2020, 20(21), 6375; https://doi.org/10.3390/s20216375 - 9 Nov 2020
Cited by 3 | Viewed by 2892
Abstract
A beryllium(II)-ion-selective poly(ethylenedioxythiophene) (PEDOT) solid contact electrode comprising 9,10-dinitrobenzo-9-crown-3-ether was successfully developed. The all-solid-state contact electrode, with an oxygen-containing cation-sensing membrane combined with an electropolymerized PEDOT layer, exhibited the best response characteristics. The performance of the constructed electrode was evaluated and optimized using potentiometry, conductance measurements, constant-current chronopotentiometry, and electrochemical impedance spectroscopy (EIS). Under optimized conditions, found for an ion-selective membrane (ISM) composition of 3% ionophore, 30% polyvinylchloride (PVC), 64% o-nitrophenyl octyl ether (o-NPOE), and 3% sodium tetraphenylborate (NaTPB), the fabricated electrode exhibited good performance over a wide concentration range (10^−2.5–10^−7.0 M) and a wide pH range of 2.0–9.0, with a Nernstian slope of 29.5 mV/decade for the beryllium(II) ion and a detection limit as low as 10^−7.0 M. The developed electrode shows good selectivity for the beryllium(II) ion over alkali, alkaline earth, transition, and heavy metal ions.
(This article belongs to the Section Chemical Sensors)
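The reported slope is consistent with the theoretical Nernstian response for a divalent cation; as a worked check at 25 °C:

```latex
% Worked Nernstian slope for a divalent cation (n = 2) at T = 298.15 K; the
% result matches the reported 29.5 mV/decade response to Be(II) within rounding.
\[
E = E^{0} + \frac{2.303\,RT}{nF}\,\log a_{\mathrm{Be}^{2+}}, \qquad
\frac{2.303\,RT}{nF}
= \frac{2.303 \times 8.314 \times 298.15}{2 \times 96485}\ \mathrm{V}
\approx 29.6\ \mathrm{mV/decade}
\]
```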
Show Figures

Figure 1: Chemical structure of 1.
Figure 2: Conductometric titration curves for 1 with metal cations, obtained in 95% acetonitrile-DMSO (AN-DMSO) solution. The molar conductance Λ_m (S cm² mol⁻¹) is plotted against [1]/[Mⁿ⁺].
Figure 3: (a) Calibration curve of the fabricated Be²⁺ ion-selective electrode (ISE), consisting of the potentiometric response of E1 for various metal cations. (b) Potentiometric selectivity coefficients (log K^pot_(Be²⁺,Mⁿ⁺)) for E1 and E2.
Figure 4: Effect of pH on E1.
18 pages, 3056 KiB  
Article
Temporal Changes in Air Quality According to Land-Use Using Real Time Big Data from Smart Sensors in Korea
by Sung Su Jo, Sang Ho Lee and Yountaik Leem
Sensors 2020, 20(21), 6374; https://doi.org/10.3390/s20216374 - 9 Nov 2020
Cited by 11 | Viewed by 3237
Abstract
This study analyzed the changes in particulate matter concentrations according to land-use over time, and the spatial characteristics of the distribution of particulate matter concentrations, using big data on particulate matter in Daejeon, Korea, measured by Private Air Quality Monitoring Smart Sensors (PAQMSSs). Land-uses were classified into residential, commercial, industrial, and green groups according to the primary land-use within a 650-m radius of each sensor. Data on particulate matter with an aerodynamic diameter <10 µm (PM10) and <2.5 µm (PM2.5) were captured by PAQMSSs from September–October (i.e., fall) 2019. Differences and variation characteristics of particulate matter concentrations between time periods and land-uses were analyzed, as were the spatial mobility characteristics of the particulate matter concentrations over time. The results indicate that, first, the particulate matter concentrations in Daejeon decreased overall in the order of the industrial, residential, commercial, and green groups; however, the concentrations of the commercial group were higher than those of the residential group during 21:00–23:00, reflecting the vital nighttime lifestyle of commercial areas in Korea. Second, the green group showed the lowest particulate matter concentration and the industrial group the highest. Third, the highest particulate matter concentrations occurred in urban areas where commercial and business functions were concentrated and in the vicinity of industrial complexes. Finally, over time, the PM10 concentrations were clearly high at noon and low at night, whereas the PM2.5 concentrations were similar across certain areas.
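The land-use comparison reduces to a grouped time-series aggregation. The sketch below (Python/pandas; the file and column names are illustrative assumptions, not the study's actual data schema) joins sensor readings to their land-use groups and computes mean concentrations per group and hour:

```python
# A minimal sketch of the land-use comparison, assuming a tidy table of PAQMSS
# readings (sensor_id, timestamp, pm10, pm25) and a sensor-to-group lookup from
# the 650-m buffer classification. All names are illustrative.
import pandas as pd

readings = pd.read_csv("paqmss_fall2019.csv", parse_dates=["timestamp"])
landuse = pd.read_csv("sensor_landuse.csv")   # sensor_id -> residential/commercial/...

df = readings.merge(landuse, on="sensor_id")
hourly = (df.set_index("timestamp")
            .groupby("landuse")[["pm10", "pm25"]]
            .resample("1H").mean())           # mean concentration per group and hour

# Compare groups by hour of day:
by_hour = hourly.reset_index().assign(hour=lambda d: d["timestamp"].dt.hour)
profile = by_hour.groupby(["landuse", "hour"])[["pm10", "pm25"]].mean()
print(profile.head())
```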
Show Figures

Figure 1: Mean concentrations of particulate matter (PM10, PM2.5) over time, obtained from Private Air Quality Monitoring Smart Sensors (PAQMSSs) in Daejeon.
Figure 2: Air Quality Monitoring Sensors (AQMSs) map with 650-m buffer.
Figure 3: Residential, commercial, industrial, and green area ratios by group.
Figure 4: Map of clustered Private Air Quality Monitoring Smart Sensors (PAQMSSs).
Figure 5: Differences in PM10 concentration by land-use over time.
Figure 6: Changes in spatial distribution characteristics of PM10 concentration over time (panels: AM1, AM2, Noon, PM1, and PM2).
Figure 7: Changes in PM2.5 concentration by land-use group over time.
Figure 8: Changes in spatial distribution characteristics of PM2.5 concentrations over time (panels: AM1, AM2, Noon, PM1, and PM2).