
Next Issue: Volume 17, February
Previous Issue: Volume 16, December
Sensors, Volume 17, Issue 1 (January 2017) – 216 articles

Cover Story (view full-size image): Autonomous driving sets a disruptive paradigm of mobility that relies on several technological advances: reliable sensors capable of providing accurate knowledge of the environment, AI-based algorithms that provide precise situation awareness, robust control architectures for vehicle maneuverability, and complex behavior models comprising drivers, pedestrians, vehicles and traffic. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
5603 KiB  
Article
Diaphragm Based Fiber Bragg Grating Acceleration Sensor with Temperature Compensation
by Tianliang Li, Yuegang Tan, Xue Han, Kai Zheng and Zude Zhou
Sensors 2017, 17(1), 218; https://doi.org/10.3390/s17010218 - 23 Jan 2017
Cited by 86 | Viewed by 7943
Abstract
A novel fiber Bragg grating (FBG) sensing-based acceleration sensor is proposed to simultaneously decouple and measure temperature and acceleration in real time. The design applies a diaphragm structure and utilizes the axial property of a tightly suspended optical fiber, improving its sensitivity and resonant frequency while achieving low cross-sensitivity. The theoretical vibrational model of the sensor has been built, and its design parameters and sensing properties have been analyzed through numerical analysis. A decoupling method is presented that accounts for the thermal expansion of the sensor structure to realize temperature compensation. Experimental results show that the temperature sensitivity is 8.66 pm/°C within the range of 30–90 °C. The acceleration sensitivity is 20.189 pm/g with a linearity of 0.764% within the range of 5–65 m/s². The corresponding working bandwidth is 10–200 Hz and the resonant frequency is 600 Hz. The sensor possesses excellent impact resistance in the cross direction, with a cross-axis sensitivity below 3.31%. This implementation avoids the FBG-pasting procedure and overcomes its associated shortcomings. The performance of the proposed acceleration sensor can be easily adjusted by modifying the corresponding physical parameters to satisfy the requirements of different vibration measurements. Full article
(This article belongs to the Section Physical Sensors)
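As a rough illustration of the temperature–acceleration decoupling described in the abstract, the sketch below inverts a 2×2 sensitivity matrix relating the two FBG wavelength shifts to temperature and acceleration. Only the 8.66 pm/°C and 20.189 pm/g entries come from the abstract; the remaining coefficient and the example shifts are assumed values for illustration.

```python
import numpy as np

# Hypothetical sensitivity matrix: rows = FBG #1, FBG #2;
# columns = temperature (pm/degC) and acceleration (pm/g) sensitivities.
# FBG #1 follows the abstract (8.66 pm/degC, 20.189 pm/g); FBG #2 is assumed
# to respond to temperature only (9.10 pm/degC is an assumed value).
K = np.array([[8.66, 20.189],
              [9.10,  0.0  ]])

def decouple(d_lambda1_pm, d_lambda2_pm):
    """Recover (delta_T in degC, acceleration in g) from the two measured
    Bragg-wavelength shifts, given in picometres."""
    dT, a = np.linalg.solve(K, [d_lambda1_pm, d_lambda2_pm])
    return dT, a

# Example: shifts of 300 pm (sensing FBG) and 91 pm (reference FBG)
print(decouple(300.0, 91.0))
```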
Show Figures
Figure 1: Schematic and modeling of the proposed sensor: (a) the schematic design of the FBG-based vibration sensor; and (b) the vibrational modeling of this proposed sensor.
Figure 2: Relationship between the peak-valley sensitivity/resonant frequency and the configuration parameters of the sensor: effects of (a) the diaphragm radius; (b) the radius of the hard-core diaphragm; (c) the diaphragm thickness; and (d) the effective length of the optical fiber on the sensor's sensitivity and resonance frequency.
Figure 3: Assembly diagram and prototype of the proposed sensor: (a) 2D mechanical assembly drawing; and (b) physical prototypes of the proposed sensors.
Figure 4: Schematic diagram for characterization of the temperature effect.
Figure 5: Relationship between the center wavelength λ1 of #1FBG and temperature: (a) experimental data; and (b) linearly fitted curve.
Figure 6: Relationship between the center wavelength λ2 of #2FBG and temperature: (a) experimental data; and (b) linearly fitted curve.
Figure 7: Schematic diagram and experimental setup for the static experiments: (a) schematic diagram; and (b) experimental setup.
Figure 8: Reflective wavelength response of #1FBG and #2FBG under a stimulation amplitude of 60 m/s².
Figure 9: Relationship between the center wavelength shift Δλ1 of #1FBG and acceleration.
Figure 10: Frequency-amplitude response curves of the sensor.
Figure 11: Measured frequency errors between the proposed sensor and a commercial piezoelectric sensor.
Figure 12: Amplitude-frequency response curves of #1FBG along the cross direction.
Figure 13: Amplitude-frequency response curves in the cross direction and vertical direction.
7034 KiB  
Article
Hierarchical NiCo2O4 Hollow Sphere as a Peroxidase Mimetic for Colorimetric Detection of H2O2 and Glucose
by Wei Huang, Tianye Lin, Yang Cao, Xiaoyong Lai, Juan Peng and Jinchun Tu
Sensors 2017, 17(1), 217; https://doi.org/10.3390/s17010217 - 23 Jan 2017
Cited by 35 | Viewed by 9526
Abstract
In this work, a hierarchical NiCo2O4 hollow sphere synthesized via a “coordinating etching and precipitating” process was demonstrated to exhibit intrinsic peroxidase-like activity. The peroxidase-like activity of NiCo2O4, NiO, and Co3O4 hollow spheres was comparatively studied through the catalytic oxidation of 3,3,5,5-tetramethylbenzidine (TMB) in the presence of H2O2, and the superior peroxidase-like activity of NiCo2O4 was confirmed by its stronger absorbance at 652 nm. Furthermore, the proposed sensing platform showed a commendable response to H2O2, with a linear range from 10 μM to 400 μM and a detection limit of 0.21 μM. Coupled with GOx, the developed colorimetric and visual glucose-sensing platform exhibited high selectivity, favorable reproducibility, satisfactory applicability, a wide linear range (0.1 mM to 4.5 mM), and a low detection limit of 5.31 μM. In addition, the concentration-dependent color change offers a simple and handy way to detect H2O2 and glucose by the naked eye. Full article
(This article belongs to the Special Issue Colorimetric and Fluorescent Sensor)
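A minimal sketch of how a linear calibration curve such as the one reported here can be inverted to read out H2O2 concentration from the 652 nm absorbance. The calibration points and readings below are assumed placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical calibration data: absorbance at 652 nm vs. H2O2 concentration (mM).
conc_mM = np.array([0.01, 0.05, 0.1, 0.2, 0.3, 0.4])
absorbance = np.array([0.02, 0.09, 0.17, 0.33, 0.50, 0.66])  # assumed readings

# Fit the linear calibration curve A = slope * c + intercept
slope, intercept = np.polyfit(conc_mM, absorbance, 1)

def h2o2_from_absorbance(a652):
    """Invert the linear calibration to estimate H2O2 concentration (mM)."""
    return (a652 - intercept) / slope

print(round(h2o2_from_absorbance(0.25), 3))
```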
Show Figures
Figure 1: (a,b) SEM and TEM images of the Cu2O solid sphere; (c,d) SEM images of the hierarchical NiCo2O4 hollow sphere.
Figure 2: TEM (a–c) and HRTEM (d) images of the NiCo2O4 hollow sphere. Inset in (b) is the corresponding selected area electron diffraction (SAED) pattern.
Figure 3: (a) X-ray diffraction (XRD) pattern of the as-prepared NiCo2O4 hollow sphere sample; (b) N2 adsorption–desorption isotherms; the inset shows the corresponding pore size distribution of the as-prepared hierarchical NiCo2O4 hollow sphere.
Figure 4: X-ray photoelectron spectroscopy (XPS) spectrum of the NiCo2O4 hollow sphere: (a) survey; (b) Ni 2p; (c) Co 2p; (d) O 1s.
Figure 5: Representative TEM images of the samples at different stages: (a) Cu2O; (b) Cu2O@Ni–Co hydroxide; (c) Ni–Co hydroxide hollow sphere precursor; and (d) NiCo2O4 hollow sphere. (e) The corresponding schematic illustration of the formation process for the hierarchical NiCo2O4 hollow sphere. Abbreviations: TT, thermal treatment.
Figure 6: (a) Photograph and absorption spectra of colorimetric reactions under different conditions: (i) TMB + H2O2; (ii) TMB + NiCo2O4; (iii) TMB + H2O2 + NiCo2O4; (iv) TMB + H2O2 + NiO; (v) TMB + H2O2 + Co3O4. TMB concentration- (b) and pH- (c) dependent peroxidase-like activity of the hierarchical NiCo2O4 hollow sphere. Experimental conditions: 5 μL of 1 mg·mL−1 enzyme mimetic dispersion incubated with 3 mL of pre-prepared sodium citrate buffer solution (0.1 M, pH 4.5) in the presence of H2O2 (0.02 M) and TMB (0.08 mM).
Figure 7: (a) Absorption spectra of the hierarchical NiCo2O4 hollow sphere with various concentrations of H2O2 (0.01–0.4 mM); (b) the corresponding linear calibration curve. The inset shows the photograph of color change for various concentrations.
Figure 8: Scheme illustration for colorimetric detection of glucose using the hierarchical NiCo2O4 hollow sphere.
Figure 9: (a) Absorption spectra of the hierarchical NiCo2O4 hollow sphere with various concentrations of glucose (0.1–4.5 mM); (b) the corresponding linear calibration curve. The inset shows the photograph of color change for various concentrations.
Figure 10: Selectivity of the proposed sensor for glucose detection, assessed by measuring the absorption intensity at 652 nm. The concentrations of fructose, galactose, sucrose, and NaCl are 20 mM each.
18729 KiB  
Article
An Adaptive Moving Target Imaging Method for Bistatic Forward-Looking SAR Using Keystone Transform and Optimization NLCS
by Zhongyu Li, Junjie Wu, Yulin Huang, Haiguang Yang and Jianyu Yang
Sensors 2017, 17(1), 216; https://doi.org/10.3390/s17010216 - 23 Jan 2017
Cited by 5 | Viewed by 4995
Abstract
Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for stationary scenes have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the target induces new issues: (I) large and unknown range cell migration (RCM), including range walk and high-order RCM; and (II) spatial variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) that are not only unknown but also nonlinear for different point-scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying the keystone transform over the whole received echo; then, the relationships among the unknown high-order RCM, the nonlinear spatial variances of the Doppler parameters, and the speed of the mover are established. After that, using an optimization nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but the nonlinear spatial variances of the Doppler parameters can also be balanced. Finally, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method. Full article
(This article belongs to the Section Remote Sensors)
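The keystone transform mentioned in the abstract removes range walk by rescaling the slow-time axis by f_c/(f_c + f_r) for every range-frequency bin. The sketch below is a simplified illustration of that resampling step only (real BFSAR processing also handles Doppler ambiguity and typically uses sinc rather than linear interpolation); all dimensions and numbers are assumed.

```python
import numpy as np

def keystone_transform(data_rf_slow, f_c, f_r_axis, t_slow):
    """Resample each range-frequency row so that slow time is scaled by
    f_c / (f_c + f_r), removing range walk that is linear in slow time.
    Simplified sketch: linear interpolation, no Doppler-ambiguity handling."""
    out = np.zeros_like(data_rf_slow, dtype=complex)
    for i, f_r in enumerate(f_r_axis):
        scale = f_c / (f_c + f_r)
        query = scale * t_slow  # slow-time positions to read from the input row
        out[i].real = np.interp(query, t_slow, data_rf_slow[i].real)
        out[i].imag = np.interp(query, t_slow, data_rf_slow[i].imag)
    return out

# usage with toy dimensions (hypothetical carrier and range-frequency values)
data = np.ones((4, 8), dtype=complex)
result = keystone_transform(data, 1.0e10, np.linspace(-5e7, 5e7, 4), np.linspace(-0.5, 0.5, 8))
print(result.shape)
```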
Show Figures
Figure 1: Geometrical relationship between the aircraft and the moving target for BFSAR.
Figure 2: Flowchart of the proposed method.
Figure 3: The original simulated moving-target scene.
Figure 4: Raw data of the moving target for BFSAR: (a) in the 2-D time domain; (b) in the range-compressed domain.
Figure 5: Keystone-transform-processed data in the range-compressed domain.
Figure 6: Solving process of the new optimization problem for moving-target imaging: (a) image entropy; (b) estimated cross-track velocity; (c) estimated along-track velocity.
Figure 7: Range-curvature-corrected image of the moving target.
Figure 8: Imaging result of the moving target for BFSAR: (a) before geometric rectification; (b) after geometric rectification.
23328 KiB  
Article
Three-Dimensional Measurement for Specular Reflection Surface Based on Reflection Component Separation and Priority Region Filling Theory
by Xiaoming Sun, Ye Liu, Xiaoyang Yu, Haibin Wu and Ning Zhang
Sensors 2017, 17(1), 215; https://doi.org/10.3390/s17010215 - 23 Jan 2017
Cited by 29 | Viewed by 7523
Abstract
Because of the strong reflection from smooth-surfaced materials such as ceramic and metal, images of these materials suffer from saturation and highlight artifacts. To solve this problem, a new algorithm based on reflection component separation (RCS) and priority region filling theory is designed. First, the specular pixels in the image are found by comparing pixel parameters. Then, the reflection components are separated and processed. However, for ceramic, metal and other objects with strong specular highlights, RCS theory changes the color information of highlight pixels because of the large specular reflection component; in this situation, priority region filling theory is used to restore the color information. Finally, we carried out 3D measurement experiments on objects with strongly reflecting surfaces: a ceramic plate, a ceramic bottle, a marble pot and a yellow plate. Experimental results show that, with the proposed method, the highlights caused by the strongly reflecting surfaces are well suppressed: the number of highlight pixels for the ceramic plate, ceramic bottle, marble pot and yellow plate is reduced by factors of 43.8, 41.4, 33.0, and 10.1, respectively. Three-dimensional reconstruction results show that highlight areas are significantly reduced. Full article
(This article belongs to the Section Physical Sensors)
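The paper's pipeline separates reflection components and then fills highlight regions by priority. The sketch below is only a crude stand-in for that idea: it flags near-saturated pixels and fills them with off-the-shelf OpenCV inpainting. The threshold, radius and file name are assumptions, not the paper's method or parameters.

```python
import cv2
import numpy as np

def suppress_highlights(bgr_image, value_threshold=240, inpaint_radius=5):
    """Mark near-saturated pixels as specular highlights and fill them from
    their neighbourhood with OpenCV's Telea inpainting (crude stand-in for
    RCS + priority region filling)."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    highlight_mask = (gray >= value_threshold).astype(np.uint8) * 255
    return cv2.inpaint(bgr_image, highlight_mask, inpaint_radius, cv2.INPAINT_TELEA)

# usage (hypothetical file name):
# restored = suppress_highlights(cv2.imread("ceramic_bottle.png"))
```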
Show Figures
Figure 1: Dichromatic reflection model.
Figure 2: Relationship between diffuse and specular reflection components.
Figure 3: Maximum chroma space.
Figure 4: Reflection component separation: (a) the maximum chroma of b to a; and (b) the maximum chroma of c to b.
Figure 5: Ceramic bottle processed by the specular reflection component removal method: (a) input image of the ceramic bottle; and (b) processed ceramic bottle.
Figure 6: Image annotation.
Figure 7: System structure and measuring range: (a) system structure; and (b) measuring range.
Figure 8: Ceramic bottle influenced by highlight.
Figure 9: Processed results of the ceramic bottle with our method.
Figure 10: Comparison of the reconstruction of the ceramic bottle before and after processing: (a) ceramic bottle; (b) before processing; (c) after processing; (d) enlarged view of the red frame in (b); and (e) enlarged view of the red frame in (c).
Figure 11: Comparison of the ceramic plate before and after processing: (a) ceramic plate influenced by highlights; and (b) processed ceramic plate with our method.
Figure 12: Comparison of the reconstruction of the ceramic plate before and after processing: (a) ceramic plate; (b) before processing; (c) after processing; (d) enlarged view of the red frame in (b); and (e) enlarged view of the red frame in (c).
Figure 13: Comparison of the marble pot before and after processing: (a) marble pot influenced by highlights; and (b) processed marble pot with our method.
Figure 14: Comparison of the reconstruction of the marble pot before and after processing: (a) marble pot; (b) before processing; (c) after processing; (d) enlarged view of the red frame in (b); and (e) enlarged view of the red frame in (c).
Figure 15: Comparison of the yellow plate before and after processing: (a) yellow plate influenced by highlights; and (b) processed yellow plate with our method.
Figure 16: Comparison of the reconstruction of the yellow plate before and after processing: (a) yellow plate; (b) before processing; (c) after processing; (d) enlarged view of the red frame in (b); and (e) enlarged view of the red frame in (c).
Figure 17: 3D reconstruction results: (a) before processing; and (b) after processing.
Figure 18: Comparison between the proposed method and the color space conversion method: (a) the original image; (b) color space conversion method; and (c) proposed method.
14910 KiB  
Article
Vinobot and Vinoculer: Two Robotic Platforms for High-Throughput Field Phenotyping
by Ali Shafiekhani, Suhas Kadam, Felix B. Fritschi and Guilherme N. DeSouza
Sensors 2017, 17(1), 214; https://doi.org/10.3390/s17010214 - 23 Jan 2017
Cited by 117 | Viewed by 16956
Abstract
In this paper, a new robotic architecture for plant phenotyping is introduced. The architecture consists of two robotic platforms: an autonomous ground vehicle (Vinobot) and a mobile observation tower (Vinoculer). The ground vehicle collects data from individual plants, while the observation tower oversees an entire field, identifying specific plants for further inspection by the Vinobot. The advantage of this architecture is threefold: first, it allows the system to inspect large areas of a field at any time, during the day and night, while identifying specific regions affected by biotic and/or abiotic stresses; second, it provides high-throughput plant phenotyping in the field by either comprehensive or selective acquisition of accurate and detailed data from groups or individual plants; and third, it eliminates the need for expensive and cumbersome aerial vehicles or similarly expensive and confined field platforms. Preliminary results from our algorithms for data collection and 3D image processing, together with data analysis and comparison against phenotype data collected by hand, demonstrate that the proposed architecture is cost effective, reliable, versatile, and extendable. Full article
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)
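One phenotype the platforms extract is plant height from 3D reconstructions. A minimal sketch of such an estimate, assuming the point cloud's z axis is vertical and using percentiles (assumed values) to reject outliers; this is an illustration, not the paper's algorithm.

```python
import numpy as np

def plant_height(points_xyz, ground_percentile=2.0, top_percentile=98.0):
    """Estimate plant height (same units as the cloud) for a single plot as the
    difference between a robust canopy top and a robust ground level along z."""
    z = np.asarray(points_xyz)[:, 2]
    return np.percentile(z, top_percentile) - np.percentile(z, ground_percentile)

# usage with a synthetic cloud (metres)
cloud = np.random.default_rng(0).random((1000, 3))
cloud[:, 2] *= 1.5
print(round(plant_height(cloud), 3))
```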
Show Figures
Figure 1: The proposed platforms for high-throughput phenotyping in the field, deployed at the Bradford Research Center: (a) ground vehicle, Vinobot; and (b) observation tower, Vinoculer, shown at 15 ft (4.5 m) high.
Figure 2: Hardware components of the Vinobot.
Figure 3: High-level software architecture of the Vinobot.
Figure 4: Hardware architecture of the Vinoculer.
Figure 5: Field configuration at the MU Bradford Research Center. Small vertical boxes represent 10 plants, with sorghum marked by blue and corn by red boxes. Full boxes indicate plants selected for destructive sampling.
Figure 6: Typical examples of 3D reconstructed plants at four different DAP (days after planting), using stereo images collected by the Vinobot in rows 17 and 18 of Figure 5.
Figure 7: Relationship of plant height measured manually and plant height extracted from the 3D model created using RGB images captured by the Vinobot.
Figure 8: Comparison of LAIs obtained by the LAI-2000 and the Vinobot using 3D reconstruction of the leaves. The horizontal axis shows the number of samples collected per row, the row number from Figure 5, and the days after planting (DAP) for each observation.
Figure 9: Wavelength sensitivity of the LI-190 SA, the LI-200 SA and the two outputs from the TSL2561.
Figure 10: Comparison between data collected with the TSL2561 and the LI-190SA PAR Quantum sensors.
Figure 11: Comparison between the Pyranometer LI-200 SA and the PAR Quantum LI-190 SA. The data are organized over rows of plants, with the horizontal axes showing the number of samples collected per row and the row number from Figure 5.
Figure 12: Three typical images of the aluminum + paper pattern used for calibration of the RGB and IR cameras: (a) left RGB; (b) IR camera; (c) right RGB.
Figure 13: Stereo RGB-thermal camera configuration.
Figure 14: Re-projections of the 3D coordinates of the corners obtained from left-right stereo reconstruction onto the thermal images. The "x" marks indicate the re-projections, and "+" the corners extracted from the thermal image.
Figure 15: Re-projection error of 3D points found by triangulation of corners on the left and right RGB images.
Figure 16: 3D reconstruction of the entire field (side view) using stereo RGB images collected by the Vinoculer: (a) 23 DAP; (b) 36 DAP; (c) 39 DAP; (d) 45 DAP.
Figure 17: Comparison between manual measurements (ground truth) and 3D-model-based plant height at several time points during the growing season. The 3D model was created using RGB images captured by the Vinoculer.
Figure 18: Configuration of points selected as the collar of the topmost leaf on the 3D model created from Vinoculer images.
Figure 19: Comparison of LAI estimation between the LAI-2000 and the 3D model created from Vinoculer stereo images. The horizontal axes show the number of samples collected per row, the row number according to Figure 5, and DAP.
Figure 20: Correlation between the Vinobot temperature sensors and the Vinoculer IR camera.
Figure 21: Comparison between the temperature measured by the proposed system and ground truth (Bradford Weather Station [63]).
12592 KiB  
Article
Opportunistic Sensor Data Collection with Bluetooth Low Energy
by Sergio Aguilar, Rafael Vidal and Carles Gomez
Sensors 2017, 17(1), 159; https://doi.org/10.3390/s17010159 - 23 Jan 2017
Cited by 51 | Viewed by 10060
Abstract
Bluetooth Low Energy (BLE) has gained very high momentum, as witnessed by its widespread presence in smartphones, wearables and other consumer electronics devices. This fact can be leveraged to carry out opportunistic sensor data collection (OSDC) in scenarios where a sensor node cannot communicate with infrastructure nodes. In such cases, a mobile entity (e.g., a pedestrian or a vehicle) equipped with a BLE-enabled device can collect the data obtained by the sensor node when both are within direct communication range. In this paper, we characterize, both analytically and experimentally, the performance and trade-offs of BLE as a technology for OSDC, for the two main identified approaches, and considering the impact of its most crucial configuration parameters. Results show that a BLE sensor node running on a coin cell battery can achieve a lifetime beyond one year while transferring around 10 Mbit/day, in realistic OSDC scenarios. Full article
(This article belongs to the Section Sensor Networks)
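A back-of-the-envelope version of the advertisement-based energy budget analyzed in the paper: the average current is roughly the charge per advertising event divided by advInterval plus the sleep current, and the lifetime follows from the battery capacity. All numbers below are assumptions, not the paper's measured BLE121LR values.

```python
# Assumed figures for an advertisement-based sensor node (illustration only).
ADV_EVENT_CHARGE_uC = 10.0   # charge per advertising event, microcoulombs
SLEEP_CURRENT_uA = 1.0       # sleep-mode current between events, microamps
ADV_INTERVAL_S = 1.0         # advInterval
BATTERY_mAh = 230.0          # typical coin cell capacity

# Average current: event charge spread over the interval, plus sleep current.
i_avg_uA = ADV_EVENT_CHARGE_uC / ADV_INTERVAL_S + SLEEP_CURRENT_uA
lifetime_h = (BATTERY_mAh * 1000.0) / i_avg_uA   # mAh -> uAh, then divide by uA
print(f"average current {i_avg_uA:.1f} uA, lifetime about {lifetime_h / 24:.0f} days")
```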
Show Figures
Figure 1: Illustration of OSDC concept examples, where the mobile entity is a bus (left) or a pedestrian (right). The mobile entity is equipped with a BLE device and collects data from the sensor node during the contact time.
Figure 2: Illustration of the two main OSDC approaches with BLE, for the two advertisement settings in each: (a) advertisement-based approach with one advertising packet per advertising event; (b) advertisement-based approach with three advertising packets per advertising event; (c) connection-based approach with one advertising packet per advertising event; (d) connection-based approach with three advertising packets per advertising event.
Figure 3: Experimental setup for current measurements of the BLE121LR modules using an Agilent N333 power analyzer. The module on the left works as a slave that connects to the module on the right, which operates as a master.
Figure 4: Current consumption profile of an advertising event for the BLE121LR platform operating as a non-connectable advertiser. Three-advertisement (leftmost) and single-advertisement (rightmost) advertising events are shown.
Figure 5: Illustration of the variables involved in the calculation of the average current consumption in the advertisement-based approach (I_avg_adv). 'Adv Event' refers to an advertising event.
Figure 6: Current consumption profile of an advertising event for the BLE121LR platform operating as a connectable advertiser. Single-advertisement (left) and three-advertisement (right) advertising events are shown.
Figure 7: Illustration of the time variables involved in the calculation of the average current consumption in the connection-based approach (I_avg_conn).
Figure 8: Illustration of the components related to connection establishment, use and finalization. S and M denote slave and master, respectively. A round-trip exchange comprises a packet sent by the master to the slave and the response sent by the latter.
Figure 9: Average current consumption of the sensor node in the advertisement-based approach, as a function of advInterval, for both N = 1 and N = 3.
Figure 10: Average current consumption of the sensor node within connInterval (I_avg_CI) in the connection-based approach, as a function of connInterval, for BER = 0.
Figure 11: Average current consumption of the sensor node within connInterval (I_avg_CI) in the connection-based approach, as a function of connInterval, for several BER values.
Figure 12: Average current consumption of the sensor node within a T_conn period in the connection-based approach, as a function of connInterval, for N = 3 and BER = 0.
Figure 13: Average current consumption of the sensor node within a T_conn period in the connection-based approach, as a function of connInterval, for several BER values, with N = 3, advInterval = 0.02 s, and T_contact = 150 s.
Figure 14: Average current consumption of the sensor node in the connection-based approach, for a time between contacts of one day, as a function of advInterval, for different N and T_contact, and BER = 0. A theoretical value of T_contact = 0 was also evaluated, but in the logarithmic representation used in the figure the results are very close to those of T_contact = 45 s, so they are excluded for clarity.
Figure 15: Average current consumption of the sensor node in the connection-based approach, for a time between contacts of one day, with N = 1, T_contact = 45 s, and connInterval = 4 s, as a function of advInterval, for different BER values.
Figure 16: Average current consumption of the sensor node, for the advertisement-based and connection-based approaches, as a function of advInterval, for different N and T_contact values, and BER = 0.
Figure 17: Average sensor node lifetime, for the advertisement-based and connection-based approaches, as a function of advInterval, for different N, T_contact and BER values, assuming a time between contacts of one day. For connection-based results, connInterval = 4 s is assumed.
Figure 18: Maximum amount of collected data per contact interval, for the advertisement-based and connection-based approaches, as a function of advInterval, for different T_contact values. Only curves for connInterval = 4 s are shown, for clarity.
Figure 19: Maximum amount of collected data per contact interval, for the advertisement-based and connection-based approaches, as a function of advInterval, for different BER values, with T_contact = 150 s and connInterval = 4 s.
Figure 20: Influence of connInterval on the maximum amount of collected data per contact interval, for the connection-based approach, for different T_contact and advInterval values.
Figure 21: Influence of connInterval on the maximum amount of collected data per contact interval, for the connection-based approach, for different BER values, with advInterval = 0.02 s and T_contact = 150 s.
Figure 22: Energy cost for the advertisement-based and connection-based approaches as a function of advInterval, for different T_contact and N values, assuming connInterval = 4 s and a time between contacts of one day.
Figure 23: Maximum measured amount of collected data per contact interval, as a function of connInterval, for T_contact values of 45 s and 150 s.
Figure 24: Number of round-trip exchanges measured per connInterval, as a function of connInterval.
Figure 25: Measured amount of collected data as a function of the distance between sender and receiver, in the university campus and beach scenarios.
2601 KiB  
Review
Gas Sensors Based on Polymer Field-Effect Transistors
by Aifeng Lv, Yong Pan and Lifeng Chi
Sensors 2017, 17(1), 213; https://doi.org/10.3390/s17010213 - 22 Jan 2017
Cited by 82 | Viewed by 11494
Abstract
This review focuses on polymer field-effect transistor (PFET) based gas sensors, in which a polymer serves as the sensing layer that interacts with the gas analyte and thus induces a change of the source-drain current (ΔISD). Depending on whether the sensing layer is the semiconducting polymer, the dielectric layer or a conducting polymer gate, PFET sensors can be subdivided into three types. For each type of sensor, we present the molecular structure of the sensing polymer, the gas analyte and the sensing performance. Most importantly, we summarize the various analyte–polymer interactions, which help to explain the sensing mechanisms of PFET sensors and suggest possible approaches for future sensor fabrication. Full article
(This article belongs to the Special Issue Gas Nanosensors)
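The quantity the review tracks, ΔISD, is the change of source-drain current upon analyte exposure. A trivial sketch of the relative response, with assumed currents; the sign depends on whether the analyte dopes or de-dopes the polymer.

```python
def relative_response(i_sd_baseline_A, i_sd_exposed_A):
    """Relative change of the source-drain current on analyte exposure."""
    return (i_sd_exposed_A - i_sd_baseline_A) / i_sd_baseline_A

# hypothetical example: 1.00 uA baseline dropping to 0.82 uA on NH3 exposure
print(f"{relative_response(1.0e-6, 0.82e-6):+.1%}")
```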
Show Figures
Figure 1: Four types of OFET device structure: (a) bottom gate/bottom contact; (b) bottom gate/top contact; (c) top gate/top contact; and (d) top gate/bottom contact. Reprinted with permission from Ref. [40]. Copyright (2012) American Chemical Society.
Figure 2: (a) Schematic structure of the OFET sensor with a poly(3-hexylthiophene)/polystyrene (P3HT/PS) blended film; and (b) transfer curve of the P3HT/PS film with the best mixing ratio of 1:4 by weight. Reprinted with permission from Ref. [22]. Copyright (2016) Elsevier.
Figure 3: Conceptual framework for physical discrimination. Some analyte molecules in the ambient atmosphere adsorb onto the surface of the polythiophene film at x = 0 and diffuse into the film toward the substrate, eventually interacting with the semiconductor layer at the dielectric–semiconductor interface. Reprinted with permission from Ref. [27]. Copyright (2010) Elsevier.
Figure 4: Output curves of PTA-OMe OFETs: (a) before NO2 exposure; and (b) after NO2 exposure. The insets show saturated transfer curves of the corresponding OFETs in semilog plots. Reprinted with permission from Ref. [28]. Copyright (2007) Wiley-VCH.
Figure 5: The supposed sensing process of H2S in a film of the polymer PSFDTBT. Both adsorption and desorption speeds of H2S molecules increase when the polymer films become thinner; however, the desorption speed (ν_d,n; n = 25, 20, 15, and 5 nm) increases much faster than the adsorption speed (ν_a,n). The length of the arrows denotes the level of the speed.
Figure 6: Response of five polymers toward twelve gas analytes in the sensor array. The baseline is established through operation in ambient conditions, and the sensor response is measured after exposure to 40 ppm of each analyte for 2 min. Reprinted with permission from Ref. [39]. Copyright (2006) American Institute of Physics.
Scheme 1: Reported molecular structures of semiconducting polymers used in the OFET sensors.
Scheme 2: Formation of a linkage-type structure between NH3 and P3HT.
Scheme 3: Proposed interaction between NO2 and the polymer semiconductor PTA-OMe.
Scheme 4: Chemical structures of: (a) the compounds CuTPPS (CuII tetraphenylporphyrin) and ADB (copolymer of diethynyl-pentiptycene and dibenzyl-ProDOT); (b) non-explosive and (c) explosive nitro-based gas analytes; and (d) 2,3-dimethyl-2,3-dinitrobutane (DMNB).
Scheme 5: (a) Proposed sensing mechanism with a siloxane (T-SA) modified dielectric as the sensing layer; and (b) chemical structures of ROMP-dielectric materials: (left) statistical co-polymer with eosin Y as the NH3-sensitive group (m = 300, n = 3), and (right) statistical co-polymer with 2,7-dichlorofluorescein as the NH3-sensitive group (m = 300, n = 3).
954 KiB  
Article
Fabrication Technology and Characteristics of a Magnetic Sensitive Transistor with nc-Si:H/c-Si Heterojunction
by Xiaofeng Zhao, Baozeng Li and Dianzhong Wen
Sensors 2017, 17(1), 212; https://doi.org/10.3390/s17010212 - 22 Jan 2017
Cited by 7 | Viewed by 5485
Abstract
This paper presents a magnetically sensitive transistor using an nc-Si:H/c-Si heterojunction as the emitter junction. By adopting micro-electro-mechanical systems (MEMS) technology and the chemical vapor deposition (CVD) method, nc-Si:H/c-Si heterojunction silicon magnetically sensitive transistor (HSMST) chips were designed and fabricated on a p-type <100>-oriented, double-side polished silicon wafer with high resistivity. In addition, a collector load resistor (R_L) was integrated on the chip to convert the collector current (I_C) into a collector output voltage (V_out). When I_B = 8.0 mA, V_DD = 10.0 V, and R_L = 4.1 kΩ, the magnetic sensitivity (S_V) at room temperature and the temperature coefficient (α_C) of the collector current of the HSMST were 181 mV/T and −0.11%/°C, respectively. The experimental results show that the magnetic sensitivity and temperature characteristics of the proposed transistor are markedly improved by using an nc-Si:H/c-Si heterojunction as the emitter junction. Full article
(This article belongs to the Special Issue MEMS and Nano-Sensors)
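The voltage sensitivity S_V reported above is simply the slope of the V_out-versus-B curve. A tiny sketch with two hypothetical operating points chosen so that the slope reproduces the reported 181 mV/T; the output voltages themselves are assumed.

```python
# Hypothetical outputs read off a V_out-vs-B curve (volts) at two field values (tesla).
v_out_volts = [2.500, 2.681]
b_tesla = [0.0, 1.0]

# S_V = dV_out / dB
s_v = (v_out_volts[1] - v_out_volts[0]) / (b_tesla[1] - b_tesla[0])
print(f"S_V = {s_v * 1e3:.0f} mV/T")
```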
Show Figures
Figure 1: Basic structure and equivalent circuit of the heterojunction silicon magnetically sensitive transistor (HSMST): (a) basic structure; (b) equivalent circuit.
Figure 2: Transistor band diagrams: (a) band diagram of the homojunction transistor; (b) band diagram of the heterojunction transistor; (c) band diagram of the HSMST with a long base region.
Figure 3: The working principle of the magnetically sensitive transistor (MST) under different B: (a) B = 0 T; (b) B = 0 T; (c) B > 0 T; (d) B < 0 T. B: base; C: collector; E: emitter.
Figure 4: Main fabrication process of the integrated chip: (a) cleaning the wafer; (b) first photolithography; (c) forming the collector load resistor and second photolithography; (d) making the collector and third photolithography; (e) fabricating the base region and fourth photolithography; (f) fabricating the emitter; (g) fifth photolithography as a pin hole; (h) sixth photolithography to form the electrodes.
Figure 5: Packaging photograph of the integrated HSMST chip.
Figure 6: I_C–V_CE characteristic curves of the HSMST.
Figure 7: Temperature characteristics of the magnetically sensitive transistor with nc-Si:H/c-Si heterojunction: (a) I_B = 1.0 mA; (b) I_B = 5.0 mA; (c) I_B = 8.0 mA.
Figure 8: The relationship curve between α_C and I_B.
Figure 9: Testing system of the magnetic field sensor.
Figure 10: I_C–V_CE characteristics of the HSMST under different B.
Figure 11: The relationship curve between S_C and I_B.
Figure 12: The magnetic characteristic curves of the HSMST chip: (a) between V_out and B; (b) between ΔV and B.
Figure 13: The relationship curve between S_V and I_B.
1944 KiB  
Article
Evaluation of Commercial Self-Monitoring Devices for Clinical Purposes: Results from the Future Patient Trial, Phase I
by Soren Leth, John Hansen, Olav W. Nielsen and Birthe Dinesen
Sensors 2017, 17(1), 211; https://doi.org/10.3390/s17010211 - 22 Jan 2017
Cited by 55 | Viewed by 10527
Abstract
Commercial self-monitoring devices are becoming increasingly popular, and over the last decade, the use of self-monitoring technology has spread widely in both consumer and medical markets. The purpose of this study was to evaluate five commercially available self-monitoring devices for further testing in clinical applications. Four activity trackers and one sleep tracker were evaluated based on step count validity and heart rate validity. Methods: The study enrolled 22 healthy volunteers in a walking test. Volunteers walked a 100 m track at 2 km/h and 3.5 km/h. Steps were measured by four activity trackers and compared to gyroscope readings. Two trackers were also tested on nine subjects by comparing pulse readings to Holter monitoring. Results: The lowest average systematic error in the walking tests was −0.2%, recorded on the Garmin Vivofit 2 at 3.5 km/h; the highest was that of the Fitbit Charge HR at 2 km/h, with an error margin of 26.8%. Comparisons of pulse measurements from the Fitbit Charge HR revealed an error margin of −3.42% ± 7.99% compared to the electrocardiogram. The Beddit sleep tracker measured a systematic error of −3.27% ± 4.60%. Conclusion: The measured results revealed the current functionality and limitations of the five self-tracking devices, and point towards a need for future research in this area. Full article
(This article belongs to the Special Issue Sensing Technology for Healthcare System)
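The step-count validity figures above are the systematic error and spread of per-subject percentage errors against the reference (gyroscope or Holter ECG). A minimal sketch of that computation with hypothetical counts, not the study's data:

```python
import numpy as np

def percentage_errors(device_counts, reference_counts):
    """Per-subject percentage error of a tracker against the reference, plus the
    systematic error and its spread (mean +/- SD) as reported in such studies."""
    device = np.asarray(device_counts, dtype=float)
    reference = np.asarray(reference_counts, dtype=float)
    errors = 100.0 * (device - reference) / reference
    return errors.mean(), errors.std(ddof=1)

# hypothetical step counts for 5 subjects (device vs. gyroscope reference)
print(percentage_errors([102, 95, 130, 88, 110], [100, 100, 125, 95, 108]))
```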
Show Figures
Figure 1: The two figures illustrate the setup used in the walking tests: (a) a drawing of the track used in the walking tests together with the 15 m practice line; trackers were applied as shown in (b). By walking in front of the subjects, the researcher could maintain the proper pace.
Figure 2: A typical sample of the Shimmer 3 data at a walking speed of 2 km/h, and the corresponding peaks found by the algorithm.
Figure 3: Results from the walking tests, with plots showing the difference between the steps measured by the Garmin Vivofit 2 and the Shimmer 3: (a) the percentage error of the 3.5 km/h walking test is −0.2 ± 14.2; (b) the percentage error of the 2 km/h walking test is −5.3 ± 30.4.
Figure 4: Results from the walking tests, with plots showing the difference between the steps measured by the Fitbit Charge HR and the Shimmer 3: (a) the percentage error of the 3.5 km/h walking test is −0.7 ± 7.5; (b) the percentage error of the 2 km/h walking test is 26.8 ± 32.0.
Figure 5: Results from the walking tests, with plots showing the difference between the steps measured by the Fitbit One and the Shimmer 3: (a) the percentage error of the 3.5 km/h walking test is −1.5 ± 0.9; (b) the percentage error of the 2 km/h walking test is −7.8 ± 10.5.
Figure 6: Results from the walking tests, with plots showing the difference between the steps measured by the Fitbit Zip and the Shimmer 3: (a) the percentage error of the 3.5 km/h walking test is −1.1 ± 5.8; (b) the percentage error of the 2 km/h walking test is −22.9 ± 33.3.
Figure 7: Results from the heart rate study: the plot shows percentage differences in the average pulse calculated from the Fitbit Charge HR and the Pan-Tompkins-like algorithm.
Figure 8: Results from the heart rate study: the plot shows differences in the average pulse calculated from the Beddit sleep tracker and the Pan-Tompkins-like algorithm.
Figure 9: Results from the heart rate study: the plot shows percentage differences in the pulse measured by the Fitbit Charge HR and the Beddit sleep tracker.
590 KiB  
Article
Competitive Swarm Optimizer Based Gateway Deployment Algorithm in Cyber-Physical Systems
by Shuqiang Huang and Ming Tao
Sensors 2017, 17(1), 209; https://doi.org/10.3390/s17010209 - 22 Jan 2017
Cited by 7 | Viewed by 5597
Abstract
Wireless sensor network topology optimization is a highly important issue, and topology control through node selection can improve the efficiency of data forwarding while saving energy and prolonging the lifetime of the network. To address the problem of connecting a wireless sensor network to the Internet in cyber-physical systems, we propose a geometric gateway deployment method based on the competitive swarm optimizer algorithm. The particle swarm optimization (PSO) algorithm has a continuous search feature in the solution space, which makes it suitable for finding the geometric center for gateway deployment; however, its search mechanism is limited to the individual optimum (pbest) and the population optimum (gbest), so it easily falls into local optima. To improve the particle search mechanism and enhance the search efficiency of the algorithm, we introduce the competitive swarm optimizer (CSO) algorithm. The CSO search algorithm is based on an inter-particle competition mechanism and can effectively prevent the population from falling into a local optimum. With the addition of an adaptive opposition-based search and dynamic parameter adjustment, the algorithm maintains the diversity of the entire swarm while solving geometric K-center gateway deployment problems. The simulation results show that the CSO algorithm has good global exploration ability and convergence speed and can improve the network quality of service (QoS) level of cyber-physical systems by obtaining a minimum network coverage radius. We also find that the CSO algorithm is more stable, robust and effective in solving the geometric gateway deployment problem than the PSO or K-medoids algorithms. Full article
(This article belongs to the Special Issue New Paradigms in Cyber-Physical Social Sensing)
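A minimal competitive swarm optimizer sketch for the single-gateway (geometric 1-center) case: particles compete in random pairs, winners survive unchanged, and losers learn from the winner and the swarm mean. This is a simplified illustration, not the paper's full K-center formulation, opposition-based search, or parameter settings; swarm size, iteration count, and phi are assumed values.

```python
import numpy as np

def coverage_radius(gateway_xy, node_xy):
    """Objective: the radius a gateway at gateway_xy needs to cover every node."""
    return np.linalg.norm(node_xy - gateway_xy, axis=1).max()

def cso_1center(node_xy, swarm=60, iters=200, phi=0.1, seed=0):
    """Minimal competitive swarm optimizer for the geometric 1-center problem."""
    rng = np.random.default_rng(seed)
    lo, hi = node_xy.min(axis=0), node_xy.max(axis=0)
    x = rng.uniform(lo, hi, size=(swarm, 2))
    v = np.zeros_like(x)
    for _ in range(iters):
        fit = np.array([coverage_radius(p, node_xy) for p in x])
        idx = rng.permutation(swarm)
        mean = x.mean(axis=0)
        for a, b in zip(idx[0::2], idx[1::2]):
            w, l = (a, b) if fit[a] <= fit[b] else (b, a)   # winner, loser
            r1, r2, r3 = rng.random((3, 2))
            v[l] = r1 * v[l] + r2 * (x[w] - x[l]) + phi * r3 * (mean - x[l])
            x[l] = np.clip(x[l] + v[l], lo, hi)             # loser learns and moves
    best = min(x, key=lambda p: coverage_radius(p, node_xy))
    return best, coverage_radius(best, node_xy)

nodes = np.random.default_rng(1).uniform(0, 100, size=(50, 2))
print(cso_1center(nodes))
```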
Show Figures
Figure 1: WMN gateway deployment using CPS.
Figure 2: Gateway deployment of a 7-node network: (A) original network; (B) gateway deployed on a node; (C) gateway deployed not on a node.
Figure 3: Deployment diagram of two gateways: (A) gateways deployed on nodes; (B) gateways deployed not on nodes.
Figure 4: Search mechanism of the competitive swarm optimizer (CSO) algorithm.
Figure 5: Gateway deployment at different network scales: (A) 50 nodes; (B) 200 nodes; (C) 600 nodes; (D) 1000 nodes.
Figure 6: Convergence of the three algorithms at different network scales: (A) 50 nodes; (B) 200 nodes; (C) 600 nodes; (D) 1000 nodes.
Figure 7: Optimization of the three algorithms for networks of different sizes.
Figure 8: Number of hops of the three algorithms at different network scales.
917 KiB  
Article
Object Detection and Classification by Decision-Level Fusion for Intelligent Vehicle Systems
by Sang-Il Oh and Hang-Bong Kang
Sensors 2017, 17(1), 207; https://doi.org/10.3390/s17010207 - 22 Jan 2017
Cited by 93 | Viewed by 11330
Abstract
To understand driving environments effectively, sensor-based intelligent vehicle systems must achieve accurate detection and classification of objects. Object detection localizes objects, whereas object classification recognizes object classes from the detected object regions. For accurate object detection and classification, fusing multiple sensors into a key component of the representation and perception processes is necessary. In this paper, we propose a new object-detection and classification method using decision-level fusion. We fuse the classification outputs from independent unary classifiers operating on 3D point clouds and image data using a convolutional neural network (CNN). The unary classifiers for the two sensors are five-layer CNNs that use more than two pre-trained convolutional layers to consider local-to-global features as the data representation. To represent data using convolutional layers, we apply region of interest (ROI) pooling to the outputs of each layer on the object candidate regions generated using object proposal generation, which realizes color flattening and semantic grouping for the charge-coupled device (CCD) and Light Detection And Ranging (LiDAR) sensors. We evaluate our proposed method on the KITTI benchmark dataset for three object classes: cars, pedestrians and cyclists. The evaluation results show that the proposed method achieves better performance than previous methods. Our proposed method extracts approximately 500 proposals on a 1226 × 370 image, whereas the original selective search method extracts approximately 10^6 × n proposals. We obtained a classification performance of 77.72% mean average precision over all classes at the moderate detection level of the KITTI benchmark dataset. Full article
(This article belongs to the Special Issue Sensors for Transportation)
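Decision-level fusion combines the per-class outputs of the CCD and LiDAR unary classifiers. The paper learns this fusion with a small network; the sketch below uses fixed weights purely to illustrate the late-fusion idea, with hypothetical class scores.

```python
import numpy as np

def fuse_decisions(p_ccd, p_lidar, w_ccd=0.5, w_lidar=0.5):
    """Late (decision-level) fusion of per-class scores from two unary
    classifiers by a weighted average, renormalized to sum to one."""
    p_ccd, p_lidar = np.asarray(p_ccd), np.asarray(p_lidar)
    fused = w_ccd * p_ccd + w_lidar * p_lidar
    return fused / fused.sum()

# class order: car, pedestrian, cyclist, background (scores are hypothetical)
print(fuse_decisions([0.70, 0.10, 0.05, 0.15], [0.55, 0.20, 0.10, 0.15]))
```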
Show Figures
Figure 1: Overview of our work. Red arrows denote the processing of the unary classifier for each sensor, and green arrows denote the fusion processing.
Figure 2: Procedure from pre-processing to semantic grouping on CCD image data: (a) input image data; (b) color-flattened image; (c) segmented image using the graph-segmentation method; (d) semantic grouping using the dissimilarity cost function.
Figure 3: Segment generation on 3D point clouds: (a) 2D occupancy grid mapping results; and (b) segmentation result on 3D point clouds.
Figure 4: Proposed network architecture as unary classifiers.
Figure 5: Architecture of the fusion network. Bbox denotes bounding box (Section 6.2).
Figure 6: Qualitative results of our proposed method, with classification results projected onto the image data: (a) results of the CCD unary classifier; (b) results of the LiDAR unary classifier; (c) results of model_17; (d) results of the proposed method. Yellow boxes: correctly detected and classified objects; red boxes: failures; green boxes: undetected objects.
3526 KiB  
Article
A Low Frequency FBG Accelerometer with Symmetrical Bended Spring Plates
by Fufei Liu, Yutang Dai, Joseph Muna Karanja and Minghong Yang
Sensors 2017, 17(1), 206; https://doi.org/10.3390/s17010206 - 22 Jan 2017
Cited by 46 | Viewed by 6939
Abstract
To meet the requirements of low-frequency vibration monitoring, a new type of FBG (fiber Bragg grating) accelerometer with bended spring plates is proposed. Two symmetrical bended spring plates are used as the elastic elements; under an exciting vibration they drive the FBG to produce axial strains that are equal in magnitude but opposite in direction, thereby doubling the wavelength shift of the FBG. The mechanics model and a numerical method are presented in this paper, with which the influence of the structural parameters on the sensitivity and the eigenfrequency is discussed. The test results show that the sensitivity of the accelerometer is more than 1000 pm/g within the 0.7–20 Hz frequency range. Full article
(This article belongs to the Section Physical Sensors)
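Given the reported sensitivity of more than 1000 pm/g in the 0.7–20 Hz band, converting a measured Bragg wavelength shift into acceleration is a one-line calculation. The sketch below assumes a nominal sensitivity of exactly 1000 pm/g, which is an illustrative assumption rather than a calibrated value.

```python
# Minimal sketch: convert an FBG wavelength shift into acceleration using the
# reported low-frequency sensitivity (> 1000 pm/g in the 0.7-20 Hz band).
SENSITIVITY_PM_PER_G = 1000.0   # assumed nominal value from the abstract
G = 9.81                        # m/s^2

def acceleration_from_shift(delta_lambda_pm: float) -> float:
    """Return acceleration in m/s^2 for a measured Bragg wavelength shift in pm."""
    return (delta_lambda_pm / SENSITIVITY_PM_PER_G) * G

# Example: a 250 pm peak shift corresponds to roughly 0.25 g.
print(round(acceleration_from_shift(250.0), 3), "m/s^2")
```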
Show Figures

Figure 1. Schematic illustration of the accelerometer.
Figure 2. Simplified structure diagram of the sensor and the meaning of each symbol. x, y: horizontal and vertical directions; F: force at point B; h: height of the CD segment; r: radius of the bended segment; k: length of the horizontal segment; m: mass; c, d: length and height of the mass; a: exciting acceleration; dθ: infinitesimal angle.
Figure 3. Influence of the major parameters on the sensitivity and eigenfrequency: (a) thickness t; (b) width b; (c) length c; (d) height h.
Figure 4. Experimental setup of the acceleration sensing system.
Figure 5. Frequency response of the accelerometer under an acceleration of 1 m/s².
Figure 6. Time-domain waveform of the sensor at 5 Hz (a) and 10 Hz (b).
Figure 7. Responses of the sensor under different excitation accelerations.
Figure 8. Cross-sensitivity of the sensor: response under an excitation acceleration of 1 m/s² at a frequency of 20 Hz.
Figure 9. Vibration time-domain waveform with manual simulation.
6983 KiB  
Article
Evaluation on Radiometric Capability of Chinese Optical Satellite Sensors
by Aixia Yang, Bo Zhong, Shanlong Wu and Qinhuo Liu
Sensors 2017, 17(1), 204; https://doi.org/10.3390/s17010204 - 22 Jan 2017
Cited by 12 | Viewed by 4944
Abstract
The radiometric capability of on-orbit sensors should be updated regularly because of changes induced by space environmental factors and instrument aging. Some sensors, such as the Moderate Resolution Imaging Spectroradiometer (MODIS), have onboard calibrators that enable real-time calibration. However, most Chinese remote sensing satellite sensors lack onboard calibrators; their radiometric calibrations have been updated only once a year using a vicarious calibration procedure, which has limited the applications of the data. Therefore, a full evaluation of these sensors' radiometric capabilities is essential before quantitative applications can be made. In this study, a comprehensive procedure for evaluating the radiometric capability of several Chinese optical satellite sensors is proposed. In this procedure, long-term radiometric stability and radiometric accuracy are the two major evaluation indicators. The radiometric temporal stability is analyzed from the tendency of the long-term top-of-atmosphere (TOA) reflectance variation, and the radiometric accuracy is determined by comparison with the TOA reflectance from MODIS after spectral matching. Three Chinese sensors, including the Charge-Coupled Device (CCD) camera onboard the Huan Jing 1 satellite (HJ-1) as well as the Visible and Infrared Radiometer (VIRR) and Medium-Resolution Spectral Imager (MERSI) onboard the Feng Yun 3 satellite (FY-3), are evaluated in the reflective bands using this procedure. The results are reasonable and thus provide a reliable reference for applications of these sensors, which will promote the use of Chinese satellite data. Full article
(This article belongs to the Special Issue Sensors and Smart Sensing of Agricultural Land Systems)
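The long-term stability indicator described in the abstract amounts to fitting a trend to a time series of TOA reflectance over the calibration site. A minimal sketch is shown below; the yearly reflectance values are invented for illustration.

```python
# Minimal sketch of the stability check: fit a linear trend to a time series
# of top-of-atmosphere (TOA) reflectance over the calibration site.
def linear_trend(times, values):
    """Ordinary least-squares slope and intercept."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    slope = sum((t - mt) * (v - mv) for t, v in zip(times, values)) / \
            sum((t - mt) ** 2 for t in times)
    return slope, mv - slope * mt

years = [2008, 2009, 2010, 2011, 2012]
toa_red = [0.231, 0.229, 0.226, 0.222, 0.219]      # hypothetical red-band TOA
slope, _ = linear_trend(years, toa_red)
mean_r = sum(toa_red) / len(toa_red)
print(f"trend: {slope:+.4f} per year ({100 * slope / mean_r:+.2f} %/year)")
```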
Show Figures

Figure 1. Location of the calibration site and a true color composite from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery.
Figure 2. Block diagram of the framework for radiometric capability evaluation.
Figure 3. Relative spectral responses of the CCDs and MODIS in the VNIR bands.
Figure 4. Relative spectral responses of the VIRRs and MODIS in the VNIR bands.
Figure 5. Relative spectral responses of the MERSIs and MODIS in the VNIR bands.
Figure 6. Time series of TOA reflectance of MODIS in the reflective bands; the colored lines are the trend lines of the bands: (a) Terra/MODIS; (b) Aqua/MODIS.
Figure 7. Directional characterization of the calibration site in the red band of Terra/MODIS. The reflectance varies with relative azimuth and solar zenith angle for each bin of view zenith angle, showing a systematic directional effect of about 15%.
Figure 8. Time series of TOA reflectance of the HJ-1/CCDs from 2008 to 2012 in the reflective bands: (a) HJ-1A/CCD1; (b) HJ-1A/CCD2; (c) HJ-1B/CCD1; (d) HJ-1B/CCD2.
Figure 9. Time series of TOA reflectance of the MERSIs in the reflective bands: (a) FY-3A/MERSI from 2008 to 2015; (b) FY-3B/MERSI from 2010 to 2015.
Figure 10. Time series of TOA reflectance of the VIRRs in the reflective bands: (a) FY-3A/VIRR from 2008 to 2015; (b) FY-3B/VIRR from 2010 to 2015. The colored lines separate different calibration stages.
3559 KiB  
Review
Imaging and Force Recognition of Single Molecular Behaviors Using Atomic Force Microscopy
by Mi Li, Dan Dang, Lianqing Liu, Ning Xi and Yuechao Wang
Sensors 2017, 17(1), 200; https://doi.org/10.3390/s17010200 - 22 Jan 2017
Cited by 28 | Viewed by 10171
Abstract
The advent of atomic force microscopy (AFM) has provided a powerful tool for investigating the behaviors of single native biological molecules under physiological conditions. AFM can not only image the conformational changes of single biological molecules at work with sub-nanometer resolution, but also sense the specific interactions of individual molecular pairs with piconewton force sensitivity. In the past decade, the performance of AFM has been greatly improved, and it is now widely used in biology to address diverse biomedical issues. Characterizing the behaviors of single molecules by AFM provides considerable novel insight into the underlying mechanisms guiding life activities, contributing much to cell and molecular biology. In this article, we review recent developments in AFM single-molecule assays. The related techniques are first presented, and then progress in several aspects (including molecular imaging, molecular mechanics, molecular recognition, and molecular activities on the cell surface) is summarized. The challenges and future directions are also discussed. Full article
(This article belongs to the Special Issue Single-Molecule Sensing)
Show Figures

Figure 1. Typical AFM single-molecule techniques: (A) principle of AFM; (B) PFT multiparametric AFM imaging; (C) structure of a high-speed AFM scanner and a small cantilever for high-speed AFM; (D) mechanical unfolding of the membrane protein FhuA embedded in a lipid bilayer; (E) probing single receptors on the cell surface with a functionalized tip; (F) TREC imaging.
Figure 2. Imaging the static and dynamic structures of individual biological molecules under physiological conditions by AFM: (A–D) static imaging of an IgG antibody molecule, a bacterial microcompartment protein, nuclear pore complexes (NPCs) and DNA; (E–G) dynamic imaging of a rotary motor molecule, a walking myosin molecule and DNA after drug stimulation.
Figure 3. Unfolding single proteins by AFM: (A) titin molecule; (B) bacteriorhodopsin membrane protein; (C) membrane protein on live cells; (D) high-speed unfolding of a titin molecule.
Figure 4. Detecting and recognizing individual receptor–ligand events by AFM: (A) biotin–avidin; (B) antibody–antigen; (C) TREC imaging of nucleosomes on mica; (D) TREC imaging of proteins reconstituted in a lipid bilayer.
Figure 5. Detecting individual receptors on the cell surface: (A) recognition of SGLT1 on intact cells with an antibody-coated tip; (B) formation and propagation of Als5p nanodomains; (C) location of GnRH-Rs on the T24 cell surface by TREC imaging; (D) bacteriophage extrusion localized into soft nanodomains detected by PFT imaging; (E) molecular recognition on primary tumor cells from clinical lymphoma patients.
1123 KiB  
Article
Photoacoustic Spectroscopy for the Determination of Lung Cancer Biomarkers—A Preliminary Investigation
by Yannick Saalberg, Henry Bruhns and Marcus Wolff
Sensors 2017, 17(1), 210; https://doi.org/10.3390/s17010210 - 21 Jan 2017
Cited by 19 | Viewed by 8729
Abstract
With 1.6 million deaths per year, lung cancer is one of the leading causes of death worldwide. One reason for this high number is the absence of a preventive medical examination method; many diagnoses occur at a late cancer stage with a low survival rate. Early detection could significantly decrease mortality. In recent decades, certain substances in human breath have been linked to certain diseases, and different studies show that it is possible to distinguish between lung cancer patients and a healthy control group by analyzing the volatile organic compounds (VOCs) in their breath. We developed a sensor based on photoacoustic spectroscopy for six of the most relevant VOCs linked to lung cancer. As a radiation source, the sensor uses an optical parametric oscillator (OPO) in the wavelength region from 3.2 µm to 3.5 µm. The limits of detection for a single substance range between 5 ppb and 142 ppb. We also measured high-resolution absorption spectra of the biomarkers and compared them to the data currently available from the National Institute of Standards and Technology (NIST) database, since such spectra are the basis of any selective spectroscopic detection. Future lung cancer screening devices could be based on further development of this sensor. Full article
(This article belongs to the Special Issue Gas Sensors for Health Care and Medical Applications)
Show Figures

Figure 1. Experimental setup of the photoacoustic sensor.
Figure 2. Number of occurrences of optical parametric oscillator wavelength step sizes between 3.2 µm and 3.5 µm.
Figure 3. Number of occurrences of the optical parametric oscillator output power between 3.2 µm and 3.5 µm.
Figure 4. Biomarker spectra (blue: measurement; red: NIST; yellow: PNNL) measured at a concentration of 100 ppm in nitrogen at atmospheric conditions (294 K, 1024 hPa).
2359 KiB  
Article
Effective Calibration of Low-Cost Soil Water Content Sensors
by Heye Reemt Bogena, Johan Alexander Huisman, Bernd Schilling, Ansgar Weuthen and Harry Vereecken
Sensors 2017, 17(1), 208; https://doi.org/10.3390/s17010208 - 21 Jan 2017
Cited by 90 | Viewed by 14369
Abstract
Soil water content is a key variable for understanding and modelling ecohydrological processes. Low-cost electromagnetic sensors are increasingly being used to characterize the spatio-temporal dynamics of soil water content, despite the reduced accuracy of such sensors compared to reference electromagnetic soil water content sensing methods such as time domain reflectometry. Here, we present an effective calibration method to improve the measurement accuracy of low-cost soil water content sensors, taking the recently developed SMT100 sensor (Truebner GmbH, Neustadt, Germany) as an example. We calibrated the output of more than 700 SMT100 sensors to permittivity using a standard procedure based on five reference media with known apparent dielectric permittivity (1 < Ka < 34.8). Our results showed that a sensor-specific calibration improved the accuracy compared to a single "universal" calibration. The additional effort of calibrating each sensor individually is reduced by a dedicated calibration setup that enables large numbers of sensors to be calibrated in a limited time while minimizing errors in the calibration process. Full article
(This article belongs to the Collection Sensors in Agriculture and Forestry)
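A sensor-specific calibration of the kind described here can be pictured as fitting a low-order polynomial from the raw sensor output to the permittivity of the five reference media, and then converting permittivity to water content (the Topp equation is used below, as in the paper's comparisons). The raw counts, polynomial order and fitting routine in this sketch are assumptions for illustration, not the paper's calibration function.

```python
# Minimal sketch of a sensor-specific calibration: fit a third-order polynomial
# mapping raw SMT100 output to apparent permittivity for one sensor, using five
# reference media of known permittivity. All numeric values are invented.
import numpy as np

ref_permittivity = np.array([1.0, 5.0, 12.0, 24.0, 34.8])   # reference media Ka
raw_counts = np.array([0.12, 0.31, 0.47, 0.61, 0.70])       # normalized output

coeffs = np.polyfit(raw_counts, ref_permittivity, deg=3)     # sensor-specific fit
calibrate = np.poly1d(coeffs)

def topp(ka):
    """Topp (1980) equation: apparent permittivity to volumetric water content."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

ka = calibrate(0.55)
print(f"Ka = {ka:.1f}, SWC = {topp(ka):.3f} m3/m3")
```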
Show Figures

Figure 1. The SMT100 soil water content sensor.
Figure 2. Schematic drawing of the calibration station.
Figure 3. The container with glass beads (M2) on top of the vibration machine.
Figure 4. Average temperature and soil water content response of three SMT100 sensors. Soil water content was derived from the measured apparent permittivity using the complex refraction index model (CRIM), and the temperature correction used the temperature dependence of the dielectric permittivity of water reported in [43].
Figure 5. Results of the compaction experiment with the glass bead packing (left and right panels present the raw sensor output and the equivalent SWC calculated with the Topp equation [36], respectively).
Figure 6. Alteration of the reference liquids (M3–M5) during the calibration of 500 SMT100 sensors (left and right panels present permittivity and equivalent SWC calculated with the Topp equation [36], respectively). The values represent deviations from the permittivity of the fresh reference media, derived from the average response of two SMT100 sensors that were repeatedly used to measure the permittivity during the calibration of 500 sensors (i.e., after every 100 calibrations). The results for M2 (glass beads) are also shown for comparison.
Figure 7. Sensor-specific calibration curves fitted to the sensor response measurements of each of the 701 SMT100 sensors, as well as the single calibration curve fitted to the whole data set (left and right panels present permittivity and equivalent soil water content values, respectively).
7094 KiB  
Article
Ubiquitous Emergency Medical Service System Based on Wireless Biosensors, Traffic Information, and Wireless Communication Technologies: Development and Evaluation
by Tan-Hsu Tan, Munkhjargal Gochoo, Yung-Fu Chen, Jin-Jia Hu, John Y. Chiang, Ching-Su Chang, Ming-Huei Lee, Yung-Nian Hsu and Jiin-Chyr Hsu
Sensors 2017, 17(1), 202; https://doi.org/10.3390/s17010202 - 21 Jan 2017
Cited by 26 | Viewed by 8499
Abstract
This study presents a new ubiquitous emergency medical service system (UEMS) that consists of a ubiquitous tele-diagnosis interface and a traffic guiding subsystem. The UEMS addresses unresolved issues of emergency medical services by managing the sensor wires to eliminate inconvenience for both patients and paramedics in the ambulance, by providing ubiquitous access to patients' biosignals in remote areas that the ambulance cannot reach directly, and by offering real-time traffic information that helps the ambulance reach its destination in the shortest time. In the proposed system, a patient's biosignals and real-time video, acquired by wireless biosensors and a webcam, can be transmitted simultaneously to an emergency room for pre-hospital treatment via WiMax/3.5 G networks. The performance of WiMax and 3.5 G, in terms of initialization time, data rate, and average end-to-end delay, is evaluated and compared. The driver can choose the fastest of the routes suggested by Google Maps after inspecting the current traffic conditions from real-time CCTV camera streams and traffic information, and the destination address can be entered by voice for ease and safety while driving. A series of field tests validates the feasibility of the proposed system in real-life scenarios. Full article
(This article belongs to the Collection Sensors for Globalized Healthy Living and Wellbeing)
Show Figures

Figure 1. Framework of the proposed system.
Figure 2. Flowchart of operation for the client tele-diagnosis interface.
Figure 3. A snapshot of the client tele-diagnosis interface.
Figure 4. Traffic guiding subsystem. The driver tab is embedded with the GPS, a 3.5 G modem and a microphone.
Figure 5. Taipei area traffic CCTV cameras provided by the E-Traffic Center.
Figure 6. The traffic guiding system: the first route is chosen among the three routes suggested by Google Maps; the route along Civic Boulevard Road is congested.
Figure 7. An algorithm to choose the best route.
Figure 8. Response routes suggested by Google Maps and available CCTV camera icons.
Figure 9. Delivery routes suggested by Google Maps and available CCTV camera icons.
Figure 10. The second suggested route, along Jianguo Elevated Road, which is uncongested, is chosen.
7109 KiB  
Article
A Lift-Off-Tolerant Magnetic Flux Leakage Testing Method for Drill Pipes at Wellhead
by Jianbo Wu, Hui Fang, Long Li, Jie Wang, Xiaoming Huang, Yihua Kang, Yanhua Sun and Chaoqing Tang
Sensors 2017, 17(1), 201; https://doi.org/10.3390/s17010201 - 21 Jan 2017
Cited by 31 | Viewed by 9376
Abstract
To meet the great need for MFL (magnetic flux leakage) inspection of drill pipes at wellheads, a lift-off-tolerant MFL testing method is proposed and investigated in this paper. Firstly, a Helmholtz coil magnetization method and the whole MFL testing scheme are proposed. Then, based on the magnetic field focusing effect of ferrite cores, a lift-off-tolerant MFL sensor is developed and tested; it shows high sensitivity at a lift-off distance of 5.0 mm. Further, a follow-up, high-repeatability MFL probing system embedding the developed sensors is designed and manufactured. It can track the swing movement of drill pipes and allows the pipe ends to pass smoothly. Finally, the developed system is employed in a drilling field for drill pipe inspection. Test results show that the proposed method fulfills the requirements for drill pipe inspection at wellheads, which is of great importance for drill pipe safety. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1. Simulation model for the 5 in. drill pipe magnetized by the Helmholtz coil (unit: mm).
Figure 2. Axial magnetic flux density distribution in the drill pipe wall.
Figure 3. Diagram of the MFL (magnetic flux leakage) method for drill pipes at the wellhead.
Figure 4. Principle of the lift-off-tolerant MFL sensing method based on the magnetic field focusing effect: (a) MFL distribution without ferrite cores; (b) MFL distribution with a ferrite core.
Figure 5. Distributions of the MFL affected by the ferrite core at different lift-off distances.
Figure 6. Diagram of the lift-off-tolerant MFL sensor.
Figure 7. Experimental setup of the lift-off-tolerant MFL probe for drill pipes.
Figure 8. MFL signals picked up by the induction coil at different lift-off distances.
Figure 9. MFL signals picked up by the lift-off-tolerant MFL sensor at different lift-off distances.
Figure 10. Sensor array in the lift-off-tolerant MFL probe, arranged in two layers.
Figure 11. MFL signals picked up by the array of lift-off-tolerant MFL sensors along different scanning paths.
Figure 12. MFL probing system integrating eight probes in two layers.
Figure 13. Follow-up scheme for the lift-off-tolerant MFL probing system. 1: probe; 2: wheel; 3: support; 4: slide; 5: guide rail; 6: pressure string; 7: pipe end. (a) The wheel rolling up the pipe end from Position (1) to Position (2); (b) the wheel rolling down the pipe end from Position (3) to Position (4).
Figure 14. The lift-off-tolerant MFL probing system.
Figure 15. The whole MFL testing apparatus at the wellhead.
Figure 16. Drill pipe sample with four defects (unit: mm).
Figure 17. Typical MFL testing signals for the drill pipe at the wellhead.
Figure 18. Testing signal amplitudes with the defects passed along different paths.
11346 KiB  
Article
Efficient Wideband Spectrum Sensing with Maximal Spectral Efficiency for LEO Mobile Satellite Systems
by Feilong Li, Zhiqiang Li, Guangxia Li, Feihong Dong and Wei Zhang
Sensors 2017, 17(1), 193; https://doi.org/10.3390/s17010193 - 21 Jan 2017
Cited by 6 | Viewed by 5407
Abstract
The usable satellite spectrum is becoming scarce due to static spectrum allocation policies. Cognitive radio approaches have already demonstrated their potential for improving spectral efficiency by providing more spectrum access opportunities to secondary users (SUs) with sufficient protection for licensed primary users (PUs). Hence, the recent scientific literature has focused on the tradeoff between spectrum reuse and PU protection for narrowband spectrum sensing (SS) in terrestrial wireless sensing networks. However, the narrowband SS techniques investigated in the context of terrestrial CR may not be applicable to detecting wideband satellite signals. In this paper, we investigate the problem of jointly designing the sensing time and the hard fusion scheme to maximize SU spectral efficiency for low earth orbit (LEO) mobile satellite services based on wideband spectrum sensing. A compressed detection model is established to prove that there indeed exists an optimal sensing time that achieves maximal spectral efficiency. Moreover, we propose a novel wideband cooperative spectrum sensing (CSS) framework in which each SU's reporting duration can be utilized by the following SU for sensing. The sensing performance benefits from this framework because the equivalent sensing time is extended by making full use of the reporting slots. Furthermore, for time-varying channels, a spatiotemporal CSS (ST-CSS) scheme is presented to attain space and time diversity gains simultaneously under hard-decision fusion rules. Computer simulations show that the joint optimization of sensing time, hard fusion rule and scheduling strategy achieves a significant improvement in spectral efficiency. Additionally, the novel ST-CSS scheme achieves much higher spectral efficiency than the general CSS framework. Full article
(This article belongs to the Section Sensor Networks)
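The existence of an optimal sensing time can be illustrated with the classical sensing-throughput tradeoff for an energy detector: a longer sensing time lowers the false-alarm probability but leaves less of the frame for transmission. The sketch below is not the paper's compressed detection model; the frame length, sampling rate, SNR, target detection probability and other parameters are assumptions chosen for illustration.

```python
# Minimal sketch of the sensing-time / throughput tradeoff for an energy
# detector (classical sensing-throughput formulation; parameters assumed).
from math import erfc, sqrt
from statistics import NormalDist

Q = lambda x: 0.5 * erfc(x / sqrt(2))          # Gaussian tail probability
Qinv = lambda p: NormalDist().inv_cdf(1 - p)   # inverse of Q

def spectral_efficiency(tau, T=0.1, fs=6e6, snr=10**(-15/10),
                        pd_target=0.9, p_h0=0.8, c0=6.66):
    """Normalized SU throughput for sensing time tau (s) in a frame of T (s)."""
    n = tau * fs                                   # number of sensing samples
    pf = Q(sqrt(2*snr + 1) * Qinv(pd_target) + sqrt(n) * snr)
    return (T - tau) / T * (1 - pf) * p_h0 * c0

# Sweep tau to locate the throughput-maximizing sensing time.
taus = [i * 1e-3 for i in range(1, 100)]
best = max(taus, key=spectral_efficiency)
print(f"optimal sensing time ~ {best*1e3:.1f} ms, "
      f"efficiency ~ {spectral_efficiency(best):.3f} bit/s/Hz")
```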
Show Figures

Figure 1. System model of the CSS (cooperative spectrum sensing) for LEO (low earth orbit) mobile satellite services.
Figure 2. Effect of the compression ratio on p_f at several different SNR (signal-to-noise ratio) levels (p_d^th = 0.9).
Figure 3. Frame structure of periodic spectrum sensing.
Figure 4. General TDMA-based cooperative spectrum sensing framework.
Figure 5. Novel cooperative spectrum sensing frame structure.
Figure 6. Spectral efficiency C versus sensing time τ under different levels of SNR and p_d^th.
Figure 7. Spectral efficiency Ĉ versus scheduling strategy for the novel CSS scheme under the AFR (and fusion rule), OFR (or fusion rule) and MFR (majority fusion rule), respectively.
Figure 8. SU spectral efficiency Ĉ versus SNR γ for the novel ST-CSS scheme under the hard-decision fusion rules OFR, MFR and AFR, respectively.
Figure 9. SU spectral efficiency Ĉ versus Q̂_d^th for the novel ST-CSS and general CSS schemes under the decision fusion rules OFR, MFR and AFR, respectively.
Figure 10. Spectral efficiency versus number of cooperating SUs for the novel ST-CSS scheme and the general CSS scheme under different values of reporting time.
Figure 11. Comparison of SU spectral efficiency among the proposed ST-CSS, the transmission-delay-efficient design and the energy-efficient design.
Figure 12. SU spectral efficiency and energy efficiency versus the number of cooperating SUs for the novel ST-CSS scheme.
9881 KiB  
Article
Novel Isoprene Sensor for a Flu Virus Breath Monitor
by Pelagia-Irene Gouma, Lisheng Wang, Sanford R. Simon and Milutin Stanacevic
Sensors 2017, 17(1), 199; https://doi.org/10.3390/s17010199 - 20 Jan 2017
Cited by 34 | Viewed by 21605
Abstract
A common feature of the inflammatory response in patients who have actually contracted influenza is the generation of a number of volatile products of the alveolar and airway epithelium. These products include a number of volatile organic compounds (VOCs) and nitric oxide (NO). These may be used as biomarkers to detect the disease. A portable 3-sensor array microsystem-based tool that can potentially detect flu infection biomarkers is described here. Whether used in connection with in-vitro cell culture studies or as a single exhale breathalyzer, this device may be used to provide a rapid and non-invasive screening method for flu and other virus-based epidemics. Full article
(This article belongs to the Special Issue Gas Sensors for Health Care and Medical Applications)
Show Figures

Figure 1. Morphology and structure of h-WO3 powders: (a) TEM image and (b) HRTEM image (inset: SAED) of nanoparticles; (c) TEM image and (d) HRTEM image (inset: SAED) of nanorods.
Figure 2. Resistance change of h-WO3 with exposure to NO, NO2, methanol, and isoprene at 350 °C.
Figure 3. (a) A single-sensor readout circuit with a Bluetooth module; (b) a three-sensor system with integrated readout and heater control circuit as a step toward a wireless handheld multi-sensor breathalyzer.
694 KiB  
Article
SisFall: A Fall and Movement Dataset
by Angela Sucerquia, José David López and Jesús Francisco Vargas-Bonilla
Sensors 2017, 17(1), 198; https://doi.org/10.3390/s17010198 - 20 Jan 2017
Cited by 344 | Viewed by 19511
Abstract
Research on fall and movement detection with wearable devices has witnessed promising growth. However, there are few publicly available datasets, all recorded with smartphones, and they are insufficient for testing new proposals because they lack a target population, cover a limited set of activities, and provide limited information. Here, we present a dataset of falls and activities of daily living (ADLs) acquired with a self-developed device composed of two types of accelerometer and one gyroscope. It consists of 19 ADLs and 15 fall types performed by 23 young adults, 15 ADL types performed by 14 healthy and independent participants over 62 years old, and data from one 60-year-old participant who performed all ADLs and falls. These activities were selected based on a survey and a literature analysis. We test the dataset with widely used feature extraction methods and a simple-to-implement threshold-based classification, achieving up to 96% accuracy in fall detection. An individual activity analysis demonstrates that most errors are concentrated in a small number of activities, on which new approaches could focus. Finally, validation tests with elderly people significantly reduced the fall detection performance of the tested features. This confirms the findings of other authors and encourages the development of new strategies with this new dataset as the benchmark. Full article
(This article belongs to the Special Issue Wearable Biomedical Sensors)
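The threshold-based classification tested on the dataset can be sketched as computing a simple feature over a window of accelerometer samples and comparing it with a fixed threshold. The feature and threshold value below are stand-ins chosen for illustration, not the paper's C-features or its tuned thresholds.

```python
# Minimal sketch of threshold-based fall detection on tri-axial accelerometer
# samples: peak of |a| - 1 g over a window, compared with an assumed threshold.
from math import sqrt

G = 1.0                 # accelerations expressed in g
THRESHOLD = 1.8         # assumed decision threshold, in g

def peak_magnitude(window):
    """Peak of |a| - 1 g over a window of (ax, ay, az) samples in g."""
    return max(abs(sqrt(ax*ax + ay*ay + az*az) - G) for ax, ay, az in window)

def is_fall(window):
    return peak_magnitude(window) > THRESHOLD

# Example: a quiet segment versus the same segment with an impact-like spike.
adl = [(0.0, 0.0, 1.0)] * 50
fall = adl + [(1.5, -2.0, 2.5)] + adl
print(is_fall(adl), is_fall(fall))   # expected: False True
```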
Show Figures

Figure 1. Device used for acquisition. The self-developed embedded device includes two accelerometers and a gyroscope and was fixed to the waist of the participants.
Figure 2. Example of processing and classification. The features are computed after filtering the raw data: (a) ADL D11 gives C8 values below threshold T1 (horizontal red line); (b) feature C8 crosses the threshold when the fall in activity F05 is detected.
Figure 3. Accuracy obtained in validation after 10-fold cross-validation without (raw data) and with preprocessing (filtered). Features C2 and C8 achieved 95.0% and 96.1% accuracy when the filter was applied, respectively; however, not all features improved their performance after filtering.
Figure 4. Maximum value per activity obtained with C8. Most T1 threshold crossings (horizontal red line) occur in activities D04, D18 and F11.
9399 KiB  
Article
Entropy-Based Registration of Point Clouds Using Terrestrial Laser Scanning and Smartphone GPS
by Maolin Chen, Siying Wang, Mingwei Wang, Youchuan Wan and Peipei He
Sensors 2017, 17(1), 197; https://doi.org/10.3390/s17010197 - 20 Jan 2017
Cited by 10 | Viewed by 5974
Abstract
Automatic registration of terrestrial laser scanning point clouds is a crucial but unresolved topic that is of great interest in many domains. This study combines a terrestrial laser scanner with a smartphone for the coarse registration of leveled point clouds with small roll and pitch angles and height differences, which is a novel sensor combination for terrestrial laser scanning. The approximate distance between two neighboring scan positions is first calculated from smartphone GPS coordinates. Then, 2D distribution entropy is used to measure the distribution coherence between the two scans and to search for the optimal initial transformation parameters. To this end, we propose a method called Iterative Minimum Entropy (IME) that corrects the initial transformation parameters based on two criteria: the difference between the average and minimum entropy, and the deviation of the minimum entropy from the expected entropy. Finally, the presented method is evaluated on two data sets containing tens of millions of points, covering panoramic and non-panoramic, vegetation-dominated and building-dominated cases, and achieves high accuracy and efficiency. Full article
(This article belongs to the Section Remote Sensors)
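The 2D distribution entropy used to score candidate transformations can be sketched as binning the merged, projected points into a planar grid and computing the Shannon entropy of the resulting occupancy distribution; a well-aligned pair of scans concentrates points into fewer cells and therefore yields a lower entropy. The cell size and toy coordinates below are assumptions for illustration.

```python
# Minimal sketch of 2D distribution entropy as a registration quality measure.
from collections import Counter
from math import log2

def distribution_entropy(points_xy, cell=0.5):
    """Entropy (bits) of the 2D point distribution over square cells of size `cell` (m)."""
    counts = Counter((int(x // cell), int(y // cell)) for x, y in points_xy)
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Toy example: merging a scan with itself (aligned) versus with a shifted copy
# (misaligned) -- the misaligned merge spreads over more cells.
scan = [(0.1, 0.1), (0.2, 0.3), (1.1, 1.2), (1.3, 1.1), (2.2, 0.4)]
aligned = scan + scan
misaligned = scan + [(x + 1.7, y + 0.9) for x, y in scan]
print(distribution_entropy(aligned), distribution_entropy(misaligned))
```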
Show Figures

Figure 1. Relation between P and Q: (a,b) two different postures of P and Q with the same distance r, which is not enough to recover the transformation between them; (c) the uncertainty of Q's location.
Figure 2. Relation between γ and its corresponding entropy: points in (a,b) are previously registered, and the entropy is calculated while the points in (b) are rotated γ degrees around the z axis. The entropy change with γ is recorded in (c).
Figure 3. Transformation from 3D to 2D: (a) original points; (b) ground filtering; (c) clustering result with small clusters colored green; (d) filtering result (3,407,526 points); (e) projection result, with the number of points in each block rendered by the color of the block center (27,706 points).
Figure 4. Search for the minimum entropy: (a,b) the original point clouds; (c,d) the searching process; (e) the postures corresponding to the minimum entropy.
Figure 5. Search for the initial parameters: (a) the searching process for the rotation angles; (b) the searching process for the initial scan distance.
Figure 6. Two criteria for the selection of initial distance and rotation angles: (a) the entropy change when r_k = 14.2272 m, κ_p(0) = 93° and κ_q(0) = 108°, for S1 and S2 of the first data set (Section 3.1), including the calculation of H_{rk}^{A−M} corresponding to 12.2272 m; (b) how H_{rk}^{R−M} works on S3 and S4 of the second data set (Section 3.2).
Figure 7. View of data set 1 after registration and the three scan positions.
Figure 8. IME results with different d_G when t_G is fixed to 10 m: (a) mean angular error (MAE); (b) mean translation error (MTE); (c) scan distance error (Δd_S); (d) runtime (T).
Figure 9. Nadir view of the spatial distributions of S1 and S2 after IME when (a) d_G = 1 m, (b) d_G = 2.5 m, (c) d_G = 5 m and (d) d_G = 10 m, with t_G fixed to 10 m.
Figure 10. IME results with different t_G when d_G is fixed to 2.5 m: (a) mean angular error (MAE); (b) mean translation error (MTE); (c) scan distance error (Δd_S); (d) runtime (T).
Figure 11. Nadir view of the spatial distributions after IME when (a) t_G = 2.5 m, (b) t_G = 5 m, (c) t_G = 10 m and (d) t_G = 20 m, with d_G fixed to 2.5 m.
Figure 12. Distribution of the six scans.
Figure 13. IME result of S1–S6: (a) the nadir view corresponding to the minimum entropy, which is a local convergence for IME; (b) the theoretically correct posture for S1 and S2, which corresponds to the fifth smallest entropy.
Figure 14. IME results with different d_G when t_G is fixed to 2.5 m: (a) mean angular error (MAE); (b) mean translation error (MTE); (c) scan distance error (Δd_S); (d) runtime (T).
Figure 15. IME results with different t_G when d_G is fixed to 0.5 m: (a) mean angular error (MAE); (b) mean translation error (MTE); (c) scan distance error (Δd_S); (d) runtime (T).
Figure 16. Comparison of two initial value selection criteria, using the minimum entropy and using the deviation distance: (a) MAE; (b) MTE; (c) scan distance; (d) the selection process in detail. Although the distance of 24.5 m corresponds to the smallest entropy, 36.3 m, which is much closer to the reference distance of 38.302 m, is selected based on the deviation distance.
Figure 17. Matching results using SIFT: (a) SIFT feature matching on S1–S2 in data set 2, where false matches arise mainly from viewpoint changes and self-similarity; (b) a false match caused by holes; (c) a correct match on the background with no correspondence in the point cloud.
5432 KiB  
Article
A Wide-Range Displacement Sensor Based on Plastic Fiber Macro-Bend Coupling
by Jia Liu, Yulong Hou, Huixin Zhang, Pinggang Jia, Shan Su, Guocheng Fang, Wenyi Liu and Jijun Xiong
Sensors 2017, 17(1), 196; https://doi.org/10.3390/s17010196 - 20 Jan 2017
Cited by 19 | Viewed by 5240
Abstract
This paper proposes a strategy for fabricating an all-fiber wide-range displacement sensor based on the macro-bend coupling effect, which causes power transmission between two twisted, bent plastic optical fibers (POFs); the coupled power changes with the bending radius of the fibers. A structure of two twisted plastic fibers is designed for the sensor, together with the experimental platform that we constructed. The influences of external temperature and displacement speed are investigated by testing the sensor at different temperatures and speeds. The sensor performs satisfactorily at both room temperature and 70 °C for displacements up to 140 mm. The output power is approximately linear with displacement over the 110 mm–140 mm range at room temperature and a speed of 2 mm/s, with a sensitivity of 19.805 nW/mm and a resolution of 0.12 mm. The simple structure of the sensor makes it reliable for other applications and further development, promising a bright future. Full article
(This article belongs to the Section Physical Sensors)
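Within the reported linear range (110–140 mm at 19.805 nW/mm), displacement can be recovered from the received optical power by inverting the linear response. The reference power at 110 mm in the sketch below is an assumed value for illustration, not a measured figure from the paper.

```python
# Minimal sketch: invert the reported linear response (19.805 nW/mm over the
# 110-140 mm range) to estimate displacement from the received optical power.
SENSITIVITY_NW_PER_MM = 19.805
P_REF_NW = 250.0        # assumed received power at d = 110 mm
D_REF_MM = 110.0

def displacement_from_power(p_nw: float) -> float:
    """Estimate displacement (mm) from received power (nW) in the linear range."""
    d = D_REF_MM + (p_nw - P_REF_NW) / SENSITIVITY_NW_PER_MM
    if not 110.0 <= d <= 140.0:
        raise ValueError("outside the calibrated linear range")
    return d

print(round(displacement_from_power(646.0), 2))   # ~ 130 mm
```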
Show Figures

Figure 1. System of the wide-range displacement sensor (a) and the displacement sensor experiment device (b).
Figure 2. Macro-bend coupling effect between two fibers.
Figure 3. System of the single-fiber displacement sensor (a); output power changes with displacement (b).
Figure 4. Response of the transmitting (a) and receiving (b) fiber at four different displacements: (1) d = 0 mm; (2) d = 45 mm; (3) d = 90 mm; (4) d = 135 mm.
Figure 5. Output power of the receiving fiber changes with displacement.
Figure 6. Output power of the receiving fiber changes with displacement at different speeds (a) and at different temperatures (b).
533 KiB  
Correction
Correction: Liu, B., et al. Quantitative Evaluation of Pulsed Thermography, Lock-In Thermography and Vibrothermography on Foreign Object Defect (FOD) in CFRP. Sensors 2016, 16, doi:10.3390/s16050743
by Bin Liu, Hai Zhang, Henrique Fernandes and Xavier Maldague
Sensors 2017, 17(1), 195; https://doi.org/10.3390/s17010195 - 20 Jan 2017
Viewed by 5089
(This article belongs to the Special Issue Infrared and THz Sensing and Imaging)
1776 KiB  
Article
1-Butyl-3-Methylimidazolium Tetrafluoroborate Film as a Highly Selective Sensing Material for Non-Invasive Detection of Acetone Using a Quartz Crystal Microbalance
by Wenyan Tao, Peng Lin, Sili Liu, Qingji Xie, Shanming Ke and Xierong Zeng
Sensors 2017, 17(1), 194; https://doi.org/10.3390/s17010194 - 20 Jan 2017
Cited by 17 | Viewed by 5601
Abstract
Breath acetone serves as a biomarker for diabetes. This article reports 1-butyl-3-methylimidazolium tetrafluoroborate ([bmim][BF4]), a type of room temperature ionic liquid (RTIL), as a selective sensing material for acetone. The RTIL sensing layer was coated on a quartz crystal microbalance (QCM) for detection. The sensing mechanism is based on a decrease in the viscosity and density of the [bmim][BF4] film due to the solubilization of acetone, which leads to a positive frequency shift of the QCM. Acetone was detected over a linear range from 7.05 to 750 ppmv, and the sensitivity and limit of detection were found to be 3.49 Hz/ppmv and 5.0 ppmv, respectively. The [bmim][BF4]-modified QCM sensor demonstrated anti-interference ability against volatile organic compounds commonly found in breath, e.g., isoprene, 1,2-pentadiene, d-limonene, and dl-limonene. This technology is useful for applications in non-invasive early diabetic diagnosis. Full article
(This article belongs to the Section Chemical Sensors)
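For readers who want a feel for the reported calibration, here is a minimal sketch that converts a measured frequency shift into an acetone concentration using the quoted sensitivity, limit of detection and linear range. The zero-intercept linear model and the function name are assumptions made for illustration, not the paper's calibration code.

```python
# Minimal sketch, assuming a zero-intercept linear response df = S * C; the
# paper's calibration curve may include an offset. Constants are from the abstract.
SENSITIVITY_HZ_PER_PPMV = 3.49
LOD_PPMV = 5.0
LINEAR_RANGE_PPMV = (7.05, 750.0)

def acetone_ppmv(delta_f_hz: float) -> float | None:
    """Estimate the acetone concentration (ppmv) from a positive frequency shift."""
    c = delta_f_hz / SENSITIVITY_HZ_PER_PPMV
    if c < LOD_PPMV:
        return None                                   # below the limit of detection
    if not LINEAR_RANGE_PPMV[0] <= c <= LINEAR_RANGE_PPMV[1]:
        return None                                   # outside the reported linear range
    return c

print(acetone_ppmv(100.0))                            # roughly 28.7 ppmv
```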
Show Figures

Figure 1: Effect of the presence of the [bmim][BF4] film on the conductance spectrum of the quartz crystal in air: (a) bare QCM; (b) [bmim][BF4] film-modified QCM.
Figure 2: Typical [bmim][BF4]-modified QCM sensor response curves to different concentrations of acetone vapor: (a) 0; (b) 5.64; (c) 7.05; (d) 9.40; (e) 28.20; (f) 65.80; and (g) 329 ppmv. The inset is an enlargement of the curves at the bottom.
Figure 3: Relationship between the response frequency shift and acetone concentration for the [bmim][BF4]-modified QCM sensor. The inset shows the calibration curve for acetone.
Figure 4: Effect of humidity on the [bmim][BF4]-modified QCM sensor.
Figure 5: Response of the [bmim][BF4]-modified QCM sensor to mixed VOC gases in the absence of acetone (a) and in the presence of 9.25 ppmv acetone (b).
Figure 6: Chromatogram of 470 ppmv acetone in air.
Scheme 1: Detection of acetone vapor on the [bmim][BF4]-modified quartz crystal microbalance (QCM) sensor, monitored using the piezoelectric quartz crystal impedance (PQCI) technique.
2942 KiB  
Article
Synthetic Aperture Radar Target Recognition with Feature Fusion Based on a Stacked Autoencoder
by Miao Kang, Kefeng Ji, Xiangguang Leng, Xiangwei Xing and Huanxin Zou
Sensors 2017, 17(1), 192; https://doi.org/10.3390/s17010192 - 20 Jan 2017
Cited by 103 | Viewed by 8078
Abstract
Feature extraction is a crucial step in any automatic target recognition process, especially in the interpretation of synthetic aperture radar (SAR) imagery. In order to obtain distinctive features, this paper proposes a feature fusion algorithm for SAR target recognition based on a stacked autoencoder (SAE). The procedure can be summarized as follows: firstly, 23 baseline features and Three-Patch Local Binary Pattern (TPLBP) features are extracted. These features describe the global and local aspects of the image with little redundancy and strong complementarity, providing rich information for feature fusion. Secondly, an effective feature fusion network is designed: the baseline and TPLBP features are cascaded and fed into an SAE, which is pre-trained with an unsupervised, greedy layer-wise training method. Owing to its capacity for feature representation, the SAE makes the fused features more distinguishable. Finally, the model is fine-tuned with a softmax classifier and applied to target classification. On the 10-class SAR target set from the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, the method achieves a classification accuracy of up to 95.43%, which verifies the effectiveness of the presented algorithm. Full article
(This article belongs to the Section Remote Sensors)
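The training recipe in the abstract (cascade the baseline and TPLBP features, pre-train a stacked autoencoder greedily and without labels, then fine-tune the whole stack with a softmax classifier) maps onto a few dozen lines of PyTorch. The sketch below is a minimal illustration under assumed layer sizes, optimizer settings, a 256-dimensional TPLBP vector and synthetic data; it is not the authors' implementation or configuration.

```python
# Minimal sketch, assuming: 23 baseline features, a 256-dimensional TPLBP vector,
# hidden sizes 128/64, Adam, and synthetic data. Not the paper's configuration.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, n_base, n_tplbp, n_cls = 256, 23, 256, 10
x = torch.cat([torch.randn(n, n_base), torch.randn(n, n_tplbp)], dim=1)  # cascaded features
y = torch.randint(0, n_cls, (n,))

dims = [x.shape[1], 128, 64]                      # input -> hidden 1 -> hidden 2
encoders, inp = [], x
for d_in, d_out in zip(dims[:-1], dims[1:]):
    enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(200):                          # greedy, unsupervised pre-training
        opt.zero_grad()
        recon = dec(torch.sigmoid(enc(inp)))
        loss = nn.functional.mse_loss(recon, inp)
        loss.backward()
        opt.step()
    encoders.append(enc)
    inp = torch.sigmoid(enc(inp)).detach()        # codes become the next layer's input

# Supervised fine-tuning of the whole stack with a softmax (cross-entropy) head.
model = nn.Sequential(encoders[0], nn.Sigmoid(), encoders[1], nn.Sigmoid(),
                      nn.Linear(dims[-1], n_cls))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
print("training accuracy:", (model(x).argmax(dim=1) == y).float().mean().item())
```

Greedy pre-training initializes each layer from reconstruction alone; the supervised fine-tuning pass then adjusts all weights jointly, which is what allows the fused representation to become more class-discriminative.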
Show Figures

Figure 1: The feature fusion flowchart.
Figure 2: Framework of feature extraction.
Figure 3: Examples of baseline features. (a) A SAR chip processed by energy detection; (b) a binary image of the SAR chip; (c) a dilated binary image of the SAR chip. In (b,c), the centroid, bounding box, extrema and center of gravity of the target area are marked in blue, red, green and magenta.
Figure 4: The TPLBP code of BMP2 (S = 8, w = 3, α = 2).
Figure 5: The structure of the autoencoder and the SAE. (a) A three-layer autoencoder; (b) an SAE composed of two autoencoders.
Figure 6: Optical images of the ten military targets in the MSTAR dataset. (a) BMP2-C21; (b) BTR70; (c) T72-132; (d) BRDM2; (e) BTR60; (f) T62; (g) ZSU234; (h) 2S1; (i) D7; (j) ZIL131.
Figure 7: The effect of varying the number of neurons on classification accuracy.
Figure 8: The distribution of cascaded features and fused features. (a) The distribution of the cascaded features; (b) the distribution of the fused features in the same feature space.
Figure 9: Classification accuracy for the 10-class targets. (a) Comparison of the classification accuracy of baseline features, TPLBP features and fused features; (b) comparison of the classification accuracy of different methods, including SVM, SRC, CNN, SAE and the proposed method.
9259 KiB  
Article
Euro Banknote Recognition System for Blind People
by Larisa Dunai Dunai, Mónica Chillarón Pérez, Guillermo Peris-Fajarnés and Ismael Lengua Lengua
Sensors 2017, 17(1), 184; https://doi.org/10.3390/s17010184 - 20 Jan 2017
Cited by 31 | Viewed by 11734
Abstract
This paper presents the development of a portable system aimed at allowing blind people to detect and recognize Euro banknotes. The device is based on a Raspberry Pi single-board computer and a Raspberry Pi camera, the Pi NoIR (No Infrared filter), fitted with additional infrared illumination and embedded into a pair of sunglasses, which permits blind and visually impaired people to handle Euro banknotes independently, especially when receiving their cash back when shopping. Banknote detection is based on a modified Viola and Jones algorithm, while banknote value recognition relies on the Speed Up Robust Features (SURF) technique. The accuracies of banknote detection and banknote value recognition are 84% and 97.5%, respectively. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2016)
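The two-stage pipeline described above (a Viola-Jones-style cascade for detection followed by local-feature matching for value recognition) can be sketched with OpenCV as below. The cascade file and image paths are hypothetical placeholders, and ORB is used as a stand-in for SURF because SURF ships only with the non-free opencv-contrib build; this is an illustrative sketch, not the authors' system.

```python
# Minimal sketch of a two-stage detect-then-recognize pipeline in OpenCV.
# "banknote_cascade.xml" and the image paths are hypothetical placeholders,
# and ORB stands in for SURF (SURF needs the non-free opencv-contrib build).
import cv2

cascade = cv2.CascadeClassifier("banknote_cascade.xml")        # hypothetical trained cascade
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical camera frame

detections = cascade.detectMultiScale(frame, scaleFactor=1.25, minNeighbors=3)

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
references = {"EUR20": cv2.imread("ref_20_euro.jpg", cv2.IMREAD_GRAYSCALE)}
ref_desc = {name: orb.detectAndCompute(img, None)[1] for name, img in references.items()}

for (x, y, w, h) in detections:
    _, desc = orb.detectAndCompute(frame[y:y + h, x:x + w], None)
    if desc is None:
        continue
    # Pick the reference note with the most descriptor matches.
    best = max(ref_desc, key=lambda name: len(matcher.match(desc, ref_desc[name])))
    print(f"Banknote detected at ({x}, {y}); recognized as {best}")
```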
Show Figures

Figure 1: (a) System hardware and graphic user interface; the micro camera connected to the Raspberry Pi hardware is embedded in the glasses. (b) Schema of the hardware prototype.
Figure 2: Block diagram of the proposed method.
Figure 3: Block diagram of the detection process.
Figure 4: Positive samples: pictures taken from real Euro banknotes, both old and new issue.
Figure 5: Training samples: samples of real Euro banknotes and banknotes with the background exposed. (a) Real Euro banknotes with background; (b) real Euro banknote pictures.
Figure 6: Points of interest matched with the Speed Up Robust Features (SURF) method: experiment during the training period with a €20 banknote.
Figure 7: ROC (Receiver Operating Characteristic) curves for the Euro banknote at scale factors of 1.45, 1.40, 1.35, 1.30 and 1.25. FPR: false positive rate; TPR: true positive rate.
Figure 8: Hit rate with scale factor (SF) = 1.25: the best values lie between hit rates of 0.8 and 0.9.
Figure 9: Banknote detection examples: a white rectangle is generated around the banknote once it is detected. (a) Banknote detection from a surface; (b) banknote detection on the human hand.
Figure 10: Folded and covered Euro banknotes in the indoor and outdoor environment cases used in the trials: (a) cases where real banknotes are on the table; (b) real banknotes held by the users; in this case the environment is an open one, both at night and during the day; (c) banknotes held by the user in an enclosed environment (at home and in a shop with artificial illumination).
Figure 11: Crumpled and folded Euro banknotes: false recognition cases.
3201 KiB  
Article
Au-Graphene Hybrid Plasmonic Nanostructure Sensor Based on Intensity Shift
by Raed Alharbi, Mehrdad Irannejad and Mustafa Yavuz
Sensors 2017, 17(1), 191; https://doi.org/10.3390/s17010191 - 19 Jan 2017
Cited by 10 | Viewed by 6873
Abstract
Integrating a plasmonic material such as gold with a two-dimensional material (e.g., graphene) enhances the light-matter interaction and, hence, the plasmonic properties of the metallic nanostructure. Localized surface plasmon resonance sensors are an effective platform for biomarker detection: they offer a better bulk surface (local) sensitivity than a regular surface plasmon resonance (SPR) sensor, but they suffer from a lower figure of merit than propagating surface plasmon resonance sensors. In this work, a multilayer graphene film decorated with Au nanostructures is proposed as a liquid sensor. The results show a significant improvement in the figure of merit compared with other reported localized surface plasmon resonance sensors. A maximum figure of merit of 240 and an intensity sensitivity of 55 RIU−1 (refractive index unit) were achieved at a refractive index change of 0.001, indicating that the proposed sensor can detect small changes in liquid concentration at the ng/mL level, which is essential for early-stage cancer detection. Full article
(This article belongs to the Special Issue MEMS and Nano-Sensors)
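The intensity-interrogation metrics quoted above can be computed from two extinction spectra recorded at refractive indices n and n + Δn. The snippet below evaluates one common convention (intensity change per refractive index unit, with FOM* taken as the maximum of |(dI/dn)/I| over wavelength) on a synthetic Lorentzian peak; the line shape, the 0.001 RIU step and the metric definitions are assumptions for illustration and may differ from the paper's definitions.

```python
# Minimal sketch on a synthetic Lorentzian extinction peak; assumed definitions:
# intensity sensitivity ~ dI/dn and FOM* = max over wavelength of |(dI/dn)/I|.
import numpy as np

wavelength = np.linspace(1400e-9, 2000e-9, 601)               # 1 nm grid (m)

def lorentzian(wl, center, width):
    return 1.0 / (1.0 + ((wl - center) / width) ** 2)

dn = 0.001                                                    # refractive-index step (RIU)
i_n = lorentzian(wavelength, 1660e-9, 25e-9)                  # extinction at n
i_n_dn = lorentzian(wavelength, 1662e-9, 25e-9)               # extinction at n + dn (peak shifted)

di_dn = (i_n_dn - i_n) / dn                                   # intensity change per RIU
fom_star = np.max(np.abs(di_dn / i_n))                        # relative-intensity figure of merit
print(f"FOM* of the synthetic peak: {fom_star:.1f} RIU^-1")
```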
Show Figures

Figure 1: (a) 2D schematic diagram of the proposed sensor used in this study. L is the side length of the cubic and prism NPs and the diameter of the cylindrical NPs, t is the NP thickness, and P is the separation distance between two consecutive NPs; (b) perspective view of the Au-graphene hybrid sensor.
Figure 2: Top row: (a) extinction spectra of three different nanostructures: an Au NP square array, a nanohole array perforated in a 20 nm thick graphene film, and the Au NPs/G hybrid structure. Middle row: electric field profiles at the resonance wavelengths of (b) 1532 nm (mode I); (c) 1632 nm (mode II); and (d) 1793 nm (mode III) of the graphene nanohole structure. Bottom row: the Au-graphene hybrid structure at resonance wavelengths of (e) 1562 nm (mode I); (f) 1663 nm (mode II); and (g) 1860 nm (mode III), respectively.
Figure 3: Extinction spectra of the hybrid nanostructure with (a) prism; (b) cylindrical; and (c) cubic NPs at different L/P ratios and a thickness of 20 nm.
Figure 4: (a) Extinction and (b) (1,0) resonance wavelength as a function of the L/P ratio. L is the side length and P is the separation distance between two consecutive NPs.
Figure 5: Extinction spectrum of a hybrid nanostructure with cubic NPs at L/P = 0.17.
Figure 6: Extinction intensity of the hybrid Au-G nanostructure with (a) prism; (b) cylindrical; and (c) cubic NPs at different L/P ratios as a function of refractive index.
Figure 7: Sensitivity of the hybrid Au-G nanostructure with (a) prism; (b) cylindrical; and (c) cubic NPs at different L/P ratios as a function of refractive index.
Figure 8: FOM of the hybrid Au-G nanostructure with (a) prism; (b) cylindrical; and (c) cubic NPs at different L/P ratios as a function of refractive index.
855 KiB  
Article
A Low-Complexity Method for Two-Dimensional Direction-of-Arrival Estimation Using an L-Shaped Array
by Qing Wang, Hang Yang, Hua Chen, Yangyang Dong and Laihua Wang
Sensors 2017, 17(1), 190; https://doi.org/10.3390/s17010190 - 19 Jan 2017
Cited by 21 | Viewed by 5342
Abstract
In this paper, a new low-complexity method for two-dimensional (2D) direction-of-arrival (DOA) estimation is proposed. Based on a cross-correlation matrix formed from the L-shaped array, the proposed algorithm obtains automatically paired elevation and azimuth angles without eigendecomposition, thereby avoiding a high computational cost. In addition, the cross-correlation matrix eliminates the effect of noise, which yields better DOA performance. The theoretical error of the algorithm is then analyzed, and the Cramer–Rao bound (CRB) for direction-of-arrival estimation is derived. Simulation results demonstrate that, at low signal-to-noise ratios (SNRs) and with a small number of snapshots, the proposed algorithm achieves better DOA performance with lower complexity than Tayem's and Kikuchi's algorithms, while compared with Gu's algorithm it has slightly inferior DOA performance but significantly lower complexity. Full article
(This article belongs to the Section Physical Sensors)
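The noise-elimination claim can be checked numerically: with additive noise that is independent between the two legs of the L-shaped array, the noise contribution averages out of the cross-correlation E[x z^H], while the auto-correlation keeps a σ²-loaded diagonal. The sketch below illustrates only this property for a single source under an assumed geometry; it does not reproduce the paper's estimator or its angle-pairing step.

```python
# Minimal sketch, single source, assumed L-shaped geometry with half-wavelength
# spacing; it only illustrates the noise-cancellation property, not the estimator.
import numpy as np

rng = np.random.default_rng(0)
m, snapshots, sigma = 8, 10000, 1.0
theta, phi = np.deg2rad(40.0), np.deg2rad(25.0)            # elevation, azimuth (assumed)

k = np.arange(m)
a_x = np.exp(1j * np.pi * k * np.sin(theta) * np.cos(phi)) # x-leg steering vector
a_z = np.exp(1j * np.pi * k * np.cos(theta))               # z-leg steering vector

def cnoise():
    """Circular complex white noise, independent across sensors and legs."""
    return sigma * (rng.standard_normal((m, snapshots))
                    + 1j * rng.standard_normal((m, snapshots))) / np.sqrt(2)

s = (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
x = np.outer(a_x, s) + cnoise()                            # x-leg snapshots
z = np.outer(a_z, s) + cnoise()                            # z-leg snapshots

r_xx = x @ x.conj().T / snapshots                          # auto-correlation (noise on diagonal)
r_xz = x @ z.conj().T / snapshots                          # cross-correlation (noise averaged out)
print(np.mean(np.abs(np.diag(r_xx))))                      # ~2.0: signal power + sigma^2
print(np.mean(np.abs(np.diag(r_xz))))                      # ~1.0: signal power only
```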
Show Figures

Figure 1: L-shaped array configuration for 2D DOA estimation.
Figure 2: (a) Complexity comparison versus the number of sensors; and (b) complexity comparison versus the number of snapshots.
Figure 3: The angle estimation scattergram in a white noise situation. (a) The proposed algorithm; and (b) the Kikuchi algorithm.
Figure 4: The angle estimation scattergram in an unknown noise situation. (a) The proposed algorithm; and (b) the Kikuchi algorithm.
Figure 5: RMSE versus SNR in a white noise situation. (a) s1(t); and (b) s2(t).
Figure 6: RMSE versus snapshots in a white noise situation. (a) s1(t); and (b) s2(t).
Figure 7: RMSE versus SNR in an unknown noise situation. (a) s1(t); and (b) s2(t).
Figure 8: RMSE versus snapshots in an unknown noise situation. (a) s1(t); and (b) s2(t).
4502 KiB  
Article
Underwater Electromagnetic Sensor Networks—Part I: Link Characterization
by Gara Quintana-Díaz, Pablo Mena-Rodríguez, Iván Pérez-Álvarez, Eugenio Jiménez, Blas-Pablo Dorta-Naranjo, Santiago Zazo, Marina Pérez, Eduardo Quevedo, Laura Cardona and J. Joaquín Hernández
Sensors 2017, 17(1), 189; https://doi.org/10.3390/s17010189 - 19 Jan 2017
Cited by 15 | Viewed by 7829
Abstract
Underwater Wireless Sensor Networks (UWSNs) using electromagnetic (EM) technology in marine shallow waters are examined, not just for environmental monitoring but also for further applications of interest. In particular, the use of EM waves is reconsidered in shallow waters because of the benefits offered in this context, where acoustic and optical technologies have serious disadvantages. Sea water is a harsh environment for radio communications, and there is no standard model for the underwater EM channel. The high conductivity of sea water and the effects of the seabed and the surface make the behaviour of the channel hard to predict. This justifies the need for link characterization as the first step towards the development of EM underwater sensor networks. To obtain a reliable link model, both measurements and simulations are required. The measurement setup used for this purpose is described, as well as the procedures followed. Several antennas have been designed and tested in low-frequency bands. The agreement between attenuation measurements and simulations at different distances was analysed, enabling the validation of the simulation setups and the design of the different communication layers of the system. This leads to the second step of this work, where data and routing protocols for the sensor network are examined. Full article
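To give a feel for why sea water is described above as a harsh environment for radio links, the sketch below evaluates the textbook good-conductor attenuation constant, alpha ≈ sqrt(pi·f·mu·sigma), across the band used in the measurements. The conductivity of 4 S/m is a typical textbook value assumed here, not a value measured in this campaign.

```python
# Minimal sketch, assuming the good-conductor approximation (sigma >> omega*epsilon)
# and a typical sea-water conductivity of 4 S/m.
import math

MU_0 = 4e-7 * math.pi            # permeability of free space (H/m)
SIGMA_SEA = 4.0                  # typical sea-water conductivity (S/m), an assumed value
NP_TO_DB = 20 / math.log(10)     # 1 neper ~ 8.686 dB

for f_hz in (10e3, 100e3, 1e6):
    alpha = math.sqrt(math.pi * f_hz * MU_0 * SIGMA_SEA)      # attenuation constant (Np/m)
    print(f"{f_hz/1e3:7.0f} kHz: {NP_TO_DB * alpha:5.1f} dB/m, "
          f"skin depth ~ {1/alpha:5.2f} m")
```

At 10 kHz this gives roughly 3.5 dB/m and a skin depth of about 2.5 m, which helps explain the short link distances and low carrier frequencies examined in the measurement campaign.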
Show Figures

Figure 1: Measurement area in Taliarte's harbour.
Figure 2: Transmitter chamber (the inside on the left and the outside on the right).
Figure 3: Receiver chamber (front and lateral views of the inside).
Figure 4: Antenna examples: 22 cm magnetic loop on the left and insulated crossed dipole on the right.
Figure 5: Launching procedure.
Figure 6: Attenuation description.
Figure 7: Influence of including the seabed layer in the simulations.
Figure 8: Influence of loop radius (10 kHz–100 kHz).
Figure 9: Influence of loop radius (100 kHz–1 MHz).
Figure 10: Path loss (dB) comparison up to 8 m (10 kHz–1 MHz).
Figure 11: Path loss (dB) comparison up to 8 m (10 kHz–100 kHz).
Figure 12: Path loss (dB) comparison up to 13 m (10 kHz–100 kHz).
Figure 13: Optimum frequency for a given distance.
Figure 14: Optimum frequency for a given distance.
Figure 15: Amplitude time evolution over a 2 h period.