Sensors, Volume 16, Issue 7 (July 2016) – 208 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
5084 KiB  
Article
Reduced Graphene Oxide/Au Nanocomposite for NO2 Sensing at Low Operating Temperature
by Hao Zhang, Qun Li, Jinyu Huang, Yu Du and Shuang Chen Ruan
Sensors 2016, 16(7), 1152; https://doi.org/10.3390/s16071152 - 22 Jul 2016
Cited by 48 | Viewed by 8222
Abstract
A reduced graphene oxide (rGO)/Au hybrid nanocomposite has been synthesized by hydrothermal treatment using graphite and HAuCl4 as the precursors. Characterization, including X-ray diffraction (XRD), Raman spectroscopy, X-ray photoelectron spectroscopy (XPS) and transmission electron microscopy (TEM), indicates the formation of rGO/Au. A gas sensor fabricated with the rGO/Au nanocomposite was applied for NO2 detection at 50 °C. Compared with pure rGO, the rGO/Au nanocomposite exhibits higher sensitivity, a more rapid response–recovery process and excellent reproducibility. Full article
(This article belongs to the Special Issue Gas Nanosensors)
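As a side note for readers comparing chemiresistive NO2 sensors like the one above, the figures of merit mentioned in the abstract (sensitivity, response–recovery speed) are usually derived from the resistance trace recorded during gas exposure. The short Python sketch below illustrates one common way to compute them; the resistance values, the relative-change definition of the response and the 90% criterion are illustrative assumptions, not data or definitions from the paper.

```python
# Generic sketch (not from the paper): evaluating a chemiresistive gas sensor from
# its resistance trace. The trace values, the response definition (relative
# resistance change) and the 90% criterion are illustrative assumptions.

def response(r_air, r_gas):
    """Relative resistance change in percent."""
    return abs(r_gas - r_air) / r_air * 100.0

def t90(t, r, r_air, r_gas):
    """Time at which 90% of the total resistance change is reached."""
    target = r_air + 0.9 * (r_gas - r_air)
    rising = r_gas > r_air
    for ti, ri in zip(t, r):
        if (ri >= target) if rising else (ri <= target):
            return ti
    return None

t = list(range(10))                                        # seconds after gas injection
r = [10.0, 9.6, 8.9, 8.3, 7.9, 7.7, 7.6, 7.55, 7.5, 7.5]   # kilo-ohms (illustrative)

print("Response: %.1f %%" % response(r[0], r[-1]))
print("t90: %s s" % t90(t, r, r[0], r[-1]))
```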
Figure 1. A schematic illustration of the sensor coated with the sensing material.
Figure 2. The XRD patterns of GO (black line) and rGO/Au (red line).
Figure 3. Raman spectra of the GO (black line) and rGO/Au (red line) samples.
Figure 4. (a) A TEM image of rGO/Au; (b) an enlarged image of the selected area.
Figure 5. (a) XPS spectrum of rGO/Au; (b) Au 4f spectrum of rGO/Au; (c) C 1s spectrum of GO; (d) C 1s spectrum of rGO/Au.
Figure 6. The response curves to 5 ppm NO2 of the sensors based on (a) rGO and (b) rGO/Au at 50 °C.
Figure 7. (a) Dynamic NO2 sensing transients of the rGO/Au-based sensor to 0.5–5 ppm NO2 at 50 °C; (b) the responses of the rGO/Au-based sensor to 0.5–5 ppm NO2 at 50 °C.
Figure 8. The responses of the rGO/Au-based sensor to 5 ppm of different gases at 50 °C.
Figure 9. The reproducibility of the rGO/Au sensor on successive exposure (3 cycles) to 5 ppm NO2 at 50 °C.
Figure 10. The scheme of the proposed gas sensing mechanism: the adsorption behavior of NO2 molecules on the rGO/Au nanocomposite.
3341 KiB  
Article
Modeling and Assessment of GPS/BDS Combined Precise Point Positioning
by Junping Chen, Jungang Wang, Yize Zhang, Sainan Yang, Qian Chen and Xiuqiang Gong
Sensors 2016, 16(7), 1151; https://doi.org/10.3390/s16071151 - 22 Jul 2016
Cited by 25 | Viewed by 5313
Abstract
The Precise Point Positioning (PPP) technique enables stand-alone receivers to obtain cm-level positioning accuracy. Observations from multi-GNSS systems can augment users with improved positioning accuracy, reliability and availability. In this paper, we present and evaluate GPS/BDS combined PPP models, including the traditional model and a simplified model, where the inter-system bias (ISB) is treated in different ways. To evaluate the performance of combined GPS/BDS PPP, kinematic and static PPP positions are compared to the IGS daily estimates, where one month of GPS/BDS data from 11 IGS Multi-GNSS Experiment (MGEX) stations is used. The results indicate an apparent improvement of GPS/BDS combined PPP solutions in both static and kinematic cases, where much smaller standard deviations are presented in the magnitude distribution of coordinate RMS statistics. Comparisons between the traditional and simplified combined PPP models show no difference in coordinate estimations, and the inter-system biases between the GPS and BDS systems are assimilated into the receiver clock, ambiguities and pseudo-range residuals accordingly. Full article
(This article belongs to the Section Remote Sensors)
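To make the role of the inter-system bias (ISB) mentioned in the abstract concrete, the toy sketch below sets up a single-epoch, code-only GPS+BDS least-squares adjustment in which the BDS observations carry one extra ISB column next to the common receiver clock. It is a minimal illustration under invented geometry and noise, not the authors' PPP software or their traditional/simplified models.

```python
# Toy illustration (not the authors' software): a single-epoch, code-only
# GPS+BDS adjustment in which BDS pseudoranges carry an extra inter-system
# bias (ISB) parameter on top of the common receiver clock. Geometry, noise
# and the linearized design matrix are all made up for the example.
import numpy as np

rng = np.random.default_rng(0)
n_gps, n_bds = 8, 6
# Unit line-of-sight vectors (already linearized about an a priori position).
e_gps = rng.normal(size=(n_gps, 3)); e_gps /= np.linalg.norm(e_gps, axis=1, keepdims=True)
e_bds = rng.normal(size=(n_bds, 3)); e_bds /= np.linalg.norm(e_bds, axis=1, keepdims=True)

# Unknowns: dx, dy, dz, receiver clock (m), ISB (m).
truth = np.array([0.12, -0.05, 0.30, 2.0, 1.5])
A_gps = np.hstack([e_gps, np.ones((n_gps, 1)), np.zeros((n_gps, 1))])
A_bds = np.hstack([e_bds, np.ones((n_bds, 1)), np.ones((n_bds, 1))])   # ISB column
A = np.vstack([A_gps, A_bds])
l = A @ truth + rng.normal(scale=0.3, size=A.shape[0])                 # observed-minus-computed

x_hat, *_ = np.linalg.lstsq(A, l, rcond=None)
print("estimated [dx, dy, dz, clk, ISB]:", np.round(x_hat, 2))
```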
Figure 1. The geographic distribution of GPS/BDS stations used in the data analysis.
Figure 2. RMS of coordinate differences between daily static PPP and IGS daily solutions for each station in the North, East and Up components, where PPP in different scenarios is shown in different colors. The last column of each subplot is the mean value of the 11 stations.
Figure 3. Magnitude distribution of the 3D RMS of coordinate differences between daily static PPP and IGS daily solutions. The subplots present the GPS/BDS combined, GPS-only and BDS-only solutions from left to right, respectively. The text in each subplot shows the median and mean value of the 3D RMS.
Figure 4. Magnitude distribution of the 3D RMS of coordinate differences between epoch-wise kinematic PPP and IGS daily solutions. The subplots present the GPS/BDS combined, GPS-only and BDS-only solutions from left to right, respectively. The text in each subplot shows the median and mean 3D RMS.
Figure 5. Epoch-wise kinematic PPP coordinate bias of GPS/BDS combined, GPS-only and BDS-only solutions in the North, East and Up components. Station: JFNG, DOY 028, 2014. The bottom-right subplot shows the number of satellites tracked at each epoch.
Figure 6. Magnitude distribution of all position differences of GPS/BDS combined static PPP between the traditional and simplified models in the North, East and Up components. All subplots exhibit normal distributions. The top-left corner of each subplot shows the bias and the standard deviation (σ), as well as the percentages of deviations that are within 2σ and 3σ.
Figure 7. Coordinate difference between the traditional and new models for JFNG at DOY 028, 2014: (a) coordinate difference of the first 100 epochs; (b) coordinate difference after 100 epochs.
Figure 8. Magnitude distribution of differences between the traditional and simplified models of (a) the epoch-wise sum of GPS ambiguity and station clock parameters and (b) the epoch-wise sum of BDS ambiguity, ISB and station clock parameters. Both plots exhibit normal distributions. The top-left corner of each subplot shows the bias and the standard deviation (σ), whereas the top-right corner shows the percentages of deviations that are within 2σ and 3σ.
Figure 9. (a) Station clock differences between the traditional and simplified models; (b) magnitude distribution of differences of the epoch-wise sum of station clock and GPS pseudo-range residuals between the traditional and simplified models. The top-left corner of the bottom subplot shows the bias and the standard deviation (σ), whereas the top-right corner shows the percentages of deviations that are within 2σ and 3σ.
Figure 10. (a,c) GPS and BDS pseudo-range residuals between the traditional and simplified models; (b,d) magnitude distribution of the double differences for GPS and BDS observations. The left corner of the right subplots (b and d) shows the bias and the standard deviation (σ), whereas the right corner shows the percentages of deviations that are within 2σ and 3σ.
1632 KiB  
Article
A High-Gain Passive UHF-RFID Tag with Increased Read Range
by Simone Zuffanelli, Pau Aguila, Gerard Zamora, Ferran Paredes, Ferran Martin and Jordi Bonache
Sensors 2016, 16(7), 1150; https://doi.org/10.3390/s16071150 - 22 Jul 2016
Cited by 9 | Viewed by 7200
Abstract
In this work, a passive ultra-high frequency radio-frequency identification (UHF-RFID) tag based on a 1.25-wavelength thin dipole antenna is presented for the first time. The length of the antenna is properly chosen in order to maximize the tag read range, while maintaining a reasonable tag size and radiation pattern. The antenna is matched to the RFID chip by means of a very simple matching network based on a shunt inductance. A tag prototype, based on the Alien Higgs-3 chip, is designed and fabricated. The overall dimensions are 400 mm × 14.6 mm, but the tag width for most of its length is limited to the wire diameter (0.8 mm). The measured read range exhibits a maximum value of 17.5 m in the 902–928 MHz frequency band. This represents an important improvement over state-of-the-art passive UHF-RFID tags. Full article
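For context on the 17.5 m figure, passive UHF tag read range is commonly estimated from the forward-link budget r = (λ/4π)·√(EIRP·G_tag·τ/P_th). The sketch below evaluates this textbook expression with assumed values for the tag gain, the matching coefficient τ and the chip sensitivity P_th; it is not the authors' calculation and the numbers are illustrative only.

```python
# Back-of-the-envelope read-range estimate for a passive UHF tag (generic formula,
# not the authors' calculation): r = (lambda / (4*pi)) * sqrt(EIRP * G_tag * tau / P_th),
# where tau is the power transmission coefficient of the chip-antenna match and
# P_th the chip sensitivity. All numeric values below are illustrative assumptions.
import math

c = 3e8
f = 915e6                      # mid 902-928 MHz band
lam = c / f
eirp_w = 4.0                   # 4 W EIRP regulatory limit (FCC)
g_tag_dbi = 5.2                # directive antenna gain assumed for the example
tau = 0.9                      # assumed matching efficiency
p_th_dbm = -18.0               # assumed chip sensitivity

g_tag = 10 ** (g_tag_dbi / 10)
p_th_w = 10 ** (p_th_dbm / 10) / 1000
read_range = (lam / (4 * math.pi)) * math.sqrt(eirp_w * g_tag * tau / p_th_w)
print("Estimated read range: %.1f m" % read_range)
```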
Figure 1. Directivity of a wire antenna as a function of its electrical length.
Figure 2. Directive gain in the e-plane, normalized to a directivity value of D0 = 5.2 dBi.
Figure 3. Simulated input impedance of the wire antenna.
Figure 4. (a) Final tag layout; (b) simulated power reflection coefficient (half-power bandwidth in grey).
Figure 5. Simulated radiation efficiency of the planar tag (a) versus conductivity for t = 35 μm and (b) versus conductor thickness for w = 1 mm. The conductive paint has a conductivity σ = 10^6 S/m [10,11].
Figure 6. (a) Manufactured tag prototype; (b) simulated and measured read range of the fabricated 1.25 wavelength long dipole antenna tag.
7734 KiB  
Article
Piezoresistive Membrane Surface Stress Sensors for Characterization of Breath Samples of Head and Neck Cancer Patients
by Hans Peter Lang, Frédéric Loizeau, Agnès Hiou-Feige, Jean-Paul Rivals, Pedro Romero, Terunobu Akiyama, Christoph Gerber and Ernst Meyer
Sensors 2016, 16(7), 1149; https://doi.org/10.3390/s16071149 - 22 Jul 2016
Cited by 21 | Viewed by 6131
Abstract
For many diseases, where a particular organ is affected, chemical by-products can be found in the patient’s exhaled breath. Breath analysis is often done using gas chromatography and mass spectrometry, but interpretation of the results is difficult and time-consuming. We performed characterization of patients’ exhaled breath samples by an electronic nose technique based on an array of nanomechanical membrane sensors. Each membrane is coated with a different thin polymer layer. By pumping the exhaled breath into a measurement chamber, volatile organic compounds present in patients’ breath diffuse into the polymer layers and deform the membranes by changes in surface stress. The bending of the membranes is measured piezoresistively and the signals are converted into voltages. The sensor deflection pattern allows one to characterize the condition of the patient. In a clinical pilot study, we investigated breath samples from head and neck cancer patients and healthy control persons. Evaluation using principal component analysis (PCA) allowed a clear distinction between the two groups. As head and neck cancer can be completely removed by surgery, the breath of cured patients was investigated again after surgery, and the results were similar to those of the healthy control group, indicating that surgery was successful. Full article
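The clustering step described above boils down to projecting each breath sample's sensor-array response vector onto its first principal components. The sketch below shows that step with scikit-learn on random stand-in data (no patient data involved); the array size and group separation are assumptions made for illustration.

```python
# Minimal sketch of the kind of pattern analysis described above: principal
# component analysis of per-sample sensor-array response vectors. The feature
# matrix here is random stand-in data, not patient measurements.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# 20 breath samples x 8 membrane sensors (e.g., peak deflection per sensor).
group_a = rng.normal(loc=0.0, scale=0.3, size=(10, 8))
group_b = rng.normal(loc=1.0, scale=0.3, size=(10, 8))
X = np.vstack([group_a, group_b])

scores = PCA(n_components=2).fit_transform(X)
for label, pc in zip(["A"] * 10 + ["B"] * 10, scores):
    print(label, np.round(pc, 2))
```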
Figure 1. Schematic representation of an array of membrane-type surface stress sensors (MSS). The actual diameter of the round membrane (shown in blue) is 500 µm and its thickness is 2.5 µm. The membrane is suspended by four sensing beams with integrated p-type piezoresistors (shown in red), representing a full Wheatstone bridge. A solid supporting frame (green) holds the sensor.
Figure 2. (a) Each membrane is coated with a different polymer that responds by swelling in a characteristic way to surrounding molecules. Functionalization of the MSS is done using inkjet spotting of polymer solutions in water (10 mg/mL); (b) MSS are arranged in arrays for detection of VOCs in a gas stream passing through the measurement chamber. The numbers on the left indicate the scale in millimeters; (c) portable universal-serial-bus-powered compact measurement device with pumping system for gaseous samples, signal readout and data acquisition.
Figure 3. Piezoresistive (PR) membrane response curves upon injection of patients’ breath samples and purging with dry nitrogen. Injection and purging duration: 30 s; flow rate: 15 mL/min.
Figure 4. Principal component analysis (PCA) plot showing three distinct clusters (indicated with ellipses) that represent healthy control persons, HNSCC patients before surgery and HNSCC patients after surgery, i.e., after removal of the tumor by operation. The points of the HNSCC patients after surgery are at a similar location in the PCA plot as those from the healthy persons and differ clearly from the points of the HNSCC patients before surgery, indicating that the removal of the tumor has been successful.
Figure 5. The UPGMA diagram (dendrogram) shows bifurcations at distinct distances between pairs of measurements, implying that the datasets from cancer patients (HNSCC) before surgery are clearly different from those of healthy control persons and cured HNSCC patients after surgery. Number labels indicate individual injection-purge cycles.
4456 KiB  
Article
An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation
by Xin Yuan, José-Fernán Martínez, Martina Eckert and Lourdes López-Santidrián
Sensors 2016, 16(7), 1148; https://doi.org/10.3390/s16071148 - 22 Jul 2016
Cited by 45 | Viewed by 10706
Abstract
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments. Full article
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)
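For reference, the baseline that the improved method builds on is the classical Otsu threshold, which picks the gray level maximizing the between-class variance of the image histogram. The sketch below is a textbook NumPy implementation applied to a synthetic image, not the authors' improved TSM; note that the Th values quoted in the figure captions below are normalized to [0, 1], hence the final division by 255.

```python
# Sketch of the classical Otsu threshold on a grayscale-image histogram (the
# baseline the paper improves upon); the image here is synthetic. This is the
# textbook between-class-variance maximization, not the authors' improved TSM.
import numpy as np

def otsu_threshold(img):
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2 = np.nan_to_num(sigma_b2)
    return int(np.argmax(sigma_b2))

img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(180, 15, 2000)]).clip(0, 255)
t = otsu_threshold(img)
print("Otsu threshold:", t, "-> normalized:", round(t / 255, 4))
```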
Figure 1. (a) High-resolution SSS image recorded with the DE340D SSS in the Stockholm sea [40]; (b) low-resolution SSS image generated by the 3500 Klein SSS (ECA Group company [41]).
Figure 2. The procedure of the improved Otsu TSM.
Figure 3. Plots of the power-law equation with different r values.
Figure 4. (a) Traditional Otsu TSM, Th = 0.3216; (b) local TSM, Th = 0.1628; (c) iterative TSM, Th = 0.4238; (d) maximum entropy TSM, Th = 0.6627.
Figure 5. (a) Canny edge detection after applying the traditional Otsu method, bw = edge(b, ‘canny’, 0.33), N30 = 752 > 300; (b) improved Otsu TSM, T = 0.3216, T* = 0.6784; (c) result of the improved Otsu TSM after morphological operations, marking the centroids of the obtained regions; (d) result of the maximum entropy TSM after the same morphological operations, marking the centroids of the acquired areas.
Figure 6. (a) Traditional Otsu TSM, Th = 0.1137; (b) local TSM, Th = 0.0941; (c) iterative TSM, Th = 0.2609; (d) maximum entropy TSM, Th = 0.3176.
Figure 7. (a) Canny contour detection after applying the traditional Otsu method, bw = edge(b, ‘canny’, 0.1255), N15 = 419 > 100; (b) improved Otsu TSM, T = 0.1137, T* = 0.3529; (c) result of the improved Otsu TSM after morphological operations, marking the centroids of the obtained regions; (d) result of the maximum entropy TSM after the same morphological operations, marking the centroids of the acquired areas.
Figure 8. The original FLS image comes from [48]; there is a plastic mannequin at the bottom center.
Figure 9. (a) Traditional Otsu TSM, Th = 0.1176; (b) local TSM, Th = 0.0941; (c) iterative TSM, Th = 0.2990; (d) maximum entropy TSM, Th = 0.4118.
Figure 10. (a) Canny edge detection after employing the traditional Otsu method, bw = edge(b, ‘canny’, 0.13), N40 = 1341 > 600; (b) improved Otsu TSM, T = 0.1176, T* = 0.5412; (c) result of the improved Otsu TSM after morphological operations, marking the centroids of the acquired areas; (d) result of the maximum entropy TSM after the same morphological operations, marking the centroids of the obtained regions.
Figure 11. The flow chart of the SLAM procedure based on an AEKF. Modified after [27].
Figure 12. The architecture of the AEKF-SLAM system, as described in [50].
Figure 13. (a) The robot observes the centroids of certain parts of the body before loop closure; (b) the final AEKF-SLAM loop map, where the landmarks are detected by the improved Otsu TSM.
Figure 14. (a) The robot observes the centroids of certain parts of the body before loop closure; (b) the final AEKF-SLAM loop map, where the landmarks are detected by the maximum entropy TSM.
2278 KiB  
Article
Design and Field Experimentation of a Cooperative ITS Architecture Based on Distributed RSUs
by Asier Moreno, Eneko Osaba, Enrique Onieva, Asier Perallos, Giovanni Iovino and Pablo Fernández
Sensors 2016, 16(7), 1147; https://doi.org/10.3390/s16071147 - 22 Jul 2016
Cited by 4 | Viewed by 6739 | Correction
Abstract
This paper describes a new cooperative Intelligent Transportation System architecture that aims to enable collaborative sensing services. The main goal of this architecture is to improve transportation efficiency and performance. The system, which has been validated through participation in the ICSI (Intelligent Cooperative Sensing for Improved traffic efficiency) European project, encompasses the entire process of capture and management of available road data. For this purpose, it applies a combination of cooperative services and methods for data sensing, acquisition, processing and communication amongst road users, vehicles, infrastructures and related stakeholders. Additionally, the advantages of using the proposed system are presented. The most important of these advantages is the use of a distributed architecture, moving the system intelligence from the control centre to the peripheral devices. The global architecture of the system is presented, as well as the software design and the interaction between its main components. Finally, functional and operational results observed through the experimentation are described. This experimentation has been carried out in two real scenarios, in Lisbon (Portugal) and Pisa (Italy). Full article
(This article belongs to the Special Issue Selected Papers from UCAmI, IWAAL and AmIHEALTH 2015)
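Purely as an illustration of the kind of distributed, RSU-side publishing such an architecture relies on, the sketch below shows a roadside unit pushing a sensed event onto a message bus. The broker address, topic layout and payload schema are hypothetical, and MQTT (via paho-mqtt) is used here only as a stand-in for whatever message bus the ICSI platform actually employs.

```python
# Illustrative only: how a roadside unit (RSU) might publish a sensed traffic
# event to a message broker. The broker address, topic layout and payload
# schema are hypothetical.
# NOTE: written against the paho-mqtt 1.x Client() signature; with paho-mqtt >= 2.0,
# pass mqtt.CallbackAPIVersion.VERSION2 as the first Client() argument.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.org", 1883)   # hypothetical broker

event = {
    "rsu_id": "rsu-042",
    "timestamp": time.time(),
    "type": "parking_occupancy",
    "free_slots": 37,
}
client.publish("its/pisa/rsu-042/events", json.dumps(event), qos=1)
client.disconnect()
```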
Figure 1. Global system architecture.
Figure 2. Software architecture of the platform on the GWs.
Figure 3. Architecture and technical design of the CLU.
Figure 4. Map in SUMO used for the simulation: (a) Pisa city centre, with the selected area for the urban use cases simulation shown in blue; (b) zoom over the selected area, with a 350-slot parking facility; (c) simplified model for the simulations, with the main road connections and the parking facility.
Figure 5. Architecture test environment.
Figure 6. Demonstration web application.
Figure 7. Alert messages received by the user: fragment of the demonstrator web application.
5575 KiB  
Article
Vibration Sensitivity Reduction of Micromachined Tuning Fork Gyroscopes through Stiffness Match Method with Negative Electrostatic Spring Effect
by Yanwei Guan, Shiqiao Gao, Haipeng Liu, Lei Jin and Yaping Zhang
Sensors 2016, 16(7), 1146; https://doi.org/10.3390/s16071146 - 22 Jul 2016
Cited by 8 | Viewed by 5624
Abstract
In this paper, a stiffness match method is proposed to reduce the vibration sensitivity of micromachined tuning fork gyroscopes. Taking advantage of the coordinate transformation method, a theoretical model is established to analyze the anti-phase vibration output caused by the stiffness mismatch due to fabrication imperfections. The analytical solutions demonstrate that the stiffness mismatch is proportional to the output induced by external linear vibration from the sense direction at the anti-phase mode frequency. In order to verify the proposed stiffness match method, a tuning fork gyroscope (TFG) with stiffness match electrodes is designed and implemented using micromachining technology, and an experimental study is carried out. The experimental tests illustrate that the vibration output can be reduced by 73.8% through the stiffness match method compared with the structure without the stiffness match. Therefore, the proposed stiffness match method is experimentally validated to be applicable to vibration sensitivity reduction in Micro-Electro-Mechanical-Systems (MEMS) tuning fork gyroscopes without sacrificing the scale factor. Full article
(This article belongs to the Section Physical Sensors)
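The "negative electrostatic spring effect" in the title refers to the stiffness reduction k_e = -ε0·A·V²/g³ produced by a DC voltage across a parallel-plate electrode, which lets a voltage trim out a fabrication-induced stiffness mismatch. The sketch below evaluates that standard expression for assumed electrode dimensions and mismatch; it is not the device or tuning procedure reported in the paper.

```python
# Rough sketch of the negative electrostatic spring effect exploited above:
# for a parallel-plate electrode, the electrostatic stiffness is
# k_e = -eps0 * A * V^2 / g^3, so a DC voltage can soften one side of the
# fork to cancel a fabrication-induced stiffness mismatch. Geometry and
# mismatch values below are illustrative, not the device in the paper.
import math

eps0 = 8.854e-12          # F/m
area = 200e-6 * 2e-6      # electrode overlap area (m^2), assumed
gap = 2e-6                # electrode gap (m), assumed
dk_mismatch = 0.05        # stiffness mismatch to cancel (N/m), assumed

def electrostatic_stiffness(voltage):
    return -eps0 * area * voltage**2 / gap**3

v_match = math.sqrt(dk_mismatch * gap**3 / (eps0 * area))
print("Tuning voltage for Δk = %.2f N/m: %.1f V" % (dk_mismatch, v_match))
print("Check: k_e(V) = %.3f N/m" % electrostatic_stiffness(v_match))
```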
Figure 1. The model of the non-ideal TFG with the match stiffness.
Figure 2. The stiffness match electrodes.
Figure 3. Optical photograph of a dual-mass tuning fork gyroscope.
Figure 4. Fabrication process of the MEMS tuning fork gyroscope.
Figure 5. The fabricated MEMS tuning fork gyroscope.
Figure 6. The experimental setup.
Figure 7. The resonant frequency with increasing voltage V_Δk.
Figure 8. The quality factor with increasing voltage V_Δk.
Figure 9. The output voltage of the differential sense capacitance without the stiffness match.
Figure 10. The measured vibration output of the differential sense capacitance at the anti-phase mode frequency.
Figure 11. The theoretical vibration output of the differential sense capacitance at the anti-phase mode frequency.
Figure 12. Comparison of the experimental and theoretical values of the vibration output.
13354 KiB  
Article
Design of a Direction-of-Arrival Estimation Method Used for an Automatic Bearing Tracking System
by Feng Guo, Huawei Liu, Jingchang Huang, Xin Zhang, Xingshui Zu, Baoqing Li and Xiaobing Yuan
Sensors 2016, 16(7), 1145; https://doi.org/10.3390/s16071145 - 22 Jul 2016
Cited by 15 | Viewed by 5113
Abstract
In this paper, we introduce a sub-band direction-of-arrival (DOA) estimation method suitable for employment within an automatic bearing tracking system. Inspired by the magnitude-squared coherence (MSC), we extend the MSC to the sub-band and propose the sub-band magnitude-squared coherence (SMSC) to measure the coherence between the frequency sub-bands of wideband signals. Then, we design a sub-band DOA estimation method which chooses a sub-band from the wideband signals by SMSC for the bearing tracking system. The simulations demonstrate that the sub-band method has a good tradeoff between the wideband methods and narrowband methods in terms of the estimation accuracy, spatial resolution, and computational cost. The proposed method was also tested in the field environment with the bearing tracking system, which also showed a good performance. Full article
(This article belongs to the Section Physical Sensors)
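The SMSC proposed above extends the ordinary magnitude-squared coherence (MSC) to frequency sub-bands. As background, the sketch below computes the plain MSC between two synthetic microphone channels with SciPy and averages it over one sub-band; the signals, sampling rate and band edges are made up for the example, and this is not the authors' SMSC code.

```python
# Sketch of the magnitude-squared coherence (MSC) that the SMSC above builds on,
# computed between two channels with scipy. The signals are synthetic (a shared
# tonal component plus independent noise); this is not the authors' SMSC code.
import numpy as np
from scipy.signal import coherence

fs = 4000.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(2)
source = np.sin(2 * np.pi * 120 * t)                     # shared tonal component
x = source + 0.5 * rng.normal(size=t.size)
y = np.roll(source, 3) + 0.5 * rng.normal(size=t.size)   # delayed copy + noise

f, cxy = coherence(x, y, fs=fs, nperseg=256)
band = (f > 100) & (f < 140)
print("Mean MSC in 100-140 Hz sub-band: %.2f" % cxy[band].mean())
print("Mean MSC elsewhere: %.2f" % cxy[~band].mean())
```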
Figure 1. The flow chart for estimating the sub-band magnitude-squared coherence (SMSC).
Figure 2. System architecture of the automatic bearing tracking system.
Figure 3. Photograph of the automatic bearing tracking system.
Figure 4. The flow chart of the sub-band direction-of-arrival (DOA) estimation method.
Figure 5. The SMSC under different SNRs.
Figure 6. Performance comparison: (a) the RMSEs of the three DOA estimation methods; (b) spatial spectra of the three DOA estimation methods.
Figure 7. The elapsed time of the three DOA estimation methods over 1000 estimations: (a) J = 8; (b) J = 16.
Figure 8. The environment of the field experiment.
Figure 9. Single vehicle tracking: (a) the spectrum; (b) the estimated DOAs.
Figure 10. Vehicle tracking under interferences: (a) the multiple signal classification (MUSIC) method; (b) the magnitude-squared coherence (MSC)-MUSIC method; (c) the two-sided correlation transformation (TCT) method; (d) the proposed sub-band method.
Figure 11. Multiple vehicle tracking: (a) MUSIC; (b) MSC-MUSIC; (c) TCT; (d) the proposed sub-band method.
1763 KiB  
Review
Fiber Optic Sensors for Temperature Monitoring during Thermal Treatments: An Overview
by Emiliano Schena, Daniele Tosi, Paola Saccomandi, Elfed Lewis and Taesung Kim
Sensors 2016, 16(7), 1144; https://doi.org/10.3390/s16071144 - 22 Jul 2016
Cited by 168 | Viewed by 13740 | Correction
Abstract
During recent decades, minimally invasive thermal treatments (i.e., Radiofrequency ablation, Laser ablation, Microwave ablation, High Intensity Focused Ultrasound ablation, and Cryo-ablation) have gained widespread recognition in the field of tumor removal. These techniques induce a localized temperature increase or decrease to remove the tumor while the surrounding healthy tissue remains intact. An accurate measurement of tissue temperature may be particularly beneficial to improve treatment outcomes, because it can be used as a clear end-point to achieve complete tumor ablation and minimize recurrence. Among the several thermometric techniques used in this field, fiber optic sensors (FOSs) have several attractive features: high flexibility and small size of both sensor and cabling, allowing insertion of FOSs within deep-seated tissue; metrological characteristics, such as accuracy (better than 1 °C), sensitivity (e.g., 10 pm·°C−1 for Fiber Bragg Gratings), and frequency response (hundreds of kHz), are adequate for this application; immunity to electromagnetic interference allows the use of FOSs during Magnetic Resonance- or Computed Tomography-guided thermal procedures. In this review the current status of the most used FOSs for temperature monitoring during thermal procedure (e.g., fiber Bragg Grating sensors; fluoroptic sensors) is presented, with emphasis placed on their working principles and metrological characteristics. The essential physics of the common ablation techniques are included to explain the advantages of using FOSs during these procedures. Full article
(This article belongs to the Special Issue Optical Fiber Sensors 2016)
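As a worked example of the FBG sensitivity quoted in the abstract (about 10 pm·°C−1), the sketch below converts a measured Bragg-wavelength shift into a temperature change. The baseline and measured wavelengths are assumed values; real probes require per-sensor calibration.

```python
# Worked example of the FBG sensitivity quoted above (about 10 pm/°C): converting
# a measured Bragg-wavelength shift into a temperature change. Numbers are
# illustrative; real probes need per-sensor calibration.
SENSITIVITY_PM_PER_C = 10.0        # from the review's quoted figure
baseline_nm = 1550.000             # assumed Bragg wavelength at reference temperature
measured_nm = 1550.312             # assumed wavelength during ablation

shift_pm = (measured_nm - baseline_nm) * 1000.0
delta_t = shift_pm / SENSITIVITY_PM_PER_C
print("Wavelength shift: %.0f pm -> temperature rise: %.1f °C" % (shift_pm, delta_t))
```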
Figure 1. Minimally invasive thermal treatments for tumor removal: laser ablation (LA); microwave ablation (MWA); radiofrequency ablation (RFA); high intensity focused ultrasound (HIFU); and cryoablation.
Figure 2. Concept of the utility of temperature monitoring.
Figure 3. Schematic diagram of a fiber optic sensor based on fluorescence lifetime measurement (from [50]).
Figure 4. Application of FBG sensors to RF ablation [54,55]: the spectrum of an array of five FBGs is recorded during the heating and cooling stages; spectra are shown on the chart after every 20 s of application.
1679 KiB  
Letter
Localisation of Sensor Nodes with Hybrid Measurements in Wireless Sensor Networks
by Muhammad W. Khan, Naveed Salman, Andrew H. Kemp and Lyudmila Mihaylova
Sensors 2016, 16(7), 1143; https://doi.org/10.3390/s16071143 - 22 Jul 2016
Cited by 32 | Viewed by 6144
Abstract
Localisation in wireless networks faces challenges such as high levels of signal attenuation and unknown path-loss exponents, especially in urban environments. In response to these challenges, this paper proposes solutions to localisation problems in noisy environments. A new observation model for localisation of static nodes is developed based on hybrid measurements, namely angle of arrival and received signal strength data. An approach for localisation of sensor nodes is proposed as a weighted linear least squares algorithm. The unknown path-loss exponent associated with the received signal strength is estimated jointly with the coordinates of the sensor nodes via the generalised pattern search method. The algorithm’s performance validation is conducted both theoretically and by simulation. A theoretical mean square error expression is derived, followed by the derivation of the linear Cramér-Rao bound, which serves as a benchmark for the proposed location estimators. Accurate results are demonstrated, with a 25%–30% improvement in estimation accuracy of the weighted linear least squares algorithm compared to the linear least squares solution. Full article
(This article belongs to the Special Issue Scalable Localization in Wireless Sensor Networks)
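To illustrate the estimator class discussed above, the sketch below performs one weighted, linearized least-squares position update from noisy range-like measurements, weighting each anchor by its inverse noise variance. The geometry, noise levels and weights are invented for the example; this is a generic Gauss-Newton-style step, not the paper's closed-form hybrid AoA-RSS formulation.

```python
# Toy weighted (linearized) least squares estimate of a 2-D node position from
# range-like measurements, illustrating the estimator class used above.
# Geometry, noise model and weights are invented; not the paper's formulation.
import numpy as np

rng = np.random.default_rng(3)
anchors = np.array([[0, 0], [50, 0], [50, 50], [0, 50]], dtype=float)
target = np.array([18.0, 31.0])

# Noisy ranges; the last two anchors are deliberately noisier (hence down-weighted).
sigmas = np.array([0.5, 0.5, 2.0, 2.0])
ranges = np.linalg.norm(anchors - target, axis=1) + rng.normal(scale=sigmas)

guess = np.array([25.0, 25.0])            # initial position guess to linearize about
diff = guess - anchors
pred = np.linalg.norm(diff, axis=1)
A = diff / pred[:, None]                  # Jacobian of range w.r.t. position
b = ranges - pred
W = np.diag(1.0 / sigmas**2)              # inverse noise variances as weights

delta = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
print("WLLS estimate:", np.round(guess + delta, 2), "true:", target)
```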
Figure 1. Network deployment with 30 target nodes (TNs) positioned at random unknown locations and 8 anchor nodes (ANs) at fixed known locations.
Figure 2. Performance comparison between linear least squares (LLS) and weighted linear least squares (WLLS) for hybrid angle of arrival (AoA)-received signal strength (RSS) measurements. σ_m² = 4°, ANs = [1–8], α_i = 2.5 ∀ i, ℓ = 2500.
Figure 3. Division of the network into different zones based on the theoretical mean square error (MSE). ANs = [2, 4, 6, 8], α_i = 2.5 ∀ i.
Figure 4. Performance comparison in terms of average RMSE, using optimal subsets of ANs and using all ANs simultaneously. ANs = [2, 4, 6, 8], ℓ = 1000, α_i = 2.5 ∀ i.
Figure 5. Performance evaluation via the theoretical MSE expression and simulation for LLS. ANs = [(2, 4, 6, 8), (1, 2, 3, 5, 6, 7), (1–8)], ℓ = 1500, α_i = 2.5 ∀ i.
Figure 6. Average RMSE comparison using estimated PLEs and true PLEs. ANs = [1–8], ℓ = 2000, τ = 1, ξ = 2, Δ0 = 0.5, v = 10, α_i ∈ U[2, 5], α0 ∈ U[2, 5], σ_p = 0.2.
Figure 7. Performance comparison between LLS, WLLS and the LCRB using hybrid AoA-RSS measurements. ANs = [1–8], α_i = 2.5 ∀ i, ℓ = 2000.
3922 KiB  
Article
Passive Resistor Temperature Compensation for a High-Temperature Piezoresistive Pressure Sensor
by Zong Yao, Ting Liang, Pinggang Jia, Yingping Hong, Lei Qi, Cheng Lei, Bin Zhang, Wangwang Li, Diya Zhang and Jijun Xiong
Sensors 2016, 16(7), 1142; https://doi.org/10.3390/s16071142 - 22 Jul 2016
Cited by 31 | Viewed by 10091
Abstract
The main limitation of high-temperature piezoresistive pressure sensors is the variation of output voltage with operating temperature, which seriously reduces their measurement accuracy. This paper presents a passive resistor temperature compensation technique whose parameters are calculated using differential equations. Unlike traditional experiential arithmetic, the differential equations are independent of the parameter deviation among the piezoresistors of the microelectromechanical pressure sensor and the residual stress caused by the fabrication process or a mismatch in the thermal expansion coefficients. The differential equations are solved using calibration data from uncompensated high-temperature piezoresistive pressure sensors. Tests conducted on the calibrated equipment at various temperatures and pressures show that the passive resistor temperature compensation produces a remarkable effect. Additionally, a high-temperature signal-conditioning circuit is used to improve the output sensitivity of the sensor, which can be reduced by the temperature compensation. Compared to traditional experiential arithmetic, the proposed passive resistor temperature compensation technique exhibits less temperature drift and is expected to be highly applicable for pressure measurements in harsh environments with large temperature variations. Full article
(This article belongs to the Section Physical Sensors)
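For orientation, the quantities the compensation targets are the Wheatstone-bridge offset and its drift with temperature. The small helpers below compute a generic full-bridge output and a thermal zero-shift figure of merit from calibration points; the resistances, voltages and temperatures are illustrative assumptions, and this is not the differential-equation-based parameter calculation proposed in the paper.

```python
# Minimal helpers illustrating the quantities discussed above: the output of a
# piezoresistive Wheatstone bridge and a thermal zero-shift figure of merit
# computed from calibration data. Resistance values and temperatures are
# illustrative, not the sensor's actual calibration table.
def bridge_output(r1, r2, r3, r4, v_supply):
    """Full-bridge output, here taken as V_out = V_s * (R2/(R1+R2) - R3/(R3+R4))."""
    return v_supply * (r2 / (r1 + r2) - r3 / (r3 + r4))

def thermal_zero_shift(v0_by_temp, full_scale, t_range):
    """Zero drift in %FS per °C over the calibrated temperature range."""
    drift = max(v0_by_temp.values()) - min(v0_by_temp.values())
    return drift / full_scale / t_range * 100.0

v_supply = 5.0
print("Offset at 25 °C: %.3f V" % bridge_output(3500, 3510, 3500, 3495, v_supply))

zero_outputs = {25: 0.012, 200: 0.021, 400: 0.034}     # V at zero pressure (assumed)
print("Thermal zero shift: %.4f %%FS/°C" %
      thermal_zero_shift(zero_outputs, full_scale=0.100, t_range=375))
```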
Figure 1. Typical compensation circuit in a low-temperature coefficient resistor network.
Figure 2. Serial connection for compensation of the bridge offset output voltage.
Figure 3. Parallel connection for compensation of the bridge offset output voltage.
Figure 4. Compensation of the bridge sensitivity.
Figure 5. Passive resistor temperature compensation model with a constant voltage supply: (a) negative initial offset voltage; (b) positive initial offset voltage.
Figure 6. The developed high-temperature pressure sensor and fabrication process: (a) the high-temperature pressure sensor; (b) the MEMS fabrication process.
Figure 7. High-temperature and pressure calibration device developed by the authors.
Figure 8. Test results for the uncompensated high-temperature pressure sensor: (a) output voltage calibration curve in the temperature and pressure environment; (b) thermal zero shift; (c) thermal sensitivity shift.
Figure 9. Test results for the compensated high-temperature pressure sensor with the traditional temperature compensation model and experiential arithmetic: (a) output voltage calibration curve in the temperature and pressure environment; (b) thermal zero shift; (c) thermal sensitivity shift.
Figure 10. Solving equations by plotting the parameter space in MATLAB.
Figure 11. Passive resistor temperature compensation circuit using the data in Table 2.
Figure 12. Test results for the compensated high-temperature pressure sensor with the passive resistor temperature compensation model and experiential arithmetic: (a) output voltage calibration curve in the temperature and pressure environment; (b) thermal zero shift; (c) thermal sensitivity shift.
Figure 13. Schematic of a high-temperature signal-conditioning circuit.
Figure 14. Pressure sensor calibration test results.
Figure 15. Sensor device pictures.
8791 KiB  
Article
Developing Ubiquitous Sensor Network Platform Using Internet of Things: Application in Precision Agriculture
by Francisco Javier Ferrández-Pastor, Juan Manuel García-Chamizo, Mario Nieto-Hidalgo, Jerónimo Mora-Pascual and José Mora-Martínez
Sensors 2016, 16(7), 1141; https://doi.org/10.3390/s16071141 - 22 Jul 2016
Cited by 178 | Viewed by 18862
Abstract
The application of Information Technologies to Precision Agriculture methods has clear benefits. Precision Agriculture optimises production efficiency, increases quality, minimises environmental impact and reduces the use of resources (energy, water); however, there are different barriers that have delayed its wide development. Some of the main barriers are expensive equipment, the difficulty of operation and maintenance, and the fact that standards for sensor networks are still under development. Nowadays, new technological developments in embedded devices (hardware and communication protocols), the evolution of Internet technologies (Internet of Things) and ubiquitous computing (Ubiquitous Sensor Networks) allow the development of less expensive systems that are easier to control, install and maintain, using standard protocols with low power consumption. This work develops and tests a low-cost sensor/actuator network platform, based on the Internet of Things, integrating machine-to-machine and human-machine-interface protocols. Edge computing uses this multi-protocol approach to develop control processes in Precision Agriculture scenarios. A greenhouse with hydroponic crop production was developed and tested using Ubiquitous Sensor Network monitoring and edge control on the Internet of Things paradigm. The experimental results showed that Internet technologies and Smart Object Communication Patterns can be combined to encourage the development of Precision Agriculture. They demonstrated added benefits (cost, energy, smart development, acceptance by agricultural specialists) when a project is launched. Full article
(This article belongs to the Special Issue Selected Papers from UCAmI, IWAAL and AmIHEALTH 2015)
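To give a flavour of the edge control mentioned above, the sketch below implements a simple hysteresis rule that an edge device could run locally on a soil-moisture reading to drive an irrigation actuator. The thresholds, sensor driver and actuator call are assumed stand-ins, not the authors' implementation.

```python
# Sketch of the kind of edge-control rule the platform runs locally (assumed
# logic, not the authors' implementation): read a soil-moisture value from a
# local sensor and switch the irrigation actuator on hysteresis thresholds.
import random
import time

MOISTURE_LOW = 30.0    # % volumetric water content, assumed setpoints
MOISTURE_HIGH = 45.0

def read_soil_moisture():
    """Stand-in for the real sensor driver (returns an illustrative reading)."""
    return random.uniform(20.0, 55.0)

def set_pump(on):
    print("pump ->", "ON" if on else "OFF")

pump_on = False
for _ in range(5):                         # a few control cycles for the demo
    moisture = read_soil_moisture()
    if moisture < MOISTURE_LOW and not pump_on:
        pump_on = True
        set_pump(True)
    elif moisture > MOISTURE_HIGH and pump_on:
        pump_on = False
        set_pump(False)
    print("moisture = %.1f %%, pump_on = %s" % (moisture, pump_on))
    time.sleep(0.1)                        # real deployments would sample far less often
```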
Figure 1. Agricultural Production Subsystems.
Figure 2. IoT model used: device-to-gateway pattern (RFC7452).
Figure 3. Design and development: project planning.
Figure 4. IoT ecosystem developed. Sensors, actuators and IP access devices make up the ubiquitous thing layer.
Figure 5. Platform architecture. Internet and Intranet computing are integrated in the Edge Layer. Local processes push applications, data and computing power (services) away from local points to the logical extremes of the network. Logical things are stored and analysed in the Cloud Layer.
Figure 6. REST/PUT-GET and MQTT subscriber/publisher interactions.
Figure 7. Experimental hydroponic station: hydroponic crop in a greenhouse; localization and different components.
Figure 8. Experimental hydroponic station deployment. IoT communication is tested using three USNs controlled by two kinds of embedded devices. Sensors and actuators are logical variables in the Ubidots IoT framework. GUI interfaces, analytics, storage and event programming are tested during plant growth. Local control processes are implemented in these devices.
Figure 9. Response time: (a) messages using the MQTT server in control processes; (b) HTTP requests using the cloud web server for graphical data monitoring.
Figure 10. Example of temperature and relative humidity of the greenhouse (in/out) shown on the cloud web server. The sampling time is defined by the agronomist. These sensors are included in USN3.
Figure 11. Cloud server graphic of soil moisture sensor data and control algorithms designed by the agronomist.
29101 KiB  
Article
Evaluation of Deployment Challenges of Wireless Sensor Networks at Signalized Intersections
by Leyre Azpilicueta, Peio López-Iturri, Erik Aguirre, Carlos Martínez, José Javier Astrain, Jesús Villadangos and Francisco Falcone
Sensors 2016, 16(7), 1140; https://doi.org/10.3390/s16071140 - 22 Jul 2016
Cited by 12 | Viewed by 5530
Abstract
With the growing demand of Intelligent Transportation Systems (ITS) for safer and more efficient transportation, research on and development of such vehicular communication systems have increased considerably in the last years. The use of wireless networks in vehicular environments has grown exponentially. However, it is highly important to analyze radio propagation prior to the deployment of a wireless sensor network in such complex scenarios. In this work, the radio wave characterization for ISM 2.4 GHz and 5 GHz Wireless Sensor Networks (WSNs) deployed taking advantage of the existence of traffic light infrastructure has been assessed. By means of an in-house developed 3D ray launching algorithm, the impact of topology as well as urban morphology of the environment has been analyzed, emulating the realistic operation in the framework of the scenario. The complexity of the scenario, which is an intersection city area with traffic lights, vehicles, people, buildings, vegetation and urban environment, makes necessary the channel characterization with accurate models before the deployment of wireless networks. A measurement campaign has been conducted emulating the interaction of the system, in the vicinity of pedestrians as well as nearby vehicles. A real time interactive application has been developed and tested in order to visualize and monitor traffic as well as pedestrian user location and behavior. Results show that the use of deterministic tools in WSN deployment can aid in providing optimal layouts in terms of coverage, capacity and energy efficiency of the network. Full article
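As a simple companion to the coverage analysis described above, the sketch below uses a log-distance path-loss model to estimate received power and the distance at which it falls to a radio's sensitivity. Transmit powers, gains, the path-loss exponent and sensitivities are assumed values; the paper itself relies on a deterministic 3D ray-launching simulator rather than this empirical model.

```python
# Simple link-budget sketch related to the coverage analysis above: a
# log-distance path-loss model is used to estimate the distance at which the
# received power drops to a radio's sensitivity. Transmit powers, gains,
# path-loss exponent and sensitivities are assumed values, not results of the
# paper's 3D ray-launching simulations.
import math

def received_power_dbm(p_tx_dbm, g_tx_dbi, g_rx_dbi, f_hz, d_m, n=2.7, d0=1.0):
    """Friis at reference distance d0 plus log-distance decay with exponent n."""
    lam = 3e8 / f_hz
    pl_d0 = 20 * math.log10(4 * math.pi * d0 / lam)
    return p_tx_dbm + g_tx_dbi + g_rx_dbi - pl_d0 - 10 * n * math.log10(d_m / d0)

def max_range(p_tx_dbm, sensitivity_dbm, f_hz, n=2.7):
    pl_budget = p_tx_dbm - sensitivity_dbm
    lam = 3e8 / f_hz
    pl_d0 = 20 * math.log10(4 * math.pi / lam)
    return 10 ** ((pl_budget - pl_d0) / (10 * n))

# Example: a 2.4 GHz node at +18 dBm against a -95 dBm receiver.
print("Rx power at 50 m: %.1f dBm" % received_power_dbm(18, 2, 2, 2.4e9, 50))
print("Estimated range:  %.0f m" % max_range(18 + 2 + 2, -95, 2.4e9))
```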
Figure 1
<p>Wave front propagation with rays associated with single wave front points in the considered scenario.</p>
Full article ">Figure 2
<p>Schematic representation of the principle of operation of the in-house developed 3D RL algorithm.</p>
Full article ">Figure 3
<p>Real (<b>left</b>) and schematic (<b>right</b>) view of the considered scenario for simulation in the 3D Ray Launching Algorithm.</p>
Full article ">Figure 4
<p>Schematic view of the position of the different antennas within the considered scenario.</p>
Full article ">Figure 5
<p>Estimation of received power (dBm) on the considered scenario (<span class="html-italic">XY</span> planes) for different heights obtained by the 3D Ray Launching algorithm (<b>a</b>) 1.4 m height for the sensor #1; (<b>b</b>) 3 m height for sensor #1; (<b>c</b>) 1.4 m height for sensor #6; (<b>d</b>) 3 m height for sensor #6.</p>
Full article ">Figure 6
<p>Estimation of received power (dBm) on the considered scenario (<span class="html-italic">YZ</span> planes) for different distances of <span class="html-italic">X</span> obtained by the 3D Ray Launching algorithm (<b>a</b>) <span class="html-italic">X</span> = 73 m for the sensor #6; (<b>b</b>) <span class="html-italic">X</span> = 55 m for sensor #9.</p>
Full article ">Figure 7
<p>Estimation of received power (dBm) on the considered scenario (<span class="html-italic">XY</span> planes) divided by zones obtained by the 3D Ray Launching algorithm (<b>a</b>) 1.4 m height for the sensor #1; (<b>b</b>) 1.4 m height for sensor #6.</p>
Full article ">Figure 7 Cont.
<p>Estimation of received power (dBm) on the considered scenario (<span class="html-italic">XY</span> planes) divided by zones obtained by the 3D Ray Launching algorithm (<b>a</b>) 1.4 m height for the sensor #1; (<b>b</b>) 1.4 m height for sensor #6.</p>
Full article ">Figure 8
<p>Estimation of delay spread (ns) on the considered scenario for different positions of the transmitter antenna in different traffic street lights (<b>a</b>) 3 m height for sensor #5; (<b>b</b>) 3 m height for sensor #6.</p>
Full article ">Figure 9
<p>Power Delay Profile at a given cuboid, located at the central location in the considered scenario.</p>
Full article ">Figure 10
<p>Bit Error Rate for QPSK modulation for different values of <math display="inline"> <semantics> <mrow> <msub> <mi>N</mi> <mn>0</mn> </msub> </mrow> </semantics> </math> for data rate of 250 Kbps.</p>
Full article ">Figure 11
<p>Bit Error Rate for QPSK modulation for different values of <math display="inline"> <semantics> <mrow> <msub> <mi>N</mi> <mn>0</mn> </msub> </mrow> </semantics> </math> for data rate of 57,600 bps.</p>
Full article ">Figure 12
<p>View of the considered scenario with the transmitter position and the measurement points.</p>
Full article ">Figure 13
<p>Measured spectrogram in the 2.41 GHz (<b>top</b>) and 5.9 GHz band (<b>bottom</b>).</p>
Full article ">Figure 14
<p>Comparison simulation vs. measurements for 2.41 GHz and 5.9 GHz in the scenario considered.</p>
Full article ">Figure 15
<p>Comparison of radials of received power (dBm) along the <span class="html-italic">X</span>-axis with the receiver sensitivity (<b>a</b>) ZigBee XBee Pro and ZigBee XBee for <span class="html-italic">Y</span> = 6 m; (<b>b</b>) ZigBee XBee Pro and ZigBee XBee for <span class="html-italic">Y</span> = 14 m; (<b>c</b>) BLE system; (<b>d</b>) Classic Bluetooth; (<b>e</b>) 802.11p Radio for <span class="html-italic">Y</span> = 6 m; (<b>f</b>) 802.11p Radio for <span class="html-italic">Y</span> = 14 m.</p>
Figure 15 Cont.">
Full article ">Figure 16
<p>Aerial view of the radial lines, which are represented in <a href="#sensors-16-01140-f015" class="html-fig">Figure 15</a>, along the <span class="html-italic">X</span>-axis and <span class="html-italic">Y</span>-axis.</p>
Full article ">Figure 17
<p>Radioplanning coverage for different technologies within the considered environment.</p>
Full article ">Figure 18
<p>Channel capacity (bps) vs. the number of users for different number of gateways considered. (<b>a</b>) Data Rate = 250 Kbps; (<b>b</b>) Data Rate = 13 Mbps; (<b>c</b>) Data Rate = 18 Mbps; (<b>d</b>) Data Rate = 27 Mbps.</p>
Full article ">Figure 19
<p>Channel capacity (bps) vs. the number of users for different data rates for eight gateways considered.</p>
Full article ">Figure 20
<p>Traffic monitoring tool.</p>
Full article ">Figure 21
<p>Message bus software architecture.</p>
Full article ">Figure 22
<p>Queueing system and data insertion into the database.</p>
Full article ">Figure 23
<p>Parameters monitored by the WSN. (<b>a</b>) Temperature variation per hour; (<b>b</b>) Maximum, minimum and average temperatures per day; (<b>c</b>) Relative humidity per day; (<b>d</b>) Wind orientation and intensity.</p>
Full article ">Figure 24
<p>Relative humidity measured (<b>a</b>) and PIR activity detected (<b>b</b>).</p>
Full article ">
6516 KiB  
Article
A High Precision Terahertz Wave Image Reconstruction Algorithm
by Qijia Guo, Tianying Chang, Guoshuai Geng, Chengyan Jia and Hong-Liang Cui
Sensors 2016, 16(7), 1139; https://doi.org/10.3390/s16071139 - 22 Jul 2016
Cited by 16 | Viewed by 5904
Abstract
With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging [...] Read more.
With the development of terahertz (THz) technology, the applications of this spectrum have become increasingly wide-ranging, in areas such as non-destructive testing, security applications and medical scanning, in which one of the most important methods is imaging. Unlike remote sensing applications, THz imaging features sources of array elements, including single antennae, that are almost always treated as spherical wave radiators. As such, well-developed methodologies such as the Range-Doppler Algorithm (RDA) are not directly applicable in such near-range situations. The Back Projection Algorithm (BPA) can provide products of high precision at the cost of a high computational burden, while the Range Migration Algorithm (RMA) sacrifices image quality for efficiency. The Phase-shift Migration Algorithm (PMA) is a good alternative, combining features of both of the classical algorithms mentioned above. In this research, PMA is used for mechanical scanning and is extended to array imaging for the first time. In addition, the performance of PMA is studied in detail in contrast to BPA and RMA. It is demonstrated in the simulations and experiments described herein that the algorithm can reconstruct images with high precision. Full article
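For readers unfamiliar with phase-shift migration, the following minimal 2-D Python sketch shows the general idea for a monostatic stepped-frequency scan along a line: the recorded wavefield is transformed to the spatial-frequency domain, extrapolated to each depth with a phase factor derived from the round-trip dispersion relation, and imaged by summing over frequency. It is a simplified illustration under idealized assumptions, not the implementation evaluated in the paper.

import numpy as np

def phase_shift_migration_2d(data, x_step, freqs, z_axis, c=3e8):
    # data: complex echoes [n_x, n_freq] recorded along the aperture at z = 0.
    n_x, _ = data.shape
    kx = 2.0 * np.pi * np.fft.fftfreq(n_x, d=x_step)       # spatial angular frequencies
    k = 2.0 * np.pi * np.asarray(freqs) / c                 # free-space wavenumbers
    D = np.fft.fft(data, axis=0)                            # to the (kx, f) domain
    img = np.zeros((len(z_axis), n_x), dtype=complex)
    for iz, z in enumerate(z_axis):
        arg = (2.0 * k[None, :]) ** 2 - kx[:, None] ** 2    # monostatic: kz^2 = (2k)^2 - kx^2
        kz = np.sqrt(np.maximum(arg, 0.0))
        prop = np.where(arg > 0.0, np.exp(-1j * kz * z), 0.0)   # drop evanescent components
        img[iz, :] = np.fft.ifft((D * prop).sum(axis=1))    # imaging condition: sum over f
    return img

Compared with back projection, the per-depth work reduces to FFT-domain multiplications, which is the efficiency/precision compromise discussed above.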
(This article belongs to the Special Issue Infrared and THz Sensing and Imaging)
Show Figures

Figure 1

Figure 1
<p>The setup of single antenna scanning scenario.</p>
Full article ">Figure 2
<p>Block diagram of PMA for monostatic case.</p>
Full article ">Figure 3
<p>The array geometry and the setup of the experiment for multiple input and output elements: (<b>a</b>) the plus array geometry; (<b>b</b>) the setup of the experiment.</p>
Full article ">Figure 4
<p>Block diagram of PMA for array imaging.</p>
Full article ">Figure 5
<p>Two kinds of targets used in the simulations: (<b>a</b>) Seven extremely small metal points that are arranged in a specific way; (<b>b</b>) Metal fan with eight blades.</p>
Full article ">Figure 6
<p>The slice figures along range direction of different distance with BPA, RMA and PMA for monostatic case respectively: (<b>a</b>) x = 0.03 with BPA; (<b>b</b>) x = 0 with BPA; (<b>c</b>) x = −0.03 with BPA; (<b>d</b>) x = 0.055 with RMA; (<b>e</b>) x = 0 with RMA; (<b>f</b>) x = −0.064 with RMA; (<b>g</b>) x = 0.03 with PMA; (<b>h</b>) x = 0 with PMA; (<b>i</b>) x = −0.03 with PMA.</p>
Full article ">Figure 7
<p>The 3-D reflectivity coefficients calculated with the algorithm of: (<b>a</b>) BPA; (<b>b</b>) RMA; (<b>c</b>) PMA. The brightness of each pixel represents the maximum modulus of the complex voxels along the range direction, while the color, which ranges from red to blue with increasing distance, of each pixel indicates the position where the maximum modulus is situated.</p>
Full article ">Figure 8
<p>Slice figures reconstructed by the algorithm of: (<b>a</b>) BPA; (<b>b</b>) RMA; (<b>c</b>) PMA.</p>
Full article ">Figure 9
<p>The 3-D reflectivity figures are presented which are calculated with the algorithm of: (<b>a</b>) BPA; (<b>b</b>) RMA; (<b>c</b>) PMA.</p>
Full article ">Figure 10
<p>PMA applied to electromagnetic field simulation in the multistatic antenna array case: (<b>a</b>) the slice figure; (<b>b</b>) 3-D reflectivity figure.</p>
Full article ">Figure 11
<p>The setup of the experiment: (<b>a</b>) the experiment platform; (<b>b</b>) the sample to be imaged.</p>
Full article ">Figure 12
<p>Slice figures of the metal fan at: (<b>a</b>) x = 0.398 with BPA; (<b>b</b>) x = 0.317 with RMA; (<b>c</b>) x = 0.397 with PMA.</p>
Full article ">Figure 13
<p>3-D reflectivity figures by: (<b>a</b>) BPA; (<b>b</b>) RMA; (<b>c</b>) PMA. The brightness of each pixel represents the maximum modulus of the complex voxels along the range direction, while the color, which ranges from red to blue with increasing distance, of each pixel indicates the position where the maximum modulus is situated.</p>
Full article ">
3353 KiB  
Article
On Inertial Body Tracking in the Presence of Model Calibration Errors
by Markus Miezal, Bertram Taetz and Gabriele Bleser
Sensors 2016, 16(7), 1132; https://doi.org/10.3390/s16071132 - 22 Jul 2016
Cited by 80 | Viewed by 10264
Abstract
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). [...] Read more.
In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. Full article
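As background for the motion models compared above, a single orientation prediction step that uses gyroscope data as control input (constant angular velocity over one sample period) can be written as a quaternion update. The sketch below is only this one fragment under that assumption, not any of the evaluated filters.

import numpy as np

def quat_mult(q, r):
    # Hamilton product of two quaternions given as (w, x, y, z).
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([
        w0 * w1 - x0 * x1 - y0 * y1 - z0 * z1,
        w0 * x1 + x0 * w1 + y0 * z1 - z0 * y1,
        w0 * y1 - x0 * z1 + y0 * w1 + z0 * x1,
        w0 * z1 + x0 * y1 - y0 * x1 + z0 * w1,
    ])

def predict_orientation(q, gyro_rad_s, dt):
    # Propagate a unit quaternion assuming the angular rate is constant over dt.
    rate = np.linalg.norm(gyro_rad_s)
    if rate * dt < 1e-12:
        return q
    axis = gyro_rad_s / rate
    dq = np.concatenate(([np.cos(rate * dt / 2.0)], np.sin(rate * dt / 2.0) * axis))
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)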
(This article belongs to the Special Issue Inertial Sensors and Systems 2016)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Two different biomechanical model representations. Note the additional world coordinate system in the kinematic chain model.</p>
Full article ">Figure 2
<p>Capturing setup for the real data scenario. In the picture on the left, the segment coordinate systems are associated to the proximal ends of the segments. Note, the axes are orthogonal and only roughly aligned with the anatomical axes of rotation through the skeleton fitting of the optical system as described in <a href="#sec2dot6dot1-sensors-16-01132" class="html-sec">Section 2.6.1</a>. Precise alignment with the anatomical axes was not in the focus of this study. In the N-pose, for the right arm, the <span class="html-italic">x</span>-axes are chosen perpendicular to the frontal plane pointing anterior, the <span class="html-italic">y</span>-axes are perpendicular to the transverse plane pointing along the segments in the direction from the distal to the proximal ends and the <span class="html-italic">z</span>-axes are perpendicular to the sagittal plane pointing lateral. The picture also indicates the initial arm configuration for <span class="html-italic">real-slow</span> and <span class="html-italic">real-fast</span>.</p>
Full article ">Figure 3
<p>Real data scenario: Euler angle sequences (<span class="html-italic">z</span>, <span class="html-italic">x</span>′, <span class="html-italic">y</span>″ convention) and ranges of motion, [minimum angle, maximum angle] (each provided in degrees), of <span class="html-italic">real-slow</span> (<b>a</b>–<b>c</b>) and <span class="html-italic">real-fast</span> (<b>d</b>–<b>f</b>). The segment axes and initial segment orientations are as shown in <a href="#sensors-16-01132-f002" class="html-fig">Figure 2</a>. Note, the shoulder angles (left column) are represented w.r.t. the initial upper arm configuration <span class="html-italic">q</span><sup>GS</sup><sub>0,0</sub>, rather than w.r.t. the global frame, in order to cancel out the unknown heading offset for easier interpretation.</p>
Full article ">Figure 4
<p>Simulation scenario: angle sequence applied to each rotational DoF of the three segment kinematic chain model (cf. <a href="#sensors-16-01132-t005" class="html-table">Table 5</a>) used for simulating <span class="html-italic">sim-fast-artificial</span>.</p>
Full article ">Figure 5
<p>Simulation scenario: Per segment mean angular error distributions on <span class="html-italic">sim-fast</span> for along-bone and out-of-bone I2S orientation calibration errors (cf. <a href="#sec2dot6dot3-sensors-16-01132" class="html-sec">Section 2.6.3</a>).</p>
Full article ">Figure 6
<p>Simulation scenario: Per segment mean angular error distributions on <span class="html-italic">sim-fast</span> for along-bone and out-of-bone I2S position calibration errors (cf. <a href="#sec2dot6dot3-sensors-16-01132" class="html-sec">Section 2.6.3</a>).</p>
Full article ">Figure 7
<p>Simulation scenario: The upper row shows the per segment mean angular error distributions on <span class="html-italic">sim-fast</span> for segment length errors. The lower row shows the errors w/o magnetometers split into yaw and pitch/roll errors.</p>
Full article ">
419 KiB  
Article
Incentives for Delay-Constrained Data Query and Feedback in Mobile Opportunistic Crowdsensing
by Yang Liu, Fan Li and Yu Wang
Sensors 2016, 16(7), 1138; https://doi.org/10.3390/s16071138 - 21 Jul 2016
Cited by 15 | Viewed by 5772
Abstract
In this paper, we propose effective data collection schemes that stimulate cooperation between selfish users in mobile opportunistic crowdsensing. A query issuer generates a query and requests replies within a given delay budget. When a data provider receives the query for the first [...] Read more.
In this paper, we propose effective data collection schemes that stimulate cooperation between selfish users in mobile opportunistic crowdsensing. A query issuer generates a query and requests replies within a given delay budget. When a data provider receives the query for the first time from an intermediate user, the former replies to it and authorizes the latter as the owner of the reply. Different data providers can reply to the same query. When a user that owns a reply meets the query issuer that generates the query, it requests the query issuer to pay credits. The query issuer pays credits and provides feedback to the data provider, which gives the reply. When a user that carries a feedback meets the data provider, the data provider pays credits to the user in order to adjust its claimed expertise. Queries, replies and feedbacks can be traded between mobile users. We propose an effective mechanism to define rewards for queries, replies and feedbacks. We formulate the bargain process as a two-person cooperative game, whose solution is found by using the Nash theorem. To improve the credit circulation, we design an online auction process, in which the wealthy user can buy replies and feedbacks from the starving one using credits. We have carried out extensive simulations based on real-world traces to evaluate the proposed schemes. Full article
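The two-person cooperative game mentioned above can be illustrated with a toy pricing example: with zero disagreement utilities, the Nash solution maximizes the product of the two players' utility gains, which for a buyer valuation v and a seller cost c simply splits the surplus. The utility functions, the numeric values and the use of scipy below are illustrative assumptions, not the formulation used in the paper.

import numpy as np
from scipy.optimize import minimize_scalar

def nash_bargain_price(value_to_buyer, cost_to_seller):
    # Price maximizing the Nash product u_buyer * u_seller with zero disagreement points.
    def neg_nash_product(p):
        u_buyer = value_to_buyer - p
        u_seller = p - cost_to_seller
        if u_buyer <= 0.0 or u_seller <= 0.0:
            return 0.0
        return -(u_buyer * u_seller)
    res = minimize_scalar(neg_nash_product,
                          bounds=(cost_to_seller, value_to_buyer),
                          method="bounded")
    return res.x

print(nash_bargain_price(10.0, 4.0))   # about 7.0 credits: the surplus is split equally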
Show Figures

Figure 1

Figure 1
<p>An example of data query and feedback in a small community where the red dotted curved arrow indicates the movement of a user, and the black solid straight line arrow depicts communication.</p>
Full article ">Figure 2
<p>Distribution of available credits.</p>
Full article ">Figure 3
<p>Impact of credit amount on reply delay.</p>
Full article ">Figure 4
<p>Distribution of the packet exchange.</p>
Full article ">Figure 5
<p>Distribution of the failed transmissions.</p>
Full article ">Figure 6
<p>Convergence of expertise.</p>
Full article ">Figure 7
<p>Performance trend with increasing queue size.</p>
Full article ">Figure 8
<p>Performance trend with increasing generation rate.</p>
Full article ">Figure 9
<p>Performance trend with increasing delay budget.</p>
Full article ">
1138 KiB  
Article
Experimental Evaluation of Unicast and Multicast CoAP Group Communication
by Isam Ishaq, Jeroen Hoebeke, Ingrid Moerman and Piet Demeester
Sensors 2016, 16(7), 1137; https://doi.org/10.3390/s16071137 - 21 Jul 2016
Cited by 28 | Viewed by 6905
Abstract
The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally-connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained [...] Read more.
The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally-connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained Application Protocol (CoAP) is a new alternative standard protocol that implements the same principles as the Hypertext Transfer Protocol (HTTP), but is tailored towards constrained devices. In many IoT application domains, devices need to be addressed in groups in addition to being addressable individually. Two main approaches are currently being proposed in the IoT community for CoAP-based group communication. The main difference between the two approaches lies in the underlying communication type: multicast versus unicast. In this article, we experimentally evaluate those two approaches using two wireless sensor testbeds and under different test conditions. We highlight the pros and cons of each of them and propose combining these approaches in a hybrid solution to better suit certain use case requirements. Additionally, we provide a solution for multicast-based group membership management using CoAP. Full article
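A back-of-the-envelope response-time model makes the trade-off between the two approaches easy to see, as sketched below: sequential unicast requests accumulate the inter-request delay, while a multicast request is bounded by the slowest member plus its random back-off. All delays, the back-off range and the absence of retransmissions are assumptions for illustration, not measurements from the testbeds.

import numpy as np

rng = np.random.default_rng(0)

def unicast_entity_time(rtts, inter_request_delay):
    # Requests are sent one by one; the entity reply is complete when the last member answers.
    send_times = np.arange(len(rtts)) * inter_request_delay
    return float(np.max(send_times + rtts))

def multicast_entity_time(rtts, max_backoff):
    # One multicast request; members answer after a random back-off to avoid collisions.
    backoffs = rng.uniform(0.0, max_backoff, size=len(rtts))
    return float(np.max(backoffs + rtts))

rtts = rng.uniform(0.02, 0.08, size=10)        # hypothetical 20-80 ms member round-trip times
print(unicast_entity_time(rtts, 0.05))          # grows roughly linearly with the group size
print(multicast_entity_time(rtts, 0.10))        # bounded by back-off plus the slowest reply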
(This article belongs to the Special Issue Intelligent Internet of Things (IoT) Networks)
Show Figures

Figure 1

Figure 1
<p>Example of CoAP Non-confirmable Message (NON) exchange. A Message ID (MID) is needed in the header for duplicate detection.</p>
Full article ">Figure 2
<p>Example of CoAP Confirmable Message (CON) exchange. If the client does not receive an ACK for its CON within a certain time, it retransmits the same CON again until it gets acknowledged or until the client runs out of retransmission attempts.</p>
Full article ">Figure 3
<p>Clients create entities consisting of several smart object resources on the entity manager.</p>
Full article ">Figure 4
<p>An example of the creation and usage of a multicast entity. LLN, Low-power and Lossy Network.</p>
Full article ">Figure 5
<p>Screenshots using the CoAP++ client GUI to create, query and delete a multicast entity of three members. (<b>a</b>) Creating a multicast entity mcast1; (<b>b</b>) profile of mcast1; (<b>c</b>) querying mcast1 with default properties; (<b>d</b>) querying mcast1 with entity operation avg; (<b>e</b>) using mcast1 as an anycast entity; (<b>f</b>) deleting the entity mcast1.</p>
Full article ">Figure 6
<p>Experimental setup at w-iLab.t Zwijnaarde: generic wireless testbed. The circles represent the location of the nodes.</p>
Full article ">Figure 7
<p>Response time of an entity with five members as a function of the delay between individual requests, evaluated using both a simulator and the experimental testbed.</p>
Full article ">Figure 8
<p>Entity response time for different group sizes as a function of the delay between individual requests, evaluated using the testbed.</p>
Full article ">Figure 9
<p>Entity response time per member for different group sizes as a function of the delay between individual requests to members, evaluated using the testbed.</p>
Full article ">Figure 10
<p>Member reliability for both unicast and multicast group communication and varying group sizes in the presence of Wi-Fi interference. The member reliability is much better when using unicast-based group communication.</p>
Full article ">Figure 11
<p>Entity reliability for both unicast and multicast group communication and varying group sizes in the presence of Wi-Fi interference. The reliability of the complete group is less than the reliability of individual members (<a href="#sensors-16-01137-f010" class="html-fig">Figure 10</a>). Again, the reliability of the complete group is much better when using entity-based group communication.</p>
Full article ">Figure 12
<p>Entity response time for both unicast and multicast group communication and varying group sizes in the presence of Wi-Fi interference.</p>
Full article ">Figure 13
<p>Entity response time in the presence of Wi-Fi interference for an entity of 20 members versus a nested entity consisting of two smaller entities having each 10 members.</p>
Full article ">Figure 14
<p>Entity response time of an entity with 10 members in the presence of Wi-Fi interference for varying initial back-off times.</p>
Full article ">Figure 15
<p>Experimental setup at the w-iLab.t office. The circles represent the location of the nodes on the third floor of the office building. The filled circles represent the nodes used in the experiment. The other nodes were idle.</p>
Full article ">Figure 16
<p>Entity response times for both unicast and multicast group communication in a real-life environment under low and normal network usage. Unicast reliability is achieved by exponential retransmissions and leads to a large increase in the response times.</p>
Full article ">
6715 KiB  
Article
A Smart Spoofing Face Detector by Display Features Analysis
by ChinLun Lai and ChiuYuan Tai
Sensors 2016, 16(7), 1136; https://doi.org/10.3390/s16071136 - 21 Jul 2016
Cited by 7 | Viewed by 6832
Abstract
In this paper, a smart face liveness detector is proposed to prevent the biometric system from being “deceived” by the video or picture of a valid user that the counterfeiter took with a high definition handheld device (e.g., iPad with retina display). By [...] Read more.
In this paper, a smart face liveness detector is proposed to prevent the biometric system from being “deceived” by the video or picture of a valid user that the counterfeiter took with a high definition handheld device (e.g., iPad with retina display). By analyzing the characteristics of the display platform and using an expert decision-making core, we can effectively detect whether a spoofing attempt comes from a fake face shown on a high definition display by verifying the chromaticity regions in the captured face. That is, a live or spoof face can be distinguished precisely by the designed optical image sensor. In summary, with the proposed method/system, a normal optical image sensor can be upgraded to a powerful version that detects spoofing actions. The experimental results prove that the proposed detection system achieves a very high detection rate compared to the existing methods and is thus practical to implement directly in authentication systems. Full article
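The chromaticity-analysis stage can be illustrated with a minimal hue/saturation statistic over a facial region of interest. The sketch below covers only this feature-measurement step with OpenCV, not the expert decision-making core of the paper, and the file name is a hypothetical placeholder.

import cv2
import numpy as np

def hue_saturation_stats(bgr_roi):
    # Mean hue and saturation of a facial region; a face reproduced on an LED display
    # is expected to shift these statistics because of the narrow-band backlight spectrum.
    hsv = cv2.cvtColor(bgr_roi, cv2.COLOR_BGR2HSV)
    h, s, _ = cv2.split(hsv)
    return float(np.mean(h)), float(np.mean(s))

roi = cv2.imread("face_roi.png")   # hypothetical cropped eye/nose/mouth patch
if roi is not None:
    print(hue_saturation_stats(roi))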
Show Figures

Figure 1

Figure 1
<p>Spoofing the biometric system with retina identification technology. Demonstration of spoofing the biometric system with retina resolution display, (<b>a</b>) the actual filmed image, and (<b>b</b>) image reconstruction by an iPad with retinal display.</p>
Full article ">Figure 2
<p>An example spectrum of most of the white color light-emitting diodes (LEDs).</p>
Full article ">Figure 3
<p>The color space of hue vector.</p>
Full article ">Figure 4
<p>The four kinds of original color image. Original image of the displayed color for (<b>a</b>) black; (<b>b</b>) white; (<b>c</b>) red; and (<b>d</b>) blue.</p>
Full article ">Figure 5
<p>Comparison images of the corresponding colors shown on the high definition LED display monitor. The images displayed on the LED monitor for (<b>a</b>) black; (<b>b</b>) white; (<b>c</b>) red; and (<b>d</b>) blue.</p>
Full article ">Figure 6
<p>Face image with the region of interest (ROI) being identified and captured by the STASM algorithm. (<b>a</b>) Capturing the region of the face with STASM; and (<b>b</b>) Converting the image into the resolution of 320 × 320.</p>
Full article ">Figure 7
<p>The interest face features of eyes, nose, mouths, and eyebrows.</p>
Full article ">Figure 8
<p>Hue, Saturation and Value (HSV) images of the authentic image and the spoofed image. The eyes, nose, mouth, and eyebrows feature images in HSV space. (<b>a</b>) The original (authentic) face; (<b>b</b>) The reproduced (spoofed) face.</p>
Full article ">Figure 9
<p>The relation between the saturation and the average hue of the authentic image (left) and the spoofed image (right) using nose (blue) and mouth (red) as examples. (<b>a</b>) real image; and (<b>b</b>) spoofed image.</p>
Full article ">Figure 10
<p>The relation between the saturation and the average hue of the authentic image (left) and the spoofed image (right) using eyes (blue) and eyebrows (red) as examples. (<b>a</b>) authentic image; and (<b>b</b>) spoofed image.</p>
Full article ">Figure 11
<p>The hue distribution of the authentic image of (<b>a</b>) nose, (<b>c</b>) mouth, and the spoofed image of (<b>b</b>) nose, (<b>d</b>) mouth as examples.</p>
Full article ">Figure 12
<p>The hue distribution of the authentic image (Left column) and the spoofed image (Right column) using eyes (Upper two rows) and eyebrows (Lower two rows) as examples.</p>
Full article ">Figure 13
<p>The proposed probabilistic neural network (PNN) structure.</p>
Full article ">Figure 14
<p>Samples of test face images (in 4557 images). (<b>a</b>) Samples of authentic images; (<b>b</b>) Samples of spoofed images (displayed by iPad).</p>
Full article ">Figure 15
<p>Samples of detection error cases. (<b>a</b>) False reject case images; and (<b>b</b>) False reject case images. It is observed that face samples with blue eyes more often resulted in false reject error.</p>
Full article ">Figure 16
<p>Samples of face with high reflected regions.</p>
Full article ">
6002 KiB  
Article
Design and Development for Capacitive Humidity Sensor Applications of Lead-Free Ca,Mg,Fe,Ti-Oxides-Based Electro-Ceramics with Improved Sensing Properties via Physisorption
by Ashis Tripathy, Sumit Pramanik, Ayan Manna, Satyanarayan Bhuyan, Nabila Farhana Azrin Shah, Zamri Radzi and Noor Azuan Abu Osman
Sensors 2016, 16(7), 1135; https://doi.org/10.3390/s16071135 - 21 Jul 2016
Cited by 113 | Viewed by 10590
Abstract
Despite the many attractive potential uses of ceramic materials as humidity sensors, some unavoidable drawbacks, including toxicity, poor biocompatibility, long response and recovery times, low sensitivity and high hysteresis have stymied the use of these materials in advanced applications. Therefore, in present investigation, [...] Read more.
Despite the many attractive potential uses of ceramic materials as humidity sensors, some unavoidable drawbacks, including toxicity, poor biocompatibility, long response and recovery times, low sensitivity and high hysteresis, have stymied the use of these materials in advanced applications. Therefore, in the present investigation, we developed a capacitive humidity sensor using lead-free Ca,Mg,Fe,Ti-Oxide (CMFTO)-based electro-ceramics with perovskite structures synthesized by solid-state step-sintering. This technique helps maintain the submicron-sized porous morphology of the developed lead-free CMFTO electro-ceramics while providing enhanced water physisorption behaviour. In comparison with conventional capacitive humidity sensors, the presented CMFTO-based humidity sensor shows a sensitivity of up to 3000%, higher than that of other materials, even at a lower signal frequency. The sensor also shows a rapid response (14.5 s) and recovery (34.27 s), and very low hysteresis (3.2%) in the 33%–95% relative humidity range, which are much lower values than those of existing conventional sensors. Therefore, CMFTO nano-electro-ceramics appear to be very promising materials for fabricating high-performance capacitive humidity sensors. Full article
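Assuming the common definition of capacitive sensitivity as the relative capacitance change with respect to the low-humidity reference (the abstract does not spell out the exact formula, and the capacitance readings below are hypothetical), the quoted ~3000% figure corresponds to roughly a thirty-fold capacitance increase across the measured humidity range:

def sensitivity_percent(c_at_rh, c_reference):
    # Relative capacitance change w.r.t. the low-humidity reference, in percent.
    return (c_at_rh - c_reference) / c_reference * 100.0

c_33, c_95 = 50.0, 1550.0                 # hypothetical readings in pF at 33% and 95% RH
print(sensitivity_percent(c_95, c_33))    # 3000.0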
(This article belongs to the Section Physical Sensors)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Flow chart for sensor fabrication with the morphology at different sintering temperature.</p>
Full article ">Figure 2
<p>Experimental setup for the measurement of the capacitive humidity response of the electro-ceramic based sensors.</p>
Full article ">Figure 3
<p>Pore size distribution (PSD) and relative cumulative frequency (RCF) of (<b>a</b>) unsintered and sintered at (<b>b</b>) 450 °C, (<b>c</b>) 650 °C, (<b>d</b>) 850 °C and (<b>e</b>) 1050 °C materials, measured from the electron micrographs using ImageJ.</p>
Full article ">Figure 4
<p>Density, open-porosity, water absorption and water contact angle (WCA) of (<b>a</b>) unsintered and sintered at (<b>b</b>) 450 °C; (<b>c</b>) 650 °C; (<b>d</b>) 850 °C and (<b>e</b>) 1050 °C ceramic samples.</p>
Full article ">Figure 5
<p>The response curves of the capacitance versus relative humidity (RH) at different frequencies of CMFTO electro-ceramic at 25 °C. Inset image represents the variation of capacitance with RH at 25 °C at different frequencies in logarithmic scale (log(<span class="html-italic">C</span>) vs. % RH). Note: the capacitance increases monotonically with % RH at different frequencies, but the rate of increase is higher at 10<sup>2</sup> Hz.</p>
Full article ">Figure 6
<p>The variations of capacitance with frequency at different humidity conditions (33%–95% RH) for CMFTO based humidity sensor at 25 °C. Inset image represents the variation of capacitance with frequency at different RH in logarithmic scale (log(<span class="html-italic">C</span>) vs. log(RH)). Note: The value of capacitance increases with increased % RH, but decreases with increased frequency. The rate of decrease is higher at lower frequencies (&lt;10<sup>4</sup> Hz) and in the higher humidity range (&gt;85% RH).</p>
Full article ">Figure 7
<p>The sensitivity (%S) response of CMFTO based capacitive sensor with % RH at different test frequencies at 25 °C. Note: the sensitivity increases monotonically with % RH at different frequencies, but the value of sensitivity is highest (~3000%) at 10<sup>2</sup> Hz. Hence, 10<sup>2</sup> Hz is considered as the most suitable frequency for the further analysis.</p>
Full article ">Figure 8
<p>Schematic representation of the humidity sensing mechanism of CMFTO electro-ceramic at different humidity environments. Note: the adsorption of water molecules on CMFTO nanoceramic is characterized by two processes. The first-layer water molecules (at lower humidity) are attached to the CMFTO electro-ceramic through two hydrogen bonds. As a result, the water molecules are not able to move freely and thus, the impedance value increases. In contrast, from the second layer (at higher humidity), water molecules are adsorbed only through one hydrogen bond. Hence, the water molecules are able to move freely and thus, the impedance value decreases. This causes the capacitance value to increase.</p>
Full article ">Figure 9
<p>The transformed response curves of logarithmic capacitance (logC) vs. RH of CMFTO electro-ceramic based capacitive sensor. Note: the first linear transformation curve (red line) is well fitted by logC = 0.0102RH − 10.8148 in the RH range from 33% to 75%, and the second linear transformation curve (green line) is well fitted by logC = 0.0532RH − 14.0401 in the higher humidity range (&gt;75% RH). Here, the regression coefficient R<sup>2</sup> indicates the goodness of the linear fits.</p>
Full article ">Figure 10
<p>The hysteresis property of CMFTO electro-ceramic-based capacitive humidity sensor at 10<sup>2</sup> Hz under 25 °C. Note: the value of hysteresis is extremely low (~3.2%) compared to other conventional capacitive sensors. The low hysteresis value is mainly due to the fast adsorption and desorption rate of water particles on the surface of the CMFTO electro-ceramic.</p>
Full article ">Figure 11
<p>Response and recovery times of the CMFTO humidity sensors for humidity levels between 33% RH and 95% RH at 10<sup>2</sup> Hz. (<b>A</b>) Response time (14.5 s); (<b>B</b>) Recovery time (34.27 s).</p>
Full article ">Figure 12
<p>Stability analysis of CMFTO electro-ceramic-based humidity sensor measured at a test frequency 10<sup>2</sup> Hz at 25 °C. Note: The measurement was conducted repeatedly for 30 days at 2-day interval and very negligible changes are observed.</p>
Full article ">Figure 13
<p>Complex impedance plots and equivalent circuits of CMFTO based electro-ceramic under different humidity levels. (<b>A</b>) At lower humidity range (33%–75% RH), single semicircles are formed; the inset represents an equivalent circuit at lower RH; (<b>B</b>) At higher humidity condition (85%–95% RH), the radii of semicircle decrease and a straight line appears, and the straight lines become longer with increasing of humidity; the inset represents an equivalent circuit at higher RH. <span class="html-italic">R</span><sub>f</sub> and <span class="html-italic">C</span><sub>f</sub>: are the resistance and capacitance of CMFTO electro-ceramics, respectively; <span class="html-italic">Z</span><sub>i</sub>: interface impedance between CMFTO electro-ceramic surface and electrode.</p>
Full article ">
3995 KiB  
Article
Enhanced Gender Recognition System Using an Improved Histogram of Oriented Gradient (HOG) Feature from Quality Assessment of Visible Light and Thermal Images of the Human Body
by Dat Tien Nguyen and Kang Ryoung Park
Sensors 2016, 16(7), 1134; https://doi.org/10.3390/s16071134 - 21 Jul 2016
Cited by 17 | Viewed by 8098
Abstract
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed [...] Read more.
With higher demand from users, surveillance systems are currently being designed to provide more information about the observed scene, such as the appearance of objects, types of objects, and other information extracted from detected objects. Although the recognition of gender of an observed human can be easily performed using human perception, it remains a difficult task when using computer vision system images. In this paper, we propose a new human gender recognition method that can be applied to surveillance systems, based on quality assessment of human areas in visible light and thermal camera images. Our research is novel in the following two ways: First, we utilize the combination of visible light and thermal images of the human body for the recognition task based on quality assessment. We propose a quality measurement method to assess the quality of image regions so as to remove the effects of background regions in the recognition system. Second, by combining the features extracted using the histogram of oriented gradient (HOG) method with the measured qualities of image regions, we form a new image feature, called the weighted HOG (wHOG), which is used for efficient gender recognition. Experimental results show that our method produces more accurate estimation results than the state-of-the-art recognition method that uses human body images. Full article
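The weighting idea behind the wHOG feature can be sketched as scaling each cell's orientation histogram by the measured quality of that image region before concatenating the descriptor. The sketch below uses random placeholder data and a simple max-normalized weight, which is an assumption rather than the authors' exact formulation.

import numpy as np

def weighted_hog(cell_histograms, quality_map):
    # cell_histograms: [cells_y, cells_x, bins]; quality_map: [cells_y, cells_x].
    w = quality_map / (quality_map.max() + 1e-12)      # normalize quality weights to [0, 1]
    return (cell_histograms * w[..., None]).reshape(-1)

rng = np.random.default_rng(1)
hog_cells = rng.random((4, 4, 9))        # hypothetical 4x4 cell grid, 9 orientation bins
quality = rng.random((4, 4))             # hypothetical per-cell quality measurements
print(weighted_hog(hog_cells, quality).shape)   # (144,) concatenated descriptor

Low-quality (background-dominated) cells are thereby down-weighted before the descriptor is passed to the classifier.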
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Overall procedure of our proposed method for gender recognition using visible light and thermal images, including image quality assessment.</p>
Full article ">Figure 2
<p>A demonstration of the HOG feature extraction method: (<b>a</b>) the input image; (<b>b</b>) gradient map with gradient strength and direction of a sub-block of the input image; (<b>c</b>) accumulated gradient orientation; and (<b>d</b>) histogram of oriented gradients.</p>
Full article ">Figure 3
<p>Example of mean and standard deviation maps obtained from a thermal image: (<b>a</b>) a thermal image with background (low illumination regions) and foreground (high illumination regions); (<b>b</b>) MEAN map; and (<b>c</b>) STD map.</p>
Full article ">Figure 4
<p>Demonstration of methodology for extracting the wHOG feature by combining the HOG features of images and weighted values of corresponding sub-blocks: (<b>a</b>) input image; (<b>b</b>) quality measurement map (MEAN map or STD map); and (<b>c</b>) the wHOG feature by combining (<b>a</b>) and (<b>b</b>).</p>
Full article ">Figure 5
<p>Dual-camera setup that combines visible light and thermal cameras, as used to collect the database in our experiments: (<b>a</b>) dual-camera system; (<b>b</b>) setup of our camera system in actual surveillance environments; (<b>c</b>) distances between the camera and the user, and the height of the camera.</p>
Full article ">Figure 6
<p>Example of visible light and thermal images in the collected database used in our experiments: (<b>a</b>–<b>c</b>) visible light-thermal image pairs of the female class with (<b>a</b>) front view; (<b>b</b>) back view; and (<b>c</b>) side view; (<b>d</b>–<b>f</b>) visible light-thermal image pairs of the male class with (<b>d</b>) front view; (<b>e</b>) back view; and (<b>f</b>) side view.</p>
Full article ">Figure 7
<p>The average ROC curve of a previous recognition method [<a href="#B16-sensors-16-01134" class="html-bibr">16</a>] with different kinds of images and combination methods.</p>
Full article ">Figure 8
<p>Average ROC curve of our proposed method for gender recognition using different kinds of images and combination methods.</p>
Full article ">Figure 9
<p>Example recognition results for our proposed method, as compared to the previous recognition method: (<b>a</b>) male image in the back view; (<b>b</b>) and (<b>f</b>) female images in the back view; (<b>c</b>) male image in the side view; (<b>d</b>) male image in the front view; and (<b>e</b>) female image in the front view.</p>
Full article ">Figure 10
<p>Example of recognition results of our proposed method where error cases occurred: (<b>a</b>,<b>b</b>) female images in back view; (<b>c</b>,<b>f</b>) male images in side view; (<b>d</b>) female image in front view; and (<b>e</b>) male image in front view.</p>
Full article ">Figure 11
<p>Average ROC curve of our method and EWHOG method for gender recognition.</p>
Full article ">
9419 KiB  
Article
Force-Sensing Silicone Retractor for Attachment to Surgical Suction Pipes
by Tetsuyou Watanabe, Toshio Koyama, Takeshi Yoneyama and Mitsutoshi Nakada
Sensors 2016, 16(7), 1133; https://doi.org/10.3390/s16071133 - 21 Jul 2016
Cited by 7 | Viewed by 6130
Abstract
This paper presents a novel force-sensing silicone retractor that can be attached to a surgical suction pipe to improve the usability of the suction and retraction functions during neurosurgery. The retractor enables simultaneous utilization of three functions: suction, retraction, and retraction-force sensing. The [...] Read more.
This paper presents a novel force-sensing silicone retractor that can be attached to a surgical suction pipe to improve the usability of the suction and retraction functions during neurosurgery. The retractor enables simultaneous utilization of three functions: suction, retraction, and retraction-force sensing. The retractor also reduces the number of tool changes and ensures safe retraction through visualization of the magnitude of the retraction force. The proposed force-sensing system is based on a force visualization mechanism through which the force is displayed in the form of motion of a colored pole. This enables surgeons to estimate the retraction force. When a fiberscope or camera is present, the retractor enables measurement of the retraction force with a resolution of 0.05 N. The retractor has advantages of being disposable, inexpensive, and easy to sterilize or disinfect. Finite element analysis and experiments demonstrate the validity of the proposed force-sensing system. Full article
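Since the force is read off as the travel of the colored pole, the receiver-side calibration reduces to a first-order fit of force against the observed displacement. The calibration pairs below are hypothetical placeholders, not the measured data reported in the paper.

import numpy as np

force_n = np.array([0.00, 0.05, 0.10, 0.15, 0.20])    # applied retraction forces (N)
travel_px = np.array([0.0, 6.2, 12.1, 18.4, 24.0])    # observed pole-tip travel (pixels)

slope, intercept = np.polyfit(travel_px, force_n, 1)   # linear calibration

def force_from_travel(px):
    # Map an observed pole-tip displacement back to a retraction force estimate.
    return slope * px + intercept

print(round(force_from_travel(15.0), 3))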
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Suction pipe/device and silicone retractor for suction pipe.</p>
Full article ">Figure 2
<p>Target situation I: surgeon evaluates retraction force visually.</p>
Full article ">Figure 3
<p>Target situation II: retraction force is captured with fiberscope.</p>
Full article ">Figure 4
<p>Principle of force sensing.</p>
Full article ">Figure 5
<p>Schematic top and side views of structure of silicone retractor including a force-sensing function.</p>
Full article ">Figure 6
<p>Overview of manufacture and assembly processes of silicone retractor including a force-sensing function.</p>
Full article ">Figure 7
<p>Manufactured silicone retractor including a force-sensing function.</p>
Full article ">Figure 8
<p>Results of FEM analysis under applied load of 0.1 N.</p>
Full article ">Figure 9
<p>Schematic of experimental setup.</p>
Full article ">Figure 10
<p>Photograph of experimental setup.</p>
Full article ">Figure 11
<p>Photographs of silicone retractor at loads of (<b>a</b>) 0.00 N and (<b>b</b>) 0.2 N.</p>
Full article ">Figure 12
<p>Derivation of distance moved by pole tip by using Imtool.</p>
Full article ">Figure 13
<p>Relationship between retraction force and distance moved by pole tip.</p>
Full article ">Figure 14
<p>Schematic of experimental setup in the case of retraction of curved and soft surfaces.</p>
Full article ">Figure 15
<p>Curved surfaces made of semi-spherical gelatin: (<b>a</b>) Large (radius: 3 mm); (<b>b</b>) Medium (radius: 2.5 mm); and (<b>c</b>) Small (radius: 2 mm).</p>
Full article ">Figure 16
<p>Photograph of case of retracting large-sized gelatin curved surface ((<b>a</b>) 0.00 N; (<b>b</b>) 0.2 N).</p>
Full article ">Figure 17
<p>Photograph of case of retracting medium-sized gelatin curved surface ((<b>a</b>) 0.00 N; (<b>b</b>) 0.2 N).</p>
Full article ">Figure 18
<p>Photograph of case of retracting small-sized gelatin curved surface ((<b>a</b>) 0.00 N; (<b>b</b>) 0.2 N).</p>
Full article ">Figure 19
<p>Relationship between retraction force and distance moved by pole tip for the case of retraction at center of curved surfaces; the results shown in <a href="#sensors-16-01133-f013" class="html-fig">Figure 13</a> are also included here for comparison purposes.</p>
Full article ">Figure 20
<p>Relationship between retraction force and distance moved by pole tip for the case of retraction at a 5 mm distance from center of curved surfaces, the results shown in <a href="#sensors-16-01133-f013" class="html-fig">Figure 13</a> are also included here for comparison purposes.</p>
Full article ">
2661 KiB  
Article
Modeling for IFOG Vibration Error Based on the Strain Distribution of Quadrupolar Fiber Coil
by Zhongxing Gao, Yonggang Zhang and Yunhao Zhang
Sensors 2016, 16(7), 1131; https://doi.org/10.3390/s16071131 - 21 Jul 2016
Cited by 11 | Viewed by 5773
Abstract
Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environment, especially in vibrational environment, is necessary for its practical applications. This paper presents a mathematical model for IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational [...] Read more.
Improving the performance of the interferometric fiber optic gyroscope (IFOG) in harsh environments, especially vibrational environments, is necessary for its practical applications. This paper presents a mathematical model for the IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational procedures are mainly based on the strain distribution of the quadrupolar fiber coil measured by a stress analyzer. The definition of the asymmetry of strain distribution (ASD) is given in the paper to evaluate the winding quality of the coil. The established model reveals that a high ASD and the variable fiber elastic modulus in large-strain situations are the two dominant reasons that give rise to a nonreciprocity phase shift in the IFOG under vibration. Furthermore, theoretical analysis and computational results indicate that the vibration errors of both open-loop and closed-loop IFOGs increase with increasing vibrational amplitude, vibrational frequency and ASD. Finally, an estimation of vibration-induced IFOG errors in aircraft is performed according to the proposed model. Our work is helpful in designing IFOG coils to achieve better anti-vibration performance. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Schematic diagram of digital closed-loop interferometric fiber optic gyroscope (IFOG).</p>
Full article ">Figure 2
<p>Simplified model of digital closed-loop IFOG.</p>
Full article ">Figure 3
<p>Normalized Bode diagram (<b>a</b>) Frequency-amplitude characteristic curve; and (<b>b</b>) Frequency-phase characteristic curve.</p>
Full article ">Figure 4
<p>Measurement results of the fiber mechanical properties (<b>a</b>) The tensile stress versus the tensile strain; and (<b>b</b>) the elastic modulus versus the tensile strain.</p>
Full article ">Figure 5
<p>Diagram of the mechanical vibration applied to the fiber coil. The vibration direction is along the <span class="html-italic">z</span>-axis and perpendicular to the fiber coil.</p>
Full article ">Figure 6
<p>Dynamic model for each fiber loop under vibration.</p>
Full article ">Figure 7
<p>Factors that influence IFOG vibration error.</p>
Full article ">Figure 8
<p>Standard representation of the quadrupolar (QAD) winding pattern.</p>
Full article ">Figure 9
<p>Strain distribution of the QAD fiber coil.</p>
Full article ">Figure 10
<p>Standard deviation of vibration errors for open-loop IFOG (<b>a</b>) Open-loop errors versus amplitude; (<b>b</b>) Open-loop errors versus frequency; (<b>c</b>) Open-loop errors versus ASD.</p>
Full article ">Figure 11
<p>Standard deviation of vibration errors for closed-loop IFOG (<b>a</b>) Closed-loop errors versus amplitude; (<b>b</b>) Closed-loop errors versus frequency; (<b>c</b>) Closed-loop errors versus ASD.</p>
Full article ">Figure 12
<p>PSD of vibration in F-15B aircraft and the corresponding standard deviation of errors in IFOG (<b>a</b>) PSD of vibration; and (<b>b</b>) IFOG errors versus PSD.</p>
Full article ">
948 KiB  
Article
Beamforming Based Full-Duplex for Millimeter-Wave Communication
by Xiao Liu, Zhenyu Xiao, Lin Bai, Jinho Choi, Pengfei Xia and Xiang-Gen Xia
Sensors 2016, 16(7), 1130; https://doi.org/10.3390/s16071130 - 21 Jul 2016
Cited by 40 | Viewed by 7828
Abstract
In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to [...] Read more.
In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. Full article
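The zero-forcing ingredient of such schemes can be sketched for fixed receive AWVs: the transmit AWV is matched to the effective desired channel and then projected onto the orthogonal complement of the effective self-interference channel. This one-shot projection is a simplification for illustration, not the iterative ZF-Max-Power algorithm or the closed-form MMSE/MRT solutions of the paper, and the channel realizations are random placeholders.

import numpy as np

def zf_tx_awv(h_desired, h_si, w_rx_remote, w_rx_local):
    # Match the Tx AWV to the effective desired channel, then null the effective SI channel.
    g_d = w_rx_remote.conj() @ h_desired          # effective desired row (1 x n_tx)
    g_si = w_rx_local.conj() @ h_si               # effective self-interference row (1 x n_tx)
    u = g_si.conj() / np.linalg.norm(g_si)        # unit vector along the SI direction
    w = g_d.conj() - u * (u.conj() @ g_d.conj())  # remove the SI component
    return w / np.linalg.norm(w)

rng = np.random.default_rng(2)
n_tx = n_rx = 16
h_d = rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))
h_si = rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))
w_rx = np.ones(n_rx) / np.sqrt(n_rx)              # hypothetical fixed receive AWVs
w_tx = zf_tx_awv(h_d, h_si, w_rx, w_rx)
print(abs(w_rx.conj() @ h_si @ w_tx))             # ~0: residual self-interference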
(This article belongs to the Special Issue Millimeter Wave Wireless Communications and Networks)
Show Figures

Figure 1

Figure 1
<p>Illustration of the FD mmWave communication system.</p>
Full article ">Figure 2
<p>The transmit and receive antenna arrays of a node.</p>
Full article ">Figure 3
<p>JAR and convergence performances of ZF-Max-Power with random initial transmit AWVs (<b>Left</b>: LOS channel, <b>Right</b>: NLOS channel). ε<sub>11</sub> = ε<sub>22</sub> = 40 dB. For LOS channel, ε<sub>12</sub> = ε<sub>21</sub> = 20 dB, while for NLOS channel, ε<sub>12</sub> = ε<sub>21</sub> = 10 dB. For the case of separate arrays, <span class="html-italic">d</span>/λ = 1 and ω = π/6 rad, while for the case of sharing the same array, <span class="html-italic">d</span>/λ = 0 and ω = 0 rad.</p>
Full article ">Figure 4
<p>JAR performance of the involved schemes with respect to varying <span class="html-italic">ω</span> under LOS (<b>Left</b>) and NLOS (<b>Right</b>) channels in the case of separate arrays. ε<sub>11</sub> = ε<sub>22</sub> = 40 dB, ε<sub>12</sub> = ε<sub>21</sub> = 10 dB, <span class="html-italic">d</span>/λ = 2. For LB-MMSE, the JAR with <span class="html-italic">d</span>/λ = 10 is also plotted.</p>
Full article ">Figure 5
<p>JAR performance of the involved schemes with respect to varying <span class="html-italic">d</span> under LOS channel in the case of separate arrays. ω = π rad. In the (<b>Left</b>) hand figure SI is assumed fixed, i.e., ε<sub>11</sub> = ε<sub>22</sub> = 40 dB, ε<sub>12</sub> = ε<sub>21</sub> = 10 dB; while in the (<b>Right</b>) hand figure SI varies with <span class="html-italic">d</span>/λ, i.e., ε<sub>11</sub> = ε<sub>22</sub> = 60 − 20log<sub>10</sub>(<span class="html-italic">d</span>/λ) dB, ε<sub>12</sub> = ε<sub>21</sub> = 10 dB.</p>
Full article ">Figure 6
<p>JAR comparison between different array settings (separate arrays versus the same array) under LOS (<b>Left</b>) and NLOS (<b>Right</b>) channels with varying SI. ε<sub>12</sub> = ε<sub>21</sub> = 10 dB. For the case of separate arrays, ω = 0.6π rad, <span class="html-italic">d</span>/λ = 1.</p>
Full article ">Figure 7
<p>Effects of channel estimation errors on the proposed schemes with separate arrays under LOS (<b>Left</b>) and NLOS (<b>Right</b>) channels. ε<sub>12</sub> = ε<sub>21</sub> = 10 dB, ε<sub>11</sub> = ε<sub>22</sub> = 40 dB, ω = π rad, <span class="html-italic">d</span>/λ = 1.</p>
Full article ">Figure 8
<p>Effects of AWV error and EVM error on the JAR performance of ZF-Max-Power under LOS (<b>Left</b>) and NLOS (<b>Right</b>) channels. ε<sub>12</sub> = ε<sub>21</sub> = 10 dB, ε<sub>11</sub> = ε<sub>22</sub> = 50 dB, ω = 0.8π rad, <span class="html-italic">d</span>/λ = 1.</p>
Full article ">
2368 KiB  
Article
Efficient Preamble Design Technique for Millimeter-Wave Cellular Systems with Beamforming
by Dae Geun Han, Yeong Jun Kim and Yong Soo Cho
Sensors 2016, 16(7), 1129; https://doi.org/10.3390/s16071129 - 21 Jul 2016
Cited by 2 | Viewed by 5743
Abstract
The processing time for beam training in millimeter-wave (mmWave) cellular systems can be significantly reduced by a code division multiplexing (CDM)-based technique, where multiple beams are transmitted simultaneously with their corresponding Tx beam IDs (BIDs) in the preamble. However, mmWave cellular systems with [...] Read more.
The processing time for beam training in millimeter-wave (mmWave) cellular systems can be significantly reduced by a code division multiplexing (CDM)-based technique, where multiple beams are transmitted simultaneously with their corresponding Tx beam IDs (BIDs) in the preamble. However, mmWave cellular systems with CDM-based preambles require a large number of cell IDs (CIDs) and BIDs, and a high computational complexity for CID and BID (CBID) searches. In this paper, a new preamble design technique that can increase the number of CBIDs significantly is proposed, using a preamble sequence constructed by a combination of two Zadoff-Chu (ZC) sequences. An efficient technique for the CBID detection is also described for the proposed preamble. It is shown by simulations using a simple model of an mmWave cellular system that the proposed technique can obtain a significant reduction in the complexity of the CBID detection without a noticeable performance degradation, compared to the previous technique. Full article
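For reference, a root-u Zadoff-Chu sequence of odd length has constant amplitude and an ideal cyclic autocorrelation, which is what makes ZC sequences attractive building blocks for preambles. The element-wise combination and the cyclic shift below are hypothetical placeholders only, since the abstract does not specify how the two ZC sequences are combined in the proposed design.

import numpy as np

def zadoff_chu(root, length):
    # Root-u ZC sequence of odd length N: x[n] = exp(-j*pi*u*n*(n+1)/N).
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

zc_a = zadoff_chu(25, 839)
zc_b = zadoff_chu(29, 839)
combined = zc_a * np.roll(zc_b, 17)   # hypothetical combination of two ZC sequences

# Ideal cyclic autocorrelation of a single ZC sequence: a peak only at zero lag.
corr = np.abs(np.fft.ifft(np.fft.fft(zc_a) * np.conj(np.fft.fft(zc_a))))
print(np.round(corr[:3], 3))          # peak of 839 at lag 0, ~0 elsewhere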
(This article belongs to the Special Issue Millimeter Wave Wireless Communications and Networks)
Show Figures

Figure 1

Figure 1
<p>Concept of the preamble generation in the proposed technique.</p>
Full article ">Figure 2
<p>Example of an mmWave cellular system.</p>
Full article ">Figure 3
<p>Preamble structure in the proposed technique.</p>
Full article ">Figure 4
<p>Correlation properties of the proposed preamble depending on the value of <span class="html-italic">N</span><sub>r</sub>: (<b>a</b>) Type-1; (<b>b</b>) Type-2.</p>
Full article ">Figure 5
<p>Success probability of the CBID detection in one-cell and two-cell environments: (<b>a</b>) 8 × 8; (<b>b</b>) 8 × 1; (<b>c</b>) 8 × 1.</p>
Full article ">Figure 6
<p>BER performance in one-cell and two-cell environments.</p>
Full article ">Figure 7
<p>Number of complex multiplications required for the CBID detection when <span class="html-italic">N</span><sub>C</sub> varies.</p>
Full article ">
2082 KiB  
Article
Generalized Theory and Decoupled Evaluation Criteria for Unmatched Despreading of Modernized GNSS Signals
by Jiayi Zhang, Zheng Yao and Mingquan Lu
Sensors 2016, 16(7), 1128; https://doi.org/10.3390/s16071128 - 20 Jul 2016
Cited by 9 | Viewed by 4939
Abstract
In order to provide better navigation service for a wide range of applications, modernized global navigation satellite systems (GNSS) employ increasingly advanced and complicated techniques in the modulation and multiplexing of signals. This trend correspondingly increases the complexity of signal despreading at the receiver [...] Read more.
In order to provide better navigation service for a wide range of applications, modernized global navigation satellite systems (GNSS) employ increasingly advanced and complicated techniques in the modulation and multiplexing of signals. This trend correspondingly increases the complexity of signal despreading at the receiver when matched receiving is used. Considering the numerous low-end receivers that can hardly afford such receiving complexity, it is feasible to apply receiving strategies that use simplified forms of the local despreading signal, which is termed unmatched despreading. However, the mismatch between the local signal and the received signal causes a performance loss in code tracking, which must be considered in theoretical methods for evaluating signals. In this context, we generalize the theoretical signal evaluation model to unmatched receiving. A series of evaluation criteria are then proposed that are decoupled from unrelated influencing factors and concentrate on the key factors related to the signal and its receiving, thus better revealing the inherent performance of signals. The proposed evaluation criteria are used to study two GNSS signals, from which constructive guidance is derived for receiver and signal designers. Full article
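The correlation output SNIR loss discussed in the figures below can be illustrated with a toy simulation: an MBOC-style signal, in which a small fraction of code chips carries a BOC(6,1) subcarrier, is despread with a simplified BOC(1,1)-only local replica. All parameters (samples per chip, slot positions, code length) are illustrative assumptions and not taken from the paper:

```python
import numpy as np

def subcarrier(kind, spc):
    # One code chip of square-wave subcarrier sampled at `spc` samples per chip.
    if kind == "boc11":   # BOC(1,1): one square-wave cycle per chip
        return np.repeat([1.0, -1.0], spc // 2)
    if kind == "boc61":   # BOC(6,1): six square-wave cycles per chip
        return np.tile(np.repeat([1.0, -1.0], spc // 12), 6)
    raise ValueError(kind)

spc = 24                                   # samples per chip (divisible by 12)
n_chips = 33 * 100
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], n_chips)    # random spreading code chips

# TMBOC-style time multiplex: 4 out of every 33 chips carry BOC(6,1), the rest BOC(1,1).
# The slot positions below are illustrative; the real pattern is fixed by the signal ICD.
boc61_slots = {0, 4, 6, 29}
rx = np.concatenate([code[k] * subcarrier("boc61" if k % 33 in boc61_slots else "boc11", spc)
                     for k in range(n_chips)])

matched = rx.copy()                                             # matched local replica
unmatched = np.concatenate([code[k] * subcarrier("boc11", spc)  # BOC(1,1)-only replica
                            for k in range(n_chips)])

def corr_snr(local, signal):
    # Post-correlation SNR is proportional to |<local, signal>|^2 / ||local||^2.
    return abs(np.dot(local, signal)) ** 2 / np.dot(local, local)

loss_db = 10 * np.log10(corr_snr(unmatched, rx) / corr_snr(matched, rx))
print(f"correlation output SNR loss of BOC(1,1)-like unmatched receiving: {loss_db:.2f} dB")
```

For a 4-of-33 time multiplex the expected loss is 20·log10(29/33), about −1.1 dB, which the toy simulation reproduces; the paper's criteria generalize this kind of comparison to code tracking and interference robustness as well.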
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Block Diagram of Non-coherent Code Tracking Loop, where the discriminator model is early minus late power.</p>
Full article ">Figure 2
<p>The correlation output SNIR loss of ACE-BOC(15,10,[1,3,1,3]), where the receiving central frequency of matched receiving and IMR is at <span class="html-italic">f</span><sub>0</sub>, and the central frequency of BLR is at <span class="html-italic">f</span><sub>0</sub> − <span class="html-italic">f<sub>sc</sub></span>. The IMR and BLR receive the component <span class="html-italic">s</span><sub>LQ</sub>, whose power ratio among the useful transmitted power is 3/8.</p>
Full article ">Figure 3
<p>Equivalent Gabor bandwidth of ACE-BOC, where the receiving central frequency of matched receiving and IMR is at <span class="html-italic">f</span><sub>0</sub>, and the central frequency of BLR is at <span class="html-italic">f</span><sub>0</sub> − <span class="html-italic">f<sub>sc</sub></span>. The IMR and BLR receive the component <span class="html-italic">s</span><sub>LQ</sub>, whose power ratio among the useful transmitted power is 3/8.</p>
Full article ">Figure 4
<p>Anti-interference rate of ACE-BOC under three kinds of interference: narrowband interference, bandlimited interference and matched spectrum interference. Receiving strategies are IMR and BLR for component <span class="html-italic">s</span><sub>LQ</sub>. Larger value means more vulnerability to such interference.</p>
Full article ">Figure 5
<p>Average range error envelope of ACE-BOC(15,10,[1,1,3,3]), under matched receiving, IMR and BLR. The double-sided receiving bandwidth is 52 MHz. MDR is −5 dB.</p>
Full article ">Figure 6
<p>The correlation output SNIR loss of multiple implementations of MBOC(6,1,1/11) signals, under matched receiving and BOC<sub>11</sub>-like unmatched receiving. It can be used to characterize the acquisition performance of MBOCs.</p>
Full article ">Figure 7
<p>Equivalent Gabor bandwidth of implementations of MBOC(6,1,1/11), under matched receiving and BOC<sub>11</sub>-like unmatched receiving, representing the code tracking performance. (<b>a</b>) Equivalent Gabor bandwidth; (<b>b</b>) Additional equivalent Gabor bandwidth gain, compared with matched QMBOC and TMBOC, which is achieved by Equation (31).</p>
Full article ">Figure 8
<p>Anti-interference rate of QMBOC(6,1,1/11) with respect to the receiving bandwidth, under matched receiving and BOC<sub>11</sub>-like unmatched receiving. Three types of interference are considered, including narrowband interference, bandlimited Gaussian interference and matched spectrum interference; the parameter settings for this figure are provided in <a href="#sensors-16-01128-t002" class="html-table">Table 2</a>.</p>
Full article ">Figure 9
<p>Average range error envelope of MBOC(6,1,1/11), under matched receiving and BOC<sub>11</sub>-like unmatched receiving. (<b>a</b>) Double-sided receiving bandwidth is 40 MHz; (<b>b</b>) Double-sided receiving bandwidth is 10 MHz. MDR is −5 dB.</p>
Full article ">
1583 KiB  
Article
Spectrum Handoffs Based on Preemptive Repeat Priority Queue in Cognitive Radio Networks
by Xiaolong Yang, Xuezhi Tan, Liang Ye and Lin Ma
Sensors 2016, 16(7), 1127; https://doi.org/10.3390/s16071127 - 20 Jul 2016
Cited by 9 | Viewed by 5385
Abstract
Cognitive radio can significantly improve the spectrum efficiency, and spectrum handoff is considered as an important functionality to guarantee the quality of service (QoS) of primary users (PUs) and the continuity of data transmission of secondary users (SUs). In this paper, we propose [...] Read more.
Cognitive radio can significantly improve the spectrum efficiency, and spectrum handoff is considered as an important functionality to guarantee the quality of service (QoS) of primary users (PUs) and the continuity of data transmission of secondary users (SUs). In this paper, we propose an analytical framework based on a preemptive repeat identical (PRI) M/G/1 queuing network model to characterize spectrum handoff behaviors with general service time distributions of both primary and secondary connections, multiple interruptions and transmission delay resulting from the appearance of primary connections. We then derive closed-form expressions for the extended data delivery time and the system sojourn time in both the staying and changing scenarios. In addition, based on the analysis of spectrum handoff behaviors resulting from multiple interruptions caused by the appearance of primary connections, we investigate the traffic-adaptive policy, by which the considered SU optimally adjusts its spectrum handoff policy. Moreover, we investigate the admissible region and provide a reference for designing the admission control rule for arriving secondary connection requests. Finally, simulation results verify that the proposed analytical framework is reasonable and can provide a reference for executing the optimal spectrum handoff strategy and designing the admission control rule for the SU in cognitive radio networks. Full article
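A rough feel for the preemptive-repeat-identical behaviour can be obtained from a small Monte-Carlo sketch: a secondary connection needs a fixed number of slots, is preempted whenever a primary connection appears, and must restart its identical service requirement from scratch (staying policy, single channel). The slotted Bernoulli/geometric traffic model and all numbers below are simplifying assumptions, not the paper's general M/G/1 setting:

```python
import random

def simulate_extended_delivery(lam_p=0.02, ex_p=20, ex_s=10, trials=20000, seed=1):
    """Monte-Carlo estimate of an SU's extended data delivery time under a
    preemptive-repeat-identical discipline on a single channel (staying policy).
    PU connections arrive as a per-slot Bernoulli process (rate lam_p) and hold
    the channel for a geometric number of slots with mean ex_p; the SU needs
    ex_s slots and, being repeat-identical, restarts the same requirement after
    every interruption. Handoff/queueing delays are ignored for brevity."""
    random.seed(seed)
    total = 0
    for _ in range(trials):
        t = 0                      # elapsed slots (extended delivery time)
        done = 0                   # SU slots completed since the last (re)start
        while done < ex_s:
            if random.random() < lam_p:              # a PU appears: SU is preempted
                busy = 1
                while random.random() > 1.0 / ex_p:  # geometric PU holding time
                    busy += 1
                t += busy
                done = 0                             # repeat identical: start over
            else:
                t += 1
                done += 1
        total += t
    return total / trials

print("mean extended data delivery time (slots):", simulate_extended_delivery())
```

Such a simulation only mimics the qualitative effect of multiple interruptions; the paper's closed-form expressions cover general service distributions and both the staying and changing policies.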
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1

Figure 1
<p>The behaviors of a secondary user (SU) under the staying policy in the preemptive repeat identical (PRI) M/G/1 queuing network model.</p>
Full article ">Figure 2
<p>The behaviors of a secondary user (SU) under the changing policy in the preemptive repeat identical (PRI) M/G/1 queuing network model.</p>
Full article ">Figure 3
<p>Effects of the channel busy probability ρ<sub>p</sub> resulting from primary connections and the average service time E[X<sub>s</sub>] of secondary connections on the probability p<sub>s</sub>, where E[X<sub>p</sub>] = 20 (slots/arrival), λ<sub>s</sub> = 0.01 (arrivals/slot), λ<sub>p</sub> = 0.02 (arrivals/slot) and λ<sub>s</sub>E[X<sub>s</sub>] + ρ<sub>p</sub> &lt; 1.</p>
Full article ">Figure 4
<p>Effects of the upper-truncated Pareto distribution and the exponential distribution for primary connections when the staying policy is adopted in the preemptive repeat identical (PRI) M/G/1 and the preemptive resume priority (PRP) M/G/1 queuing network models.</p>
Full article ">Figure 5
<p>Effects of the upper-truncated Pareto distribution and the exponential distribution for primary connections when the changing policy is adopted in the preemptive repeat identical (PRI) M/G/1 and the preemptive resume priority (PRP) M/G/1 queuing network models, where λ<sub>s</sub> = 0.01 (arrivals/slot), E[X<sub>s</sub>] = 10 (slots/arrival) and t<sub>s</sub> = 1 (slot).</p>
Full article ">Figure 6
<p>Comparison of the traffic-adaptive policy of the considered secondary user (SU) in the preemptive repeat identical (PRI) M/G/1 and the preemptive resume priority (PRP) M/G/1 queuing network models, where the primary connection follows the exponential distribution, λ<sub>s</sub> = 0.01 (arrivals/slot), E[X<sub>s</sub>] = 10 (slots/arrival), 0 ≤ ρ<sub>p</sub> &lt; 0.8, t<sub>s</sub> = 1 (slot) and λ<sub>s</sub>E[X<sub>s</sub>] + ρ<sub>p</sub> &lt; 1.</p>
Full article ">Figure 7
<p>The effects of the secondary connection average service time on the average extended data delivery time in the preemptive repeat identical (PRI) M/G/1 queuing network, where the primary connection follows the exponential distribution, λ<sub>s</sub> = 0.01 (arrivals/slot), λ<sub>p</sub> = 0.02 (arrivals/slot), 0 ≤ ρ<sub>p</sub> &lt; 0.75, t<sub>s</sub> = 1 (slot), E[X<sub>p</sub>] = 20 (slots/arrival) and λ<sub>s</sub>E[X<sub>s</sub>] + ρ<sub>p</sub> &lt; 1.</p>
Full article ">Figure 8
<p>The admissible region of a cognitive radio (CR) network when the service time of the primary connection follows the upper-truncated Pareto distribution: (<b>a</b>) the maximum allowable delay is set to be four slots; and (<b>b</b>) the maximum allowable delay is set to be eight slots.</p>
Full article ">Figure 9
<p>The admissible region of a cognitive radio (CR) network when the service time of the primary connection follows the exponential distribution: (<b>a</b>) the maximum allowable delay is set to be four slots; and (<b>b</b>) the maximum allowable delay is set to be eight slots.</p>
Full article ">
1318 KiB  
Article
The Evaluation of Physical Stillness with Wearable Chest and Arm Accelerometer during Chan Ding Practice
by Kang-Ming Chang, Yu-Teng Chun, Sih-Huei Chen, Luo Lu, Hsiao-Ting Jannis Su, Hung-Meng Liang, Jayasree Santhosh, Congo Tak-Shing Ching and Shing-Hong Liu
Sensors 2016, 16(7), 1126; https://doi.org/10.3390/s16071126 - 20 Jul 2016
Cited by 6 | Viewed by 5691
Abstract
Chan Ding training is beneficial to health and emotional wellbeing. More and more people have taken up this practice over the past few years. A major training method of Chan Ding is to focus on the ten Mailuns, i.e., energy points, and to [...] Read more.
Chan Ding training is beneficial to health and emotional wellbeing. More and more people have taken up this practice over the past few years. A major training method of Chan Ding is to focus on the ten Mailuns, i.e., energy points, and to maintain physical stillness. In this article, wireless wearable accelerometers were used to detect physical stillness, and a physical stillness index (PSI) was created. Ninety college students participated in this study. First, accelerometers worn on the arms and chest were examined. The results showed that the PSI values on the arms were higher than those on the chest when participants moved their bodies in three different ways (left-right, anterior-posterior, and hand movements) with natural breathing. The participants were then divided into three groups to practice Chan Ding for approximately thirty minutes. Participants without any Chan Ding experience were in Group I, participants with one year of Chan Ding experience were in Group II, and participants with over three years of experience were in Group III. The Chinese Happiness Inventory (CHI) was also conducted. The results showed that the PSI values of the three groups measured during 20–30 min were 0.123 ± 0.155, 0.012 ± 0.013, and 0.001 ± 0.0003, respectively (p < 0.001 ***). The averaged CHI scores of the three groups were 10.13, 17.17, and 25.53, respectively (p < 0.001 ***). The correlation coefficients between PSI and CHI of the three groups were −0.440, −0.369, and −0.537, respectively (p < 0.01 **). The PSI value and the wearable accelerometers that are presently available on the market could be used to evaluate the quality of physical stillness of participants during Chan Ding practice. Full article
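The exact definition of the PSI is given in the article itself; as a loosely analogous illustration only, a stillness score can be computed from the variance of the acceleration magnitude over fixed windows, which is the kind of quantity a wearable accelerometer provides. The function below is an assumed proxy, not the authors' formula:

```python
import numpy as np

def stillness_index(acc, fs=50, window_s=10):
    """Illustrative stillness score: mean variance of the acceleration magnitude
    over fixed windows. `acc` is an (N, 3) array of triaxial accelerometer samples
    in g; larger values mean more movement. This is only a stand-in for the
    paper's PSI, whose exact definition is given in the article."""
    mag = np.linalg.norm(acc, axis=1)          # magnitude removes orientation dependence
    win = fs * window_s
    n_win = len(mag) // win
    vars_ = [np.var(mag[i * win:(i + 1) * win]) for i in range(n_win)]
    return float(np.mean(vars_))

# Synthetic example: 30 min of near-still data with small sensor noise.
fs = 50
t = np.arange(30 * 60 * fs) / fs
acc = np.column_stack([np.zeros_like(t), np.zeros_like(t), np.ones_like(t)])  # gravity on z
acc += 0.002 * np.random.randn(*acc.shape)                                    # sensor noise
print("stillness score:", stillness_index(acc, fs))
```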
(This article belongs to the Special Issue Wearable Biomedical Sensors)
Show Figures

Figure 1

Figure 1
<p>Flowchart of the first experiment.</p>
Full article ">Figure 2
<p>Flowchart of the second experiment.</p>
Full article ">
17207 KiB  
Article
Uncertainty Comparison of Visual Sensing in Adverse Weather Conditions
by Shi-Wei Lo, Jyh-Horng Wu, Lun-Chi Chen, Chien-Hao Tseng, Fang-Pang Lin and Ching-Han Hsu
Sensors 2016, 16(7), 1125; https://doi.org/10.3390/s16071125 - 20 Jul 2016
Cited by 6 | Viewed by 6275
Abstract
This paper focuses on flood-region detection using monitoring images. However, adverse weather affects the outcome of image segmentation methods. In this paper, we present an experimental comparison of an outdoor visual sensing system using region-growing methods with two different growing rules—namely, GrowCut and [...] Read more.
This paper focuses on flood-region detection using monitoring images. However, adverse weather affects the outcome of image segmentation methods. In this paper, we present an experimental comparison of an outdoor visual sensing system using region-growing methods with two different growing rules—namely, GrowCut and RegGro. For each growing rule, several tests on adverse weather and lens-stained scenes were performed, and the influence of the different weather conditions on the outdoor visual sensing system was analyzed for each growing rule. Furthermore, the experimental errors and uncertainties obtained with the growing rules were compared. The segmentation accuracy of flood regions yielded by the GrowCut, RegGro, and hybrid methods was 75%, 85%, and 87.7%, respectively. Full article
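The RegGro-style growing rule (grow to 8-connected neighbors whose intensity stays within a fixed window around the region mean, as described in the Figure 2 caption) can be sketched in a few lines; the GrowCut cellular-automaton rule is not reproduced here. The threshold and image values below are illustrative:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, dist=0.065):
    """Grow a region from `seed` (row, col) over 8-connected neighbors whose
    intensity falls within +/- `dist` of the region's running mean intensity.
    `img` is a 2-D float array scaled to [0, 1]. A simplified RegGro-style rule."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    mean, count = float(img[seed]), 1
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                    if abs(img[nr, nc] - mean) <= dist:
                        mask[nr, nc] = True
                        q.append((nr, nc))
                        count += 1                      # update the running region mean
                        mean += (img[nr, nc] - mean) / count
    return mask

# Toy example: a darker "flood" patch inside a brighter background.
img = np.full((100, 100), 0.8)
img[40:80, 10:90] = 0.35
img += 0.01 * np.random.randn(100, 100)
flood = region_grow(img, seed=(60, 50))
print("flood pixels:", int(flood.sum()))
```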
(This article belongs to the Special Issue Imaging: Sensors and Technologies)
Show Figures

Figure 1

Figure 1
<p>Two stain types: (<b>a</b>) a small stain and (<b>b</b>) a flat stain overlapping the camera lens. The effect of stains when applied to flood detection for (<b>c</b>) a stained outdoor image and (<b>d</b>) the concave region affected by stains on an image. Image 295’s flood region segmented with RegGro is represented by a red contour. The blue contour represents the ground truth, and the green dot is the location of the seed used with RegGro. (Note: The Traditional Chinese in the headers of (<b>a</b>) and (<b>b</b>–<b>d</b>) indicates the locations at the Dianbao River and the Changed Bridge, respectively.)</p>
Full article ">Figure 2
<p>(<b>a</b>) Region-growing process from q to 8-connected neighbors p that satisfy the δ function; and (<b>b</b>) threshold for the window of intensity. The center of the window is the value of Mean_Intensity, and the window size is ± the intensity distance (±0.065).</p>
Full article ">Figure 3
<p>(<b>a</b>) Region-growing process from cell q to its neighbors or reverse-growing from its neighbors, with the δ function; and (<b>b</b>) the strength threshold for the growing rule. The region grows when θ<sub>q</sub> &gt; θ<sub>p</sub>; otherwise the region reverse-grows.</p>
Full article ">Figure 4
<p>Flowchart of the training model and the classification of input images. During the training process, the training images are labeled as fog, stained, or normal. The hybrid RgGc method classifies input images and then pipes them to different growing methods. (Note: The Traditional Chinese in the header of all images indicates the location at the Changed Bridge.)</p>
Full article ">Figure 5
<p>Image set and weather conditions. The image set was captured in adverse weather conditions. The selected sample images show fog (<b>a</b>) and stained (<b>b</b>) patterns. (Note: The Traditional Chinese in the header of all images indicates the location at the Changed Bridge.)</p>
Full article ">Figure 6
<p>Accuracy determined according to the ground truth. (<b>a</b>) the resulting region pixels of the algorithm in the image plane are classified as: rT, matching the ground truth; rO, over-segmented; and rU, under-segmented; (<b>b</b>) the outcome segments produced with the algorithm and the ground truth.</p>
Full article ">Figure 7
<p>Part of the ground truth of the image set. The red boundaries are manually-labeled segments of flood regions. (Note: The Traditional Chinese in the header of all images indicates the location at the Changed Bridge.)</p>
Full article ">Figure 8
<p>RegGro accuracy. The accuracy was determined according to the ground truth. Each horizontal bar shows the accuracy with a different intensity distance, ranging from 0.025 (RegGro_025) to 0.15 (RegGro_150). The highest accuracy was 85.7% with an intensity distance of 0.065.</p>
Full article ">Figure 9
<p>Segmentation success or failure with RegGro. True (0) indicates success, and False (1) indicates failure. Most false detections occurred in the first 40 images with heavy rain and fog.</p>
Full article ">Figure 10
<p>Part of RegGro’s results, with red segments from the growing methods. The blue line is the ground truth, and the green marker is the initial seed for the growing methods. There were few flood segmentation failures in heavy rain and fog. Some failures occurred with raindrop stains on the CCTV screen. (Note: The Traditional Chinese in the header of all images indicates the location at the Changed Bridge.)</p>
Full article ">Figure 11
<p>GrowCut accuracy. The accuracy was determined according to the ground truth. Each horizontal bar shows the accuracy with a different filter: viz., the mean, imadjust, histeqadapt, histeq, and filter-free GrowCut. The highest accuracy is 75.2% with the mean filter and both 18 × 18 and 16 × 16 masks.</p>
Full article ">Figure 12
<p>Segmentation success or failure with GrowCut. True (0) indicates success, and False (1) indicates failure. Most false detections occurred between Images 0–40 and Images 70–100 with heavy rain and fog. However, GrowCut was more robust to raindrop stains on the CCTV screen (after Image 100).</p>
Full article ">Figure 13
<p>Part of GrowCut’s results, with red segments from the growing methods. The blue line is the ground truth, and the green marker is the initial seed for the growing methods. Most flood segmentation failures occurred during heavy rain and fog. GrowCut is robust to raindrop stains on the CCTV screen. (Note: The Traditional Chinese in the header of all images indicates the location at the Changed Bridge.)</p>
Full article ">Figure 14
<p>Hybrid RgGc accuracy. The accuracy was determined according to the ground truth. Each horizontal bar shows the accuracy with a different growing method. Outperforming both RegGro and GrowCut, the hybrid RgGc was 87.7% accurate.</p>
Full article ">Figure 15
<p>Segmentation success or failure with the hybrid RgGc. True (0) indicates success and False (1) indicates failure. Most false detections occurred in the first 40 images with heavy rain and fog. Both methods failed to segment the flood regions as well as the hybrid RgGc.</p>
Full article ">Figure 16
<p>Hybrid RgGc results with red segments from the growing methods. The blue line is the ground truth, and the green marker is the initial seed for the growing methods. This figure only shows Images 0, 35, 70, 105, 140, 175, 210, 245, and 280 from the image set, in order to clearly show the contours and the seeded marker. The text in the upper-right corner distinguishes between images processed with RegGro (Rg) and GrowCut (Gc). (Note: The Traditional Chinese in the header of all images indicates the location at the Changed Bridge.)</p>
Figure 16 Cont.">
Full article ">Figure 17
<p>Fog example. Fog and haze regress pixel intensity. (Note: The Traditional Chinese in the header of all images indicates the location at the Changed Bridge.)</p>
Full article ">Figure 18
<p>Comparison of the two algorithms with example images from the start (Image 152) and middle (Image 270). (<b>a</b>) GrowCut covering the stains (Image 152) and overestimating them (Image 270); (<b>b</b>) RegGro identifying the flood region (Image 152) and being affected by stains (Image 270). (Note: The Traditional Chinese in the header of all images indicates the location at the Changed Bridge.)</p>
Full article ">Figure 19
<p>Example of a failed identification. The RegGro region (in red) overlaps with the ground truth (in blue). (<b>a</b>) ground truth; (<b>b</b>) RegGro region; and (<b>c</b>) comparison of regions.</p>
Full article ">
9641 KiB  
Article
Facile Fabrication of a Gold Nanocluster-Based Membrane for the Detection of Hydrogen Peroxide
by Pu Zhang, Yi Wang and Yibing Yin
Sensors 2016, 16(7), 1124; https://doi.org/10.3390/s16071124 - 20 Jul 2016
Cited by 13 | Viewed by 7003
Abstract
In this work, we present a simple and rapid method to synthesize red luminescent gold nanoclusters (AuNCs) with high quantum yield (QY, ~16%), excellent photostability and biocompatibility. Next, we fabricated a solid membrane by loading the as-prepared AuNCs in an agar matrix. Different [...] Read more.
In this work, we present a simple and rapid method to synthesize red luminescent gold nanoclusters (AuNCs) with high quantum yield (QY, ~16%), excellent photostability and biocompatibility. Next, we fabricated a solid membrane by loading the as-prepared AuNCs in an agar matrix. Different from nanomaterials dispersed in solution, the AuNCs-based solid membrane has distinct advantages including convenience of transportation, while still maintaining strong red luminescence, and relatively long duration storage without aggregation. Taking hydrogen peroxide (H2O2) as a typical example, we then employed the AuNCs as a luminescent probe and investigated their sensing performance, either in solution phase or on a solid substrate. The detection of H2O2 could be achieved in wide concentration ranges over 805 nM–1.61 mM and 161 μM–19.32 mM in solution and on a solid membrane, respectively, with limits of detection (LOD) of 80 nM and 20 μM. Moreover, the AuNCs-based membrane could also be used for visual detection of H2O2 in the range of 0–3.22 mM. In view of the convenient synthesis route and attractive luminescent properties, the AuNCs-based membrane presented in this work is quite promising for applications such as optical sensing, fluorescent imaging, and photovoltaics. Full article
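The reported linear dependence of I<sub>0</sub>/I on the H<sub>2</sub>O<sub>2</sub> concentration suggests a Stern-Volmer-type calibration. The snippet below fits such a line to made-up calibration points and inverts it for an unknown sample; the numbers are purely illustrative and are not the paper's data:

```python
import numpy as np

# Hypothetical calibration data: H2O2 concentration (mM) vs. measured I0/I ratio.
conc = np.array([0.0, 0.2, 0.4, 0.8, 1.2, 1.6])          # mM (made-up values)
ratio = np.array([1.00, 1.09, 1.21, 1.41, 1.62, 1.81])   # I0/I (made-up values)

# Stern-Volmer-type model: I0/I = 1 + K * [H2O2]; least-squares fit through the origin.
K = float(conc @ (ratio - 1.0) / (conc @ conc))

def concentration_from_ratio(r):
    # Invert the calibration line to estimate [H2O2] from a measured I0/I.
    return (r - 1.0) / K

print(f"fitted K = {K:.3f} per mM")
print(f"sample with I0/I = 1.30 -> [H2O2] ~ {concentration_from_ratio(1.30):.2f} mM")
```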
(This article belongs to the Section Chemical Sensors)
Show Figures

Figure 1

Figure 1
<p>Photoluminescence of the as-prepared AuNCs in the present work. The excitation spectrum shows two peaks at 370 and 470 nm, respectively. The maximal emission peak is located at 620 nm. The inset shows the digital photos of the AuNCs solution under sunlight and 365-nm UV light, respectively.</p>
Full article ">Figure 2
<p>AFM image and the corresponding height measurement of the as-prepared AuNCs.</p>
Full article ">Figure 3
<p>(<b>A</b>) XPS spectra showing the binding energy of Au<sub>4f</sub> before and after the reaction with H<sub>2</sub>O<sub>2</sub>; (<b>B</b>) FT-IR spectra of the pure BSA and BSA-stabilized AuNCs, respectively.</p>
Full article ">Figure 4
<p>(<b>A</b>) Luminescence intensity of the as-prepared AuNCs at different pH. The parameters of <span class="html-italic">I</span><sub>0</sub> and <span class="html-italic">I</span> represent luminescence intensity of the original AuNCs as prepared and after adjusting the pH, respectively; (<b>B</b>) Photostability of the AuNCs under continuous irradiation (excitation wavelength: 370 nm) for 1 h.</p>
Full article ">Figure 5
<p>The effect of four different solvents on the luminescence wavelength of AuNCs, including water, ethylene glycol (EG), methanol, and <span class="html-italic">N,N</span>-dimethylformamide (DMF).</p>
Full article ">Figure 6
<p>The viabilities of HeLa cells after being incubated with different dosages of AuNCs in vitro for 24 h. All the data were collected by conducting three parallel experiments.</p>
Full article ">Figure 7
<p>Quantificational detection of H<sub>2</sub>O<sub>2</sub> in solution phase using the as-prepared AuNCs as a luminescent probe: (<b>A</b>) luminescence quenching of the AuNCs with the addition of different amounts of H<sub>2</sub>O<sub>2</sub> (the concentrations of H<sub>2</sub>O<sub>2</sub> are 0–4830 μM from top to bottom) in solution; (<b>B</b>) the values of <span class="html-italic">I</span><sub>0</sub>/<span class="html-italic">I</span> as a function of the concentration of H<sub>2</sub>O<sub>2</sub> in the range of 805 nM–1.61 mM in solution.</p>
Full article ">Figure 8
<p>Photographs showing the as-prepared AuNCs-involved membrane, which can be used as a solid probe for the detection of H<sub>2</sub>O<sub>2</sub>: (<b>A</b>) under sunlight; (<b>B</b>) under UV light (365 nm).</p>
Full article ">Figure 9
<p>Detection of H<sub>2</sub>O<sub>2</sub> using AuNCs-based solid membrane: (<b>A</b>) luminescence quenching of the AuNCs with the addition of different amounts of H<sub>2</sub>O<sub>2</sub> (the concentrations of H<sub>2</sub>O<sub>2</sub> are 0–19.32 mM from top to bottom) on AuNCs-based membrane; (<b>B</b>) the values of <span class="html-italic">I</span><sub>0</sub>/<span class="html-italic">I</span> as a function of the concentration of H<sub>2</sub>O<sub>2</sub>; (<b>C</b>) a photograph showing the visual detection of H<sub>2</sub>O<sub>2</sub> in the concentration range of 0–3.22 mM.</p>
Full article ">
4349 KiB  
Article
Novel Intersection Type Recognition for Autonomous Vehicles Using a Multi-Layer Laser Scanner
by Jhonghyun An, Baehoon Choi, Kwee-Bo Sim and Euntai Kim
Sensors 2016, 16(7), 1123; https://doi.org/10.3390/s16071123 - 20 Jul 2016
Cited by 9 | Viewed by 5417
Abstract
There are several types of intersections such as merge-roads, diverge-roads, plus-shape intersections and two types of T-shape junctions in urban roads. When an autonomous vehicle encounters new intersections, it is crucial to recognize the types of intersections for safe navigation. In this paper, [...] Read more.
There are several types of intersections such as merge-roads, diverge-roads, plus-shape intersections and two types of T-shape junctions in urban roads. When an autonomous vehicle encounters new intersections, it is crucial to recognize the types of intersections for safe navigation. In this paper, a novel intersection type recognition method is proposed for an autonomous vehicle using a multi-layer laser scanner. The proposed method consists of two steps: (1) static local coordinate occupancy grid map (SLOGM) building and (2) intersection classification. In the first step, the SLOGM is built relative to the local coordinate using the dynamic binary Bayes filter. In the second step, the SLOGM is used as an attribute for the classification. The proposed method is applied to a real-world environment and its validity is demonstrated through experimentation. Full article
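The per-cell update behind an occupancy grid map is the binary Bayes filter, usually run in log-odds form. The sketch below adds a simple forgetting factor as a stand-in for the dynamic behaviour the abstract refers to; the paper's dynamic binary Bayes filter may differ in detail:

```python
import numpy as np

def logodds(p):
    return np.log(p / (1.0 - p))

def update_cell(l_prev, p_meas, decay=0.9, l0=0.0):
    """One log-odds update of an occupancy grid cell.
    l_prev : previous log-odds of occupancy
    p_meas : inverse sensor model output p(occupied | z) for this cell
    decay  : forgetting factor pulling old evidence toward the prior (a simple
             stand-in for the paper's dynamic binary Bayes filter)
    l0     : prior log-odds (0.0 corresponds to p = 0.5)."""
    return decay * (l_prev - l0) + l0 + (logodds(p_meas) - l0)

# Example: a cell repeatedly observed as occupied, then observed as free.
l = 0.0
for p in [0.7, 0.7, 0.7, 0.3, 0.3]:
    l = update_cell(l, p)
    print(f"p(occupied) = {1.0 / (1.0 + np.exp(-l)):.3f}")
```

With the forgetting factor, stale evidence decays toward the prior, so cells vacated by moving objects recover toward "unknown" faster than with the standard static update.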
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>The illustration of an incomplete OGM update using the standard binary Bayes filter in the local coordinate system. (<b>a</b>) The local OGM, which is built at <span class="html-italic">t</span> − 1, p(m<sup>i</sup><sub>t−1</sub> = O | <b>u</b><sub>1:t−1</sub>, <b>z</b><sup>i</sup><sub>1:t−1</sub>); (<b>b</b>) The ego vehicle moves forward with two preceding vehicles at the same speed; (<b>c</b>) The local OGM is shifted downwards to give p(m<sup>i</sup><sub>t</sub> = O | <b>u</b><sub>1:t</sub>, <b>z</b><sup>i</sup><sub>1:t−1</sub>) using the vehicle’s odometry; (<b>d</b>) The new measurement (inverse) likelihood OGM p(m<sup>i</sup><sub>t</sub> = O | <b>z</b><sup>i</sup><sub>t</sub>); and (<b>e</b>) The updated OGM p(m<sub>1:t</sub> | <b>u</b><sub>1:t</sub>, <b>z</b><sub>1:t</sub>). The red circle region A’’ indicates the drawback of the static binary filter.</p>
Full article ">Figure 2
<p>(<b>a</b>) Result of static binary Bayes filters; (<b>b</b>) Result of dynamic binary Bayes filters; and (<b>c</b>) The real world image (highway).</p>
Full article ">Figure 3
<p>(<b>a</b>) The camera image superimposed with raw laser scanner data; (<b>b</b>) Raw laser scanner data. Layer 0 (<b>blue</b>), layer 1 (<b>red</b>), layer 2 (<b>green</b>), and layer 3 (<b>black</b>); and (<b>c</b>) The segmentations on the grid map. Different color means that different segments.</p>
Full article ">Figure 4
<p>The illustration of difference between <math display="inline"> <semantics> <mrow> <msubsup> <mstyle mathvariant="bold" mathsize="normal"> <mi>z</mi> </mstyle> <mi>t</mi> <mi>i</mi> </msubsup> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msubsup> <mover accent="true"> <mi mathvariant="bold">z</mi> <mo>^</mo> </mover> <mi>t</mi> <mi>j</mi> </msubsup> </mrow> </semantics> </math>. The <math display="inline"> <semantics> <mrow> <msubsup> <mstyle mathvariant="bold" mathsize="normal"> <mi>z</mi> </mstyle> <mi>t</mi> <mi>i</mi> </msubsup> </mrow> </semantics> </math> indicates the set of laser beams, and <math display="inline"> <semantics> <mrow> <msubsup> <mover accent="true"> <mi mathvariant="bold">z</mi> <mo>^</mo> </mover> <mi>t</mi> <mi>j</mi> </msubsup> </mrow> </semantics> </math> indicates the set of cells in the occupancy grid map which were hit by laser beams. The gray cells are the unknown region, and the set of dark cells is one segment, which is segmented as an independent object.</p>
Full article ">Figure 5
<p>(<b>a</b>) The grid map with dynamic object; (<b>b</b>) The trails of dynamic segment; and (<b>c</b>) The static grid map with dynamic object removed. The <b>blue</b> box means the dynamic object information.</p>
Full article ">Figure 6
<p>These are simplified shapes of the grid map represented as graphically; (<b>a</b>) Highway more than two lanes; (<b>b</b>) Merge-roads; (<b>c</b>) Diverge-roads; (<b>d</b>) Plus-shape intersections; (<b>e</b>) First type T-shape junctions; (<b>f</b>) Second type T-shape junctions.</p>
Full article ">Figure 7
<p>Vehicle equipped with a multi-layer laser scanner and a camera [<a href="#B18-sensors-16-01123" class="html-bibr">18</a>].</p>
Full article ">Figure 8
<p>Outline of the experiment.</p>
Full article ">Figure 9
<p>The results of intersections recognition. (<b>a</b>) Highway; (<b>b</b>) Merge-road; (<b>c</b>) Diverge-road; (<b>d</b>) Plus-shape intersections; (<b>e</b>) First type T-shape junctions; (<b>f</b>) Second type T-shape junctions; left side images are camera image with raw data calibration; right side images are static local coordinate occupancy grid map (SLOGM).</p>
Full article ">Figure 10
<p>The box plot for each class.</p>
Full article ">