
Sensors, Volume 18, Issue 8 (August 2018) – 341 articles

Cover Story: The ultimate frontier of EMG-based applications relies on the implementation of fully electrodeless front-end electronics. However, in recent decades, dry electrodes have provided only minimal improvements. A low-cost, electrodeless sensor based on a force-sensitive resistor (FSR), able to simultaneously measure muscle contraction and the mechanomyogram, is presented. The sensor is coupled to the skin through a rigid half sphere and, via a transimpedance amplifier, proved capable of consistently generating signals comparable to the EMG linear envelope (EMG-LE). Moreover, the sensor removes the need for high sampling rates, for noise- and artefact-rejection circuitry, and for the additional computational load of deriving the EMG-LE. The novel sensor provides a new option for improving prosthetic control or human–machine interface applications.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
17 pages, 7212 KiB  
Article
Device Management and Data Transport in IoT Networks Based on Visible Light Communication
by Cheol-Min Kim and Seok-Joo Koh
Sensors 2018, 18(8), 2741; https://doi.org/10.3390/s18082741 - 20 Aug 2018
Cited by 13 | Viewed by 6141
Abstract
LED-based Visible Light Communication (VLC) has been proposed as the IEEE 802.15.7 standard and is regarded as a new wireless access medium in the Internet-of-Things (IoT) environment. With this trend, many works have already sought to improve the performance of VLC. However, the effective integration of VLC services into IoT networks has not yet been sufficiently studied. In this paper, we propose a scheme for device management and data transport in IoT networks using VLC. Specifically, we discuss how to manage VLC transmitters and receivers and how to support VLC data transmission in IoT networks. The proposed scheme considers uni-directional VLC transmissions from a transmitter to receivers for the delivery of location-based VLC data. Backward transmission from VLC receivers is made by using a platform server and aggregation agents in the network. For validation and performance analysis, we implemented the proposed scheme with VLC-capable LED lights and open-source oneM2M software. The experimental results for virtual museum services show that VLC data packets can be exchanged within 590 ms and that handover between VLC transmitters can be completed within 210 ms in the testbed network.
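The abstract refers to a dedicated VLC frame format (Figures 8 and 14 below). As a rough illustration of what framing location-based VLC data can look like, here is a minimal Python sketch; the preamble, field widths, and CRC choice are assumptions for illustration, not the authors' actual format.

```python
# Hypothetical uni-directional VLC frame: preamble | VT id | seq | len | payload | CRC32.
# All field names and sizes are assumptions, not the paper's format.
import struct
import zlib

PREAMBLE = b"\xAA\xAA"  # assumed sync pattern

def build_frame(vt_id: int, seq: int, payload: bytes) -> bytes:
    """Pack transmitter ID, sequence number, payload length, payload, and CRC32."""
    header = struct.pack(">HBH", vt_id, seq & 0xFF, len(payload))
    body = header + payload
    crc = struct.pack(">I", zlib.crc32(body))
    return PREAMBLE + body + crc

def parse_frame(frame: bytes):
    """Validate the preamble and CRC, then unpack the header fields."""
    assert frame[:2] == PREAMBLE, "bad preamble"
    body, crc = frame[2:-4], frame[-4:]
    assert struct.unpack(">I", crc)[0] == zlib.crc32(body), "CRC mismatch"
    vt_id, seq, length = struct.unpack(">HBH", body[:5])
    return vt_id, seq, body[5:5 + length]

frame = build_frame(vt_id=7, seq=1, payload=b"exhibit-42")
print(parse_frame(frame))  # (7, 1, b'exhibit-42')
```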
Figures:
Figure 1: VLC communication scenarios: (a) bi-directional VLC; (b) uni-directional VLC.
Figure 2: Network reference model for VLC-based IoT.
Figure 3: Protocol stacks used for VLC-based IoT.
Figure 4: Overview of device initialization.
Figure 5: AA initialization.
Figure 6: VT initialization.
Figure 7: VR initialization.
Figure 8: VLC frame format.
Figure 9: Device monitoring.
Figure 10: VLC data transport.
Figure 11: VR handover across VTs.
Figure 12: Testbed network configuration for the virtual museum service.
Figure 13: Protocol stack for the implementation of VLC-based IoT.
Figure 14: VLC frame structure used in experimentation.
Figure 15: Packets captured during AA initialization.
Figure 16: Example of a VLC frame used in experimentation.
Figure 17: Packets captured during VR initialization.
Figure 18: Packets captured during device monitoring.
Figure 19: Packets captured during the VLC data transport operation.
Figure 20: Packets captured during the VR handover operation.
Figure 21: Times taken during VLC data transport.
Figure 22: Times taken during VR handover.
16 pages, 2232 KiB  
Article
A New Approach to Unwanted-Object Detection in GNSS/LiDAR-Based Navigation
by Mathieu Joerger, Guillermo Duenas Arana, Matthew Spenko and Boris Pervan
Sensors 2018, 18(8), 2740; https://doi.org/10.3390/s18082740 - 20 Aug 2018
Cited by 11 | Viewed by 4153
Abstract
In this paper, we develop new methods to assess the safety risks of an integrated GNSS/LiDAR navigation system for highly automated vehicle (HAV) applications. LiDAR navigation requires feature extraction (FE) and data association (DA). In prior work, we established an FE and DA risk prediction algorithm assuming that the set of extracted features matched the set of mapped landmarks. This paper addresses that limiting assumption by incorporating a Kalman filter innovation-based test to detect unwanted objects (UOs). UOs include unmapped, moving, and wrongly excluded landmarks. An integrity risk bound is derived to account for the risk of not detecting UOs. Direct simulations and preliminary testing help quantify the impact of UO monitoring on integrity and continuity in an example GNSS/LiDAR implementation.
(This article belongs to the Special Issue GNSS and Fusion with Other Sensors)
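The core of an innovation-based test can be sketched compactly: a measurement whose Kalman filter innovation is too large relative to the innovation covariance is flagged as a potential UO. The sketch below assumes a chi-square threshold sized by a false-alarm budget; the matrices and numbers are illustrative, not the paper's implementation.

```python
# Minimal innovation (NIS) test sketch; H, P, R and the threshold policy are assumptions.
import numpy as np
from scipy.stats import chi2

def innovation_test(z, z_pred, H, P, R, p_false_alarm=1e-5):
    """Flag a measurement as a potential UO when its normalized innovation
    squared (NIS) exceeds a chi-square threshold set by the false-alarm budget."""
    gamma = z - z_pred                      # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    nis = float(gamma @ np.linalg.solve(S, gamma))
    threshold = chi2.ppf(1.0 - p_false_alarm, df=len(z))
    return nis > threshold, nis

# Toy 2D example: the first measurement component deviates far more than expected.
H = np.eye(2); P = 0.05 * np.eye(2); R = 0.01 * np.eye(2)
is_uo, nis = innovation_test(np.array([6.5, 1.2]), np.array([5.0, 1.0]), H, P, R)
print(is_uo, round(nis, 1))                 # True, ~38.2
```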
Figures:
Figure 1: Defining integrity risk for automotive applications. The integrity risk is the probability of the car being outside the alert limit requirement box (blue shaded area) when it was estimated to be inside the box. When lateral deviation is of primary concern, the alert limit is the distance ℓ between the edge of the car and the edge of the lane.
Figure 2: Simulation results assuming no unwanted objects (UO). (Top left) The thick black line represents the actual cross-track positioning error and the thin line is the one-sigma covariance envelope; the lower plot shows P(HI_k) bounds for the GPS-denied area crossing scenario. (Top right) Snapshot vehicle-landmark geometry at the time step corresponding to the large increase in the P(HI_k) bound (time = 29 s). (Bottom left) Azimuth-elevation sky plot showing GPS satellite geometry at time = 29 s. (Bottom right) Snapshot LiDAR scan at time = 29 s, when landmark "1" is hidden behind landmark "4".
Figure 3: P(HMI_k) bounds taking into account the possibility of IA and the potential presence of UOs. The difference between the dashed black line and the solid black line quantifies the impact on P(HMI_k) of undetected UOs when assuming correct association (CA). The difference between the dashed red line and the solid red line measures the impact on P(HMI_k) of undetected UOs when accounting for incorrect associations.
Figure 4: Simulation results accounting for UOs. (a) P(HMI_k)-bound contributions under each UO hypothesis (H0 assumes no UO, H1 assumes a UO masks landmark "1", etc.): the overall risk is the thick green line. (b) Color-coded landmark geometry: the color code identifies which landmark is masked by a UO under the corresponding hypothesis in the left-hand-side plot.
Figure 5: Experimental setup of a forest-type scenario, where a GPS/LiDAR-equipped rover drives by six landmarks (cardboard columns) in a GPS-denied area. GPS is artificially blocked by a simulated tree canopy, and a precise differential GPS solution is used for truth trajectory determination.
Figure 6: Experimental results accounting for UOs. (a) P(HMI_k)-bound contributions for each unmapped object (UO) hypothesis for the preliminary experimental dataset: the overall risk is the thick black line. (b) Color-coded subsets identifying which landmark is occluded by a UO under each of the six single-UO hypotheses.
32 pages, 3513 KiB  
Article
EEG-Based Emotion Recognition Using Quadratic Time-Frequency Distribution
by Rami Alazrai, Rasha Homoud, Hisham Alwanni and Mohammad I. Daoud
Sensors 2018, 18(8), 2739; https://doi.org/10.3390/s18082739 - 20 Aug 2018
Cited by 98 | Viewed by 9079
Abstract
Accurate recognition and understanding of human emotions is an essential skill that can improve the collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is considered an active research field with challenging issues regarding the analysis of the nonstationary EEG signals and the extraction of salient features that can be used to achieve accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high-resolution time-frequency representation of the EEG signals and capture their spectral variations over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, we have utilized the 2D arousal-valence plane to develop four emotion labeling schemes for the EEG signals, such that each labeling scheme defines a set of emotion classes. The extracted time-frequency features are used to construct a set of subject-specific support vector machine classifiers that classify the EEG signals of each subject into the emotion classes defined by each of the four labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset. Moreover, we design three performance evaluation analyses, namely channel-based analysis, feature-based analysis, and neutral class exclusion analysis, to quantify the effects of utilizing different groups of EEG channels covering various brain regions, of reducing the dimensionality of the extracted time-frequency features, and of excluding the EEG signals that correspond to the neutral class on the capability of the proposed approach to discriminate between emotion classes. The results reported in the current study demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes. In particular, the average classification accuracies obtained in differentiating between the emotion classes defined by each of the four labeling schemes are within the range of 73.8%–86.2%. Moreover, the emotion classification accuracies achieved by our proposed approach are higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.
(This article belongs to the Special Issue Sensor Signal and Information Processing)
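As a rough illustration of the quadratic TFD machinery, the sketch below computes a basic discrete Wigner-Ville-style distribution as the FFT of the instantaneous autocorrelation over the lag variable. The paper's kernelized QTFD and its 13 extended features are not reproduced, and the toy chirp signal is an assumption standing in for an EEG segment.

```python
# Simplified quadratic TFD sketch (instantaneous autocorrelation + FFT over lag).
import numpy as np

def wvd(x):
    """For each time n, form the instantaneous autocorrelation
    x[n+k] * conj(x[n-k]) over the lag k, then FFT over the lag axis."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        kmax = min(n, N - 1 - n)
        r = np.zeros(N, dtype=complex)
        for k in range(-kmax, kmax + 1):
            r[k % N] = x[n + k] * np.conj(x[n - k])
        W[:, n] = np.real(np.fft.fft(r))
    return W  # rows: frequency bins, columns: time samples

fs = 128.0                                            # assumed sampling rate, Hz
t = np.arange(256) / fs
chirp = np.cos(2 * np.pi * (4.0 * t + 8.0 * t ** 2))  # frequency sweeps upward
print(wvd(chirp).shape)                               # (256, 256) time-frequency map
```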
Figures:
Figure 1: Graphical illustration of the four developed emotion labeling schemes. (a) The two emotion classes defined for the arousal scale (top) and valence scale (bottom) using the 1D 2-class labeling scheme (1D-2CLS). (b) The three emotion classes defined for the arousal scale (top) and valence scale (bottom) using the 1D-3CLS. (c) The four emotion classes defined using the 2D-4CLS. (d) The five emotion classes defined using the 2D-5CLS.
Figure 2: Top-view images of the constructed time-frequency representations (TFRs) for four EEG segments labeled using the 2D-4CLS: the Wigner–Ville distribution (WVD)- and Choi–Williams distribution (CWD)-based TFRs computed for EEG segments belonging to (a) the HAHV, (b) the LAHV, (c) the LALV, and (d) the HALV emotion classes. The time axis represents the indices of the samples within the EEG segment, and the frequency axis represents the frequency components within the segment; the color map to the right of each plot represents the values of the computed WVD and CWD at each point in the time-frequency plane.
Figure 3: The circumplex model for emotion description, showing the arrangement of the emotional states around the circumference of the 2D arousal-valence plane [23].
Figure 4: The ratio of the number of times each time-frequency feature is selected to the total number of selected features, computed for each of the four feature selection scenarios: (A) the 1D-2CLS using the arousal scale; (B) the 1D-2CLS using the valence scale.
Figure 5: The same selection ratios for (A) the 1D-3CLS using the arousal scale and (B) the 1D-3CLS using the valence scale.
Figure 6: The same selection ratios for (A) the 2D-4CLS and (B) the 2D-5CLS.
Figure 7: The same selection ratios after excluding the feature vectors associated with the neutral class: (A) the 1D-3CLS using the arousal scale; (B) the 1D-3CLS using the valence scale; (C) the 2D-5CLS using the valence scale.
19 pages, 1916 KiB  
Article
A Wearable Wrist Band-Type System for Multimodal Biometrics Integrated with Multispectral Skin Photomatrix and Electrocardiogram Sensors
by Hanvit Kim, Haena Kim, Se Young Chun, Jae-Hwan Kang, Ian Oakley, Youryang Lee, Jun Oh Ryu, Min Joon Kim, In Kyu Park, Hyuck Ki Hong, Young Chang Jo and Sung-Phil Kim
Sensors 2018, 18(8), 2738; https://doi.org/10.3390/s18082738 - 20 Aug 2018
Cited by 9 | Viewed by 10712
Abstract
Multimodal biometrics are promising for providing a strong security level for personal authentication, yet a multimodal biometric system intended for practical use needs to meet the criterion that the biometric signals be easy to acquire but not easily compromised. We developed a wearable wrist band integrated with multispectral skin photomatrix (MSP) and electrocardiogram (ECG) sensors to address the collectability, performance, and circumvention issues of multimodal biometric authentication. The band was designed to ensure collectability by sensing both MSP and ECG easily, and to achieve high authentication performance with low computation, efficient memory usage, and relatively fast response. Acquisition of MSP and ECG using contact-based sensors also prevents remote access to personal data. Personal authentication with multimodal biometrics using the integrated wearable wrist band was evaluated in 150 subjects and resulted in a 0.2% equal error rate (EER) and a 100% detection probability at 1% false acceptance rate (FAR), denoted PD.1, which is comparable to other state-of-the-art multimodal biometrics. An additional investigation with a separate MSP sensor, which enhanced contact with the skin, along with ECG reached 0.1% EER and 100% PD.1, showing the great potential of our in-house wearable band for practical applications. The results of this study demonstrate that our newly developed wearable wrist band may provide a reliable and easy-to-use multimodal biometric solution for personal authentication.
(This article belongs to the Special Issue Wearable Biomedical Sensors 2019)
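How EER and PD.1 fall out of genuine/impostor score distributions (compare the FAR/FRR curves of Figure 12 below) can be sketched in a few lines; the synthetic Gaussian distance scores here are assumptions standing in for the band's real matching distances.

```python
# EER and PD.1 from distance scores; the score distributions are synthetic assumptions.
import numpy as np

def eer_and_pd1(genuine, impostor):
    """Sweep a distance threshold; FAR = impostor acceptance rate,
    FRR = genuine rejection rate. EER is where FAR and FRR meet;
    PD.1 is the detection probability at the last threshold with FAR <= 1%."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor <= t).mean() for t in thresholds])
    frr = np.array([(genuine > t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    eer = (far[i] + frr[i]) / 2.0
    pd1 = 1.0 - frr[np.searchsorted(far, 0.01, side="right") - 1]
    return eer, pd1

rng = np.random.default_rng(0)
genuine = rng.normal(0.2, 0.05, 1000)    # small distances for the true user
impostor = rng.normal(0.6, 0.10, 1000)   # larger distances for impostors
print(eer_and_pd1(genuine, impostor))    # low EER, PD.1 near 1.0
```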
Figures:
Figure 1: Illustrations of differences in penetration depth for multispectral skin photomatrix (MSP) sensors by various wavelengths and distances between light source and detector.
Figure 2: An electrocardiogram (ECG) pulse measured by the in-house integrated wearable band, with annotations for the P wave, QRS complex, and T wave.
Figure 3: A block diagram of the in-house wearable security band (WSB).
Figure 4: A block diagram of the ECG signal processing module.
Figure 5: A block diagram of the MSP acquisition module.
Figure 6: (A) The layout of the LED arrays in an MSP module; (B) an MSP module equipped with eight red LEDs and eight infrared LEDs; (C) an MSP module equipped with eight yellow LEDs and eight infrared LEDs. In both designs, 32 photodiodes (PDs) are arranged between the two layers of LEDs in a 2 × 16 matrix.
Figure 7: The operating process of the LED sources and photodiode (PD) detectors for a single cycle of MSP data acquisition. One of the 16 LEDs, at the first visual channel (V1), turns on and each of the 32 PDs detects optical signals in sequence; after an interval, the LED at the second visual channel (V2) turns on and the PDs detect the signals again. This process is repeated for each of the eight visual LEDs (V1–V8) and eight infrared LEDs (IR1–IR8). The entire process spans approximately 1 s.
Figure 8: The optimized wrist-surface band, which firmly attaches the PD array onto the wrist (left panel). Examples of two bands with different curvatures, optimized for participants with small wrists (left in the right panel) and for participants with normal or thick wrists (right in the right panel).
Figure 9: The effect of applying a user-template-guided filter to the MSP signal. The blue line shows the user template enrolled in the device, the black line represents the test signal, and the red line is the guided-filtering result using the template as a guide image.
Figure 10: A comparison of the histograms of distance values before (left) and after (right) normalization by the maximum distance. A distance value is the Euclidean distance between a user template and a tested biometric signal (ECG or MSP) of the genuine user or an impostor.
Figure 11: Snapshots of the in-house data acquisitions for ECG (A,B) and MSP (C). One wearable ECG sensor contacts the wrist of the left arm (A), and the subject touches the ECG sensors with the index finger of the right hand, so that the potential difference between the left wrist and right finger is measured (B). The MSP sensor array is placed on the right wrist (C).
Figure 12: False acceptance rate (FAR) and false rejection rate (FRR) curves over threshold values for the proposed multimodal biometric methods: ECG + MSP integrated in the wearable security band (WSB), and ECG and MSP measured separately for better contact (ECG + separate MSP sensor).
16 pages, 2763 KiB  
Article
Integration of Underwater Radioactivity and Acoustic Sensors into an Open Sea Near Real-Time Multi-Parametric Observation System
by Sara Pensieri, Dionisis Patiris, Stylianos Alexakis, Marios N. Anagnostou, Aristides Prospathopoulos, Christos Tsabaris and Roberto Bozzano
Sensors 2018, 18(8), 2737; https://doi.org/10.3390/s18082737 - 20 Aug 2018
Cited by 13 | Viewed by 4921
Abstract
This work deals with the installation of two smart in-situ sensors (for underwater radioactivity and underwater sound monitoring) on the Western 1-Mediterranean Moored Multisensor Array (W1-M3A) ocean observing system, which is equipped with all appropriate modules for continuous, long-term and real-time operation. All tasks necessary for their integration are described, such as the upgrade of the sensors for interoperable and power-efficient operation, the conversion of data into a homogeneous and standard format, the automated pre-processing of the raw data, the real-time integration of data and metadata (related to data processing and the calibration procedure) into the controller of the observing system, the testing and debugging of the developed algorithms in the laboratory, and the quality control of the obtained data. The integration allowed the transmission of the acquired data in near-real time along with a complete set of typical ocean and atmospheric parameters. A preliminary analysis of the data is presented, providing qualitative information during rainfall periods and combining gamma-ray detection rates with passive acoustic data. The analysis shows satisfactory identification of rainfall events by both sensors, consistent with the estimates obtained by the rain gauge operating on the observatory and the remote observations collected by meteorological radars.
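One analysis step the abstract implies, flagging rainfall events as excursions of the radon-progeny gamma count rate above a rolling baseline (washout raises the count rate, cf. Figure 6 below), can be sketched as follows; the window length and the 3-sigma rule are illustrative assumptions, not the authors' processing chain.

```python
# Rolling-baseline event flagging sketch; window and threshold are assumptions.
import numpy as np

def flag_events(counts, window=24, k=3.0):
    """True where the count rate exceeds the rolling median of the preceding
    window by k times that window's standard deviation."""
    counts = np.asarray(counts, dtype=float)
    mask = np.zeros(len(counts), dtype=bool)
    for i in range(window, len(counts)):
        base = counts[i - window:i]
        mask[i] = counts[i] > np.median(base) + k * base.std()
    return mask

rng = np.random.default_rng(1)
cps = rng.poisson(50, 200).astype(float)     # synthetic hourly background rate
cps[120:130] += 40                           # washout peak during a rain event
print(np.flatnonzero(flag_events(cps))[:5])  # flagged indices at/after 120
```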
Figures:
Figure 1: The surface buoy of the W1-M3A observing system and a map of the Ligurian basin. The square at the center of the basin marks the position of the W1-M3A observing system; the circle shows the operational range of the weather radars in the Liguria region used in the analysis.
Figure 2: Scheme of the main components constituting the W1-M3A observatory.
Figure 3: Sequence of processes implemented by the onboard real-time controller to integrate KATERINA II and UPAL into the data flow of the W1-M3A observatory. The one-hour duty cycle refers to a generic time instant HH:00. Reference times on the vertical temporal axis are not to scale.
Figure 4: Sketch of the surface buoy of the W1-M3A observatory and images of the deployed sensors: the gamma-ray spectrometer at 6 m depth and the underwater passive aquatic listener close to the damping disk of the buoy at about 36 m depth.
Figure 5: Gamma-ray spectra acquired by the KATERINA II system with and without rain. The gamma-ray rate is plotted versus channels (raw data, bottom axis) and keV (energy calibrated, top axis).
Figure 6: (a) Counting-rate measurements of gamma-rays attributed to the radon progenies ²¹⁴Pb and ²¹⁴Bi; (b) total detected gamma-rays; (c) rain measured by the rain gauge on the observatory and by the weather radars, and estimated by UPAL.
26 pages, 2849 KiB  
Article
Optimal Particle Filter Weight for Bayesian Direct Position Estimation in a GNSS Receiver
by Jürgen Dampf, Kathrin Frankl and Thomas Pany
Sensors 2018, 18(8), 2736; https://doi.org/10.3390/s18082736 - 20 Aug 2018
Cited by 14 | Viewed by 5094
Abstract
Direct Position Estimation (DPE) is a rather new Global Navigation Satellite System (GNSS) technique to estimate the user position, velocity and time (PVT) directly from correlation values of the received GNSS signal with receiver-internal replica signals. If combined with Bayesian nonlinear filters (like particle filters), the method can cope with multi-modal probability distributions and avoids the linearization step that converts correlation values into pseudoranges. The measurement update equation (particle weight update) is derived from a standard GNSS signal model, but we show that it cannot be used directly in a receiver implementation. The numerical evaluation of the formulas needs to be carried out in a logarithmic scale, including various normalizations. Furthermore, the residual user range errors (coming from orbit, satellite clock, multipath or ionospheric errors) need to be included from the very beginning in the stochastic signal model. With these modifications, sensible probability functions can be derived from the GNSS multi-correlator values. The occurrence of multipath yields a natural widening of the probability density function. The approach is demonstrated with simulated and real-world Binary Phase Shift Keying signals with 1.023 MHz code rate (BPSK(1)) within the context of a real-time, software-based Bayesian DPE receiver.
(This article belongs to the Special Issue GNSS and Fusion with Other Sensors)
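The log-domain bookkeeping the abstract insists on can be illustrated with a minimal particle weight update: per-particle log-likelihoods are accumulated and normalized via a max-shifted log-sum-exp, never exponentiated raw. This shows only the numerical pattern; the exact form of the paper's Equations (34) and (35) is not reproduced, and the stand-in log-likelihoods are assumptions.

```python
# Log-domain particle weight update sketch; likelihood values are synthetic stand-ins.
import numpy as np

def update_log_weights(log_w_prev, log_likelihood):
    """Bayesian measurement update carried out entirely in log scale."""
    log_w = log_w_prev + log_likelihood
    log_w -= log_w.max()                       # max-shift for numerical stability
    log_w -= np.log(np.exp(log_w).sum())       # log-sum-exp: weights sum to one
    return log_w

rng = np.random.default_rng(2)
log_w_prev = np.zeros(1000) - np.log(1000)     # uniform prior over 1000 particles
log_like = -0.5 * rng.normal(0, 5, 1000) ** 2  # stand-in for correlator log-likelihoods
log_w = update_log_weights(log_w_prev, log_like)
print(np.exp(log_w).sum())                     # ~1.0
```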
Graphical abstract

Figures:
Figure 1: Bayesian Direct Position Estimation (BDPE) processing scheme as implemented in the software-based Global Navigation Satellite System (GNSS) receiver. (1) Analogue-to-digital conversion, the first stage of the receiver; (2) GNSS signal processing, which produces correlation values for each tracked channel, shown symbolically in (3); (4) the correlation values are mapped and weighted to the particle cloud of a particle filter.
Figure 2: The normalized logarithmic particle weights w̃_k^i from Equation (35), assuming a uniform distribution from the previous epoch (w̃_{k-1}^i = 0) and using the optimal particle weight update from Equation (34). The particles are equidistantly distributed over a grid in the north-east plane. The lines through the plot correspond to the weighted correlation function in the position domain and thus each refer to a GNSS signal. In a proper case (correct user velocity, clock error and drift), the lines overlap at a distinct point in the position domain, here the north-east plane; the resulting peak represents the probability of the 2D position. Note that the plotted weights are normalized and in logarithmic scale, so the peak has a maximum value of 0. The coherent integration time was T_coh = 2 ms. The open-sky data were recorded at LAT = 47.06446263 deg, LON = 15.40777110 deg on the rooftop of Reininghausstraße 13a, Graz, Austria.
Figure 3: A case in which the clock error is improperly aligned: the GNSS signals do not overlap at a distinct position in the north-east plane. The plot shows the normalized logarithmic particle weights w̃_k^i from Equation (35), assuming a uniform distribution from the previous epoch (w̃_{k-1}^i = 0) and using the optimal particle weight from Equation (34). The particles are equidistantly distributed over a grid, and each line corresponds to the weighted correlation function of a distinct GNSS signal. The lines look very broad even after the weight update, which comes from the logarithmic scale log(w_k^i). The coherent integration time was T_coh = 2 ms; same open-sky recording as in Figure 2.
Figure 4: Based on a MATLAB simulation (Release 2015b, The MathWorks Inc., Natick, MA, USA) [31]. The upper plot shows the correlation function from Equation (15) for different coherent integration times at C/N0 = 45 dB-Hz; the simulated signal was generated without noise (i.e., the C/N0 merely defines the correlation amplitude). The lower plot shows the corresponding probability function after one weight update, Equation (36), for one signal (N = 1) and assuming a uniform distribution from the previous epoch (w̃_{k-1}^i = 0) in Equation (34). The statistics in the lower plot refer to the weighted mean μ and weighted standard deviation σ_w.
Figure 5: Impact of different GNSS signal amplitudes on the correlation values |P| and the resulting probability function for a constant coherent integration time T_coh and constant code delay bias standard deviation σ_Δτ. Higher signal strengths result in a smaller weighted standard deviation σ_w of the probability function.
Figure 6: Impact of different code delay bias standard deviations σ_Δτ on the weights w_k^i for a coherent integration time T_coh = 1 ms. For the given conditions, only the weighted standard deviation σ_w for σ_Δτ = 6 seems to approach the theoretical lower limit of σ_Δτ. In particular, σ_Δτ = 0.1 cannot be reached due to the influence of the low correlation time T_coh at the given C/N0.
Figure 7: Impact of different code delay bias standard deviations σ_Δτ on the weights w_k^i for T_coh = 10 ms. σ_w approaches the theoretical limits for σ_Δτ = 3 and σ_Δτ = 6.
Figure 8: Impact of different code delay bias standard deviations σ_Δτ on the weights w_k^i for T_coh = 100 ms. σ_w now approaches the theoretical limits for all σ_Δτ. Note also the significantly changed amplitude of |P|.
Figure 9: Influence of constructive and destructive multipath on the probability function, with a relative multipath offset Δτ_MP = 50 m and an amplitude α_MP = 0.5 with respect to the line-of-sight (LOS) signal. The black dotted line refers to the LOS signal. The multipath variants shift the weighted mean μ in opposite directions, and the destructive multipath significantly increases σ_w due to the lower amplitude of |P|.
Figure 10: Impact of different multipath offsets Δτ_MP on the probability function. For better visualization, a strong multipath relative to the LOS signal, α_MP = 0.9, is chosen. Higher offsets increase the weighted mean μ but do not necessarily increase the weighted standard deviation σ_w. The dotted black line refers to the LOS signal.
Figure 11: An increase of the multipath amplitude α_MP shifts the weighted mean μ and increases the weighted standard deviation σ_w. The probability function naturally covers the uncertainty even for α_MP = 1, when the multipath signal is as strong as the LOS signal. The dotted black line refers to the LOS signal.
Figure 12: Frequency spectrum of the GPS L1 band, with a root-mean-square (RMS) of the samples of β_sμ,RMS = 1.71. At a frequency offset of approximately 2.5 MHz, a continuous-wave (CW) interference peak is present. The dataset was recorded with a measurement van during a measurement campaign. Because the CW interference was present during the complete measurement run, it is assumed to have been caused by one of the active on-board measurement instruments or radio connections; it is also assumed not to influence the measurement, since it lies significantly outside the main lobe of the analysed GPS L1 C/A (Coarse/Acquisition) signal.
Figure 13: Real-world open-sky scenario of satellite GPS L1 C/A Pseudo-Random-Noise (PRN) number 12. The plots show, from left to right, three epochs referring to GPS week/second (W/S). The weighted standard deviations σ_w approach the theoretical lower limit of σ_Δτ = 3 m for this ideal case. The dataset was recorded on the roof at LAT = 47.06446263 deg, LON = 15.40777110 deg at Reininghausstraße 13a, Graz, Austria. The red crosses in the upper plots show the correlation values at code offset τ; the black line shows the sinc-interpolated correlation values, which are used to obtain the weights w_k^i shown in the lower plots.
Figure 14: Real-world urban scenario of satellite GPS L1 C/A PRN 27. The plots show, from left to right, three epochs at different places in an urban environment, corresponding to the red dots from left to right in Figure 17. The correlation values |P| are expected to vary significantly due to shadowing and multipath. The weighted mean is shifted for all positions, and both the correlation and probability functions of the second point (middle row) appear significantly affected by multipath. The red crosses and black lines are as in Figure 13.
Figure 15: Real-world urban scenario of satellite GPS L1 C/A PRN 21. The plot content is analogous to Figure 14. For this satellite, the signal is assumed to be less affected by the environment because there are fewer variations of the signal amplitude. Only at the first position is the probability function biased.
Figure 16: Real-world urban scenario of satellite GPS L1 C/A PRN 18. The plot content is analogous to Figure 14. This satellite signal is significantly affected by the environment. From the azimuth of satellite PRN 18 and the location of the buildings shown in Figure 17, it can be assumed that the GNSS signal is blocked at the first and last positions, which fits the amplitude of the correlation values. Interestingly, at the third position, the significantly small correlation value leads to an increase in the variance estimate.
Figure 17: Environment for the urban scenario with three red measurement points. The driving direction was to the east, so the red measurement points correspond, from left to right, to the columns of Figures 14–16. The upper-left plot shows the satellite constellation; the three analyzed satellites, PRN 27, PRN 21 and PRN 18, are marked with black circles. The measurement was taken in the Steyrergasse in Graz, Austria. The point at W/S 1901/317839.4 refers to LAT = 47.06430622 deg, LON = 15.45391867 deg. Map image © 2017 Google, Landsat/Copernicus.
Full article ">
15 pages, 883 KiB  
Article
Green Compressive Sampling Reconstruction in IoT Networks
by Stefania Colonnese, Mauro Biagi, Tiziana Cattai, Roberto Cusani, Fabrizio De Vico Fallani and Gaetano Scarano
Sensors 2018, 18(8), 2735; https://doi.org/10.3390/s18082735 - 20 Aug 2018
Cited by 4 | Viewed by 3743
Abstract
In this paper, we address the problem of green Compressed Sensing (CS) reconstruction within Internet of Things (IoT) networks, both in terms of computing architecture and reconstruction algorithms. The approach is novel since, unlike most of the literature dealing with energy-efficient gathering of the CS measurements, we focus on the energy efficiency of the signal reconstruction stage given the CS measurements. As a first novel contribution, we present an analysis of the energy consumption within the IoT network under two computing architectures. In the first, reconstruction takes place within the IoT network and the reconstructed data are encoded and transmitted out of the IoT network; in the second, all the CS measurements are forwarded to off-network devices for reconstruction and storage, i.e., reconstruction is off-loaded. Our analysis shows that the two architectures differ significantly in consumed energy, and it outlines a theoretically motivated criterion for selecting a green CS reconstruction computing architecture. Specifically, we present a decision function that determines which architecture outperforms the other in terms of energy efficiency. The decision function depends on a few IoT network features, such as the network size, the sink connectivity, and other system parameters. As a second novel contribution, we show how to move beyond the classical comparison of CS reconstruction algorithms, usually carried out with respect to the achieved accuracy alone: we consider the consumed energy and analyze the energy vs. accuracy trade-off. The presented approach, jointly considering signal processing and IoT network issues, is a relevant contribution to designing green compressive sampling architectures in IoT networks.
(This article belongs to the Special Issue Green Communications and Networking for IoT)
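The flavor of such a decision function can be sketched with a deliberately simplified energy model: reconstruct in-network only when encoding and transmitting the reconstructed signal costs less than forwarding all CS measurements. The cost model below (per-bit vs. per-operation energies, compression ratios) is an assumed simplification, not the paper's exact expression.

```python
# Simplified in-network vs. off-network energy decision; the cost model is an assumption.
def prefer_in_network(N, rho_cs, rho_enc, E_b, E_p, L=8):
    """N: signal length, rho_cs: CS measurement ratio, rho_enc: encoder rate,
    E_b: energy per transmitted bit, E_p: energy per processing op, L: bits/sample."""
    E_in = N * E_p + rho_enc * N * L * E_b   # reconstruct in-network, encode, transmit
    E_off = rho_cs * N * L * E_b             # forward raw CS measurements off-network
    return E_in < E_off, E_in / E_off

decide, ratio = prefer_in_network(N=32 * 32, rho_cs=0.2, rho_enc=0.02, E_b=1e-6, E_p=1e-9)
print(decide, round(ratio, 3))  # True, ~0.101: in-network wins in this toy regime
```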
Figures:
Figure 1: Computing architectures for IoT CS reconstruction.
Figure 2: E_in-net/E_off-net versus E_b/E_p, for L = 8, ρ_CS = 0.2, ρ_enc = 0.02.
Figure 3: Ratio R_in = E_in-net/E_off-net versus the ratio ρ_CS/ρ_enc and E_b/E_p (N = 32 × 32, L = 8).
Figure 4: Ratio E_in-net/E_off-net versus the ratio ρ_CS/ρ_enc and E_b/E_p (N = 128 × 128, L = 8).
Figure 5: R_in versus E_b/E_p and N, for L = 8, ρ_CS = 0.25, ρ_enc = 0.25.
Figure 6: D_in versus E_b/E_p and N, for L = 8, ρ_CS = 0.25, ρ_enc = 0.025.
Figure 7: Oceanographic field (N = 64 × 64).
Figure 8: Energy E (J) vs. PSNR for in-network CS reconstruction algorithms (NLCoSaMP, CoSaMP, and [32]) and the off-network one ([33]).
20 pages, 4203 KiB  
Article
An Occlusion-Aware Framework for Real-Time 3D Pose Tracking
by Mingliang Fu, Yuquan Leng, Haitao Luo and Weijia Zhou
Sensors 2018, 18(8), 2734; https://doi.org/10.3390/s18082734 - 20 Aug 2018
Viewed by 4661
Abstract
Random forest-based methods for 3D temporal tracking over an image sequence have gained increasing prominence in recent years. They do not require the object's texture and use only the raw depth images and the previous pose as input, which makes them especially suitable for textureless objects. These methods learn a built-in occlusion handling from predetermined occlusion patterns, which are not always able to model the real case. Moreover, the input to the random forest is mixed with more and more outliers as the occlusion deepens. In this paper, we propose an occlusion-aware framework capable of real-time and robust 3D pose tracking from RGB-D images. To this end, the proposed framework is anchored in the random forest-based learning strategy, referred to as RFtracker. We aim to enhance its performance from two aspects: integrated local refinement of the random forest on one side, and online rendering-based occlusion handling on the other. In order to eliminate the inconsistency between learning and prediction in RFtracker, a local refinement step is embedded to guide the random forest towards the optimal regression. Furthermore, we present an online rendering-based occlusion handling to improve robustness against dynamic occlusion. Meanwhile, a lightweight convolutional neural network-based motion-compensated (CMC) module is designed to cope with fast motion and the inevitable physical delay caused by imaging frequency and data transmission. Finally, experiments show that our proposed framework copes better with heavily occluded scenes than RFtracker while preserving real-time performance.
(This article belongs to the Collection Positioning and Navigation)
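The rendering-based occlusion handling (see Figure 6 below) lends itself to a compact sketch: pixels whose observed depth is markedly closer than the depth rendered from the predicted pose are likely occluders. The tolerance value and the toy arrays below are assumptions for illustration, not the paper's parameters.

```python
# Depth-based occlusion mask sketch; tolerance and toy data are assumptions.
import numpy as np

def occlusion_mask(observed_depth, rendered_depth, tol=0.02):
    """True where something sits in front of the rendered object surface."""
    on_object = rendered_depth > 0                   # valid rendered object pixels
    closer = observed_depth < rendered_depth - tol   # occluder in front of the surface
    return on_object & closer

rendered = np.full((4, 4), 0.80)   # object rendered at 0.80 m from the camera
observed = rendered.copy()
observed[1:3, 1:3] = 0.55          # a hand passes in front of part of the object
mask = occlusion_mask(observed, rendered)
print(mask.mean())                 # 0.25: fraction of occluded object pixels
```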
Figures:
Figure 1: Pipeline of the proposed occlusion-aware framework. CMC: convolutional neural network-based motion compensation.
Figure 2: View sphere in RFtracker. Each vertex on the view sphere represents a camera viewpoint. Trees inside the blue region and the green region are selected for the final prediction in the testing stage. The viewpoint corresponding to the red point at the region's center is determined by the previous pose.
Figure 3: The process of determining the elements of an indicator vector. The blue leaf and non-leaf nodes in the forest represent the data stream: input data travel down the trees and are finally stored in the blue leaf nodes. An indicator vector with the same dimension as the number of leaves is then determined; each element depends on whether the corresponding leaf node contains the input data.
Figure 4: Architecture (left) and component sizes (right) of the CMC module. The early layer consists of two parallel convolutional layers with 64 3 × 3 filters. Fire modules (in crimson) [31] replace conventional convolutional layers; their three hyper-parameters are the number of filters in the squeeze layer, the number of 1 × 1 filters, and the number of 3 × 3 filters in the expand layer. For the input convolutional layers conv1-1 and conv1-2, the stride is set to 2.
Figure 5: Exemplary training samples used in the CMC module. Color (top) and depth (bottom) image pairs (a) and (c) are synthesized by feeding a random pose to the rendering engine; (b) and (d) are the corresponding image pairs after augmentation operations and stacking relative transformations.
Figure 6: An example of occlusion detection with the depth map; RGB images are provided for better visualization. A test sample from the Occluded LineMod dataset [21] shows the flow of occlusion detection. (a) RGB image crop (top) and depth image crop (bottom) of an occluded scene. The synthetic depth maps (b) and the projected masks (c) on the RGB image are generated with the compensated pose (top) and the ground-truth pose (bottom), respectively. The labeled region corresponds to the low-quality area shown in (d).
Figure 7: Overlapping ratio between the rendering template and the tracked object in a real scene.
Figure 8: Comparison of tracking accuracy against the number of iterations. All test objects are from the LineMod dataset [23].
Figure 9: Example image crops showing the test results of RFtracker* (white) and RFtracker-B (yellow).
Full article ">
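">
To make the occlusion-handling idea in the entry above concrete, the following minimal Python sketch compares a depth map rendered at the predicted pose against the observed depth and marks pixels where a foreign surface lies in front of the object. The 5 cm margin, array conventions, and function names are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of depth-based occlusion masking (illustrative, not the
# paper's implementation). "rendered" is a synthetic depth map of the object
# at the predicted pose; "observed" is the sensor depth crop; both in metres.
import numpy as np

def occlusion_mask(rendered: np.ndarray, observed: np.ndarray,
                   margin: float = 0.05) -> np.ndarray:
    """Mark pixels where the observed surface lies in front of the object."""
    on_object = rendered > 0                      # pixels covered by the model
    return on_object & (observed < rendered - margin)

def visible_ratio(rendered: np.ndarray, observed: np.ndarray) -> float:
    """Fraction of the projected object that is actually visible."""
    on_object = rendered > 0
    occluded = occlusion_mask(rendered, observed)
    return 1.0 - occluded.sum() / max(on_object.sum(), 1)
```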
20 pages, 10232 KiB  
Article
Comparison of CBERS-04, GF-1, and GF-2 Satellite Panchromatic Images for Mapping Quasi-Circular Vegetation Patches in the Yellow River Delta, China
by Qingsheng Liu, Chong Huang, Gaohuan Liu and Bowei Yu
Sensors 2018, 18(8), 2733; https://doi.org/10.3390/s18082733 - 20 Aug 2018
Cited by 32 | Viewed by 4859
Abstract
Vegetation in arid and semi-arid regions frequently exists in patches, which can be effectively mapped by remote sensing. However, not all satellite images are suitable for detecting decametric-scale vegetation patches because of low spatial resolution. This study compared the capability of panchromatic images from the first Gaofen Satellite (GF-1), the second Gaofen Satellite (GF-2), and the China-Brazil Earth Resource Satellite 4 (CBERS-04) for mapping quasi-circular vegetation patches (QVPs) in the Yellow River Delta, China, using K-Means (KM) clustering and object-based example-based feature extraction with support vector machine classification (OEFE). Both approaches provide relatively high classification accuracy with GF-2. For all five images, the root mean square errors (RMSEs) for area, perimeter, and perimeter/area ratio were smaller using the KM than the OEFE, indicating that the results from the KM are more similar to ground truth. Although the QVPs mapped from finer-spatial-resolution images appeared more accurate, the accuracy improvement in terms of QVP area, perimeter, and perimeter/area ratio was limited, and most of the QVPs detected only by finer-spatial-resolution imagery differed from the actual QVPs by more than 40% in these three parameters. Compared with the KM approach, the OEFE approach performed better at describing vegetation patch shape. Coupling CBERS-04 with the OEFE approach could suitably map the QVPs (overall accuracy 75.3%). This is important for ecological protection managers weighing the cost-effectiveness of image spatial resolution against QVP mapping accuracy. Full article
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1: Location of the study area in the Yellow River Delta (YRD).
Figure 2: CBERS-04 (acquired on 10 July 2016) and GF-1 (acquired on 10 August 2016) panchromatic images covering test areas (a) 1 and (b) 2. The lower right images in (a) and (b) are subsets of the original-size CBERS-04 and GF-1 panchromatic images, respectively.
Figure 3: The flow chart of this research.
Figure 4: Mapping results of the QVPs, illustrated in the same small portion of test area 1: (a) mapped from CBERS-04 imagery using the K-Means (KM) classifier; (b) mapped from CBERS-04 imagery using the object-based example-based feature extraction with support vector machine classification (OEFE) approach; (c) mapped from GF-1 imagery using the KM classifier; (d) mapped from GF-1 imagery using the OEFE approach; (e) mapped from GF-2 imagery using the KM classifier; (f) mapped from GF-2 imagery using the OEFE approach.
Figure 5: Mapping results of the QVPs in test area 2: (a) mapped from CBERS-04 imagery using the KM classifier; (b) mapped from CBERS-04 imagery using the OEFE approach; (c) mapped from GF-1 imagery using the KM classifier; (d) mapped from GF-1 imagery using the OEFE approach; (e) mapped from GF-2 imagery using the KM classifier; (f) mapped from GF-2 imagery using the OEFE approach.
Figure 6: (a) CBERS-04 panchromatic image acquired on 19 May 2016 and (b) GF-2 panchromatic image acquired on 11 August 2016 for test area 2. Dark and bright quasi-circular objects are the QVPs.
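">
As a rough illustration of the KM side of the comparison above, the sketch below clusters the pixels of a panchromatic band into two classes with scikit-learn and keeps the darker cluster as the patch mask. The two-class setup, the brightness rule, and the placeholder image are assumptions for illustration, not the study's exact configuration.

```python
# Minimal sketch of unsupervised patch mapping with K-Means (illustrative).
import numpy as np
from sklearn.cluster import KMeans

pan = np.random.rand(200, 200)                 # placeholder panchromatic band
labels = (KMeans(n_clusters=2, n_init=10, random_state=0)
          .fit_predict(pan.reshape(-1, 1))
          .reshape(pan.shape))

# Decide which cluster is "vegetation" by mean brightness (darker cluster here,
# an assumption; the appropriate rule depends on the scene).
veg_cluster = int(np.argmin([pan[labels == k].mean() for k in (0, 1)]))
veg_mask = labels == veg_cluster
```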
20 pages, 17920 KiB  
Article
A High Precision Quality Inspection System for Steel Bars Based on Machine Vision
by Xinman Zhang, Jiayu Zhang, Mei Ma, Zhiqi Chen, Shuangling Yue, Tingting He and Xuebin Xu
Sensors 2018, 18(8), 2732; https://doi.org/10.3390/s18082732 - 20 Aug 2018
Cited by 26 | Viewed by 5975
Abstract
Steel bars play an important role in modern construction projects, and their quality greatly affects the safety of buildings. It is therefore essential to verify that steel bars meet specifications. However, the existing manual detection methods are costly, slow, and offer poor precision. To solve these problems, a high-precision quality inspection system for steel bars based on machine vision is developed. We propose two algorithms: the sub-pixel boundary location method (SPBLM) and the fast stitch method (FSM). A total of five sensors, including a CMOS, a level sensor, a proximity switch, a voltage sensor, and a current sensor, are used to monitor the device conditions and capture images or video. The device captures abundant high-definition images and video with a uniformly and stably mounted smartphone at the construction site, and the data are processed in real time on the smartphone. The detection results, including steel bar diameter, spacing, and quantity, are then reported by a practical app. The system achieves high accuracy in inspection tasks (absolute error as low as 0.04 mm and relative error as low as 0.002% when calculating diameter and spacing; zero error in counting steel bars), and all three parameters can be detected at the same time. None of these features are available in existing systems, and the device and method can be widely applied to steel bar quality inspection at construction sites. Full article
(This article belongs to the Special Issue Sensors Signal Processing and Visual Computing)
Show Figures

Figure 1: Schematic diagram of the video data acquisition system.
Figure 2: Workflow of data acquisition.
Figure 3: Two common edge detection states: (a) ideal state of a steel bar; (b) apropos Hough transformation threshold; (c) correct inner position acquired; (d) correct diameter obtained; (e) common state of a steel bar; (f) another Hough threshold; (g) wrong inner position acquired; (h) erroneous diameter result that may be obtained.
Figure 4: Schematic diagram of sub-pixel boundary positioning.
Figure 5: Recording the pixel numbers of edges by scanning the first row of pixels of a projection.
Figure 6: Pixel value/actual size transformation.
Figure 7: Image stitching treatment.
Figure 8: Part of one stitched image.
Figure 9: Overall structure diagram of the acquisition device.
Figure 10: Main hardware components of the steel bar quality inspection system.
Figure 11: Functional block diagram of the controller.
Figure 12: Physical appearance of sensors and controllers: (a) panel of the controller; (b) fixed parts of the controller; (c) panel of the controller and level sensor; (d) guide rail and level sensor.
Figure 13: Overall system framework design block diagram.
Figure 14: Android application interface: (a) main interface; (b) parameter setting interface; (c) current processing results shown on the main interface; (d) final results interface.
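">
The sub-pixel idea behind SPBLM can be illustrated on a one-dimensional intensity profile: the edge position is interpolated between the two pixels that bracket a threshold crossing, and a calibration factor converts pixels to millimetres. The threshold, profile values, and scale factor below are illustrative assumptions, not the paper's algorithm in full.

```python
# Minimal sketch of sub-pixel edge localization on a 1-D intensity profile.
import numpy as np

def subpixel_edge(profile: np.ndarray, threshold: float) -> float:
    """Return the first threshold-crossing position, interpolated in pixels."""
    above = profile >= threshold
    i = int(np.argmax(above[1:] != above[:-1]))   # first change of state
    y0, y1 = profile[i], profile[i + 1]
    return i + (threshold - y0) / (y1 - y0)

# Toy dark-bright-dark profile across one bar; 0.12 mm/px is an assumed
# calibration factor obtained from a target of known size.
profile = np.array([10., 12., 15., 80., 200., 210., 205., 90., 14., 11.])
left = subpixel_edge(profile, 100.0)
right = len(profile) - 1 - subpixel_edge(profile[::-1], 100.0)
mm_per_px = 0.12
diameter_mm = (right - left) * mm_per_px
```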
27 pages, 3220 KiB  
Article
Distributed Egocentric Betweenness Measure as a Vehicle Selection Mechanism in VANETs: A Performance Evaluation Study
by Ademar T. Akabane, Roger Immich, Richard W. Pazzi, Edmundo R. M. Madeira and Leandro A. Villas
Sensors 2018, 18(8), 2731; https://doi.org/10.3390/s18082731 - 20 Aug 2018
Cited by 16 | Viewed by 5216
Abstract
In the traditional approach to centrality measures, also known as sociocentric, a network node usually requires global knowledge of the network topology in order to evaluate its importance. This makes such an approach difficult to deploy in large-scale or highly dynamic networks. For this reason, another concept known as egocentric has been introduced, which analyses the social environment surrounding individuals (through the ego-network). In other words, this type of network has the benefit of using only locally available knowledge of the topology to evaluate the importance of a node. In this approach, each network node evaluates its importance with sub-optimal accuracy. However, such accuracy may be enough for a given purpose, for instance, a vehicle selection mechanism (VSM) that finds, in a distributed fashion, the best-ranked vehicles in the network after each topology change. To confirm that egocentric measures can be a viable alternative for implementing a VSM, a case study was carried out to validate the effectiveness and viability of this mechanism for a distributed information management system. To this end, we used the egocentric betweenness measure as a mechanism for selecting the most appropriate vehicle to carry out the tasks of information aggregation and knowledge generation. The performance results confirm that a VSM is extremely useful for VANET applications, with two major contributions: (i) reduced bandwidth consumption; and (ii) robustness to highly dynamic topologies. Another contribution of this work is a thorough study implementing and evaluating how well egocentric betweenness performs in comparison to the sociocentric measure in VANETs. Evaluation results show that in highly dynamic topologies the egocentric betweenness measure yields scores with a high degree of similarity to the sociocentric approach. Full article
(This article belongs to the Special Issue Algorithm and Distributed Computing for the Internet of Things)
Show Figures

Figure 1: An illustration of the ego-network (local subgraph), where n represents the ego and the nodes (1, 2, 3, 4 and 5) denote the alters.
Figure 2: A classical graph example [5].
Figure 3: The betweenness centrality score of each node displayed as a temporal graph, according to the evaluation scenario (traffic density of 150 vehicles/km²).
Figure 4: An illustrative example of the beacon packet exchange among the vehicles to calculate the egocentric betweenness score. In this case, the grey vehicle, labelled 1, is doing the calculation.
Figure 5: The simulation setup layers.
Figure 6: Map clipping from Erlangen, Germany. The figure on the left was imported from OSM; the one on the right represents the road topology used in our simulations.
Figure 7: Scatterplot of sociocentric vs. egocentric betweenness for each vehicle traffic density.
Figure 8: CDF of the egocentric betweenness scores in relation to the vehicle traffic densities.
Figure 9: CDF of the number of one-hop neighbours in relation to the vehicle traffic densities.
Figure 10: The relationship between the egocentric betweenness score and the number of one-hop neighbours.
Figure 11: CDF of the time window duration in which there were no changes to the egocentric betweenness score, in relation to the vehicle traffic densities.
Figure 12: Average time window duration in which there were no changes to the egocentric betweenness scores.
Figure 13: Performance evaluation of the network under different traffic densities.
Figure 14: Average trip time of vehicles vs. densities.
Figure 15: Impact on channel busy ratio vs. densities.
Figure 16: Operation flowchart of the proposed solution.
Figure 17: Knowledge generation results. EBM, egocentric betweenness measure.
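">
Egocentric betweenness as used above admits a compact sketch: each node evaluates ordinary betweenness, but only on its ego-network (itself, its one-hop neighbours, and the links among them). The toy graph and the networkx-based helper below are illustrative; the paper's vehicles build the same subgraph from beacon exchanges rather than from a global graph object.

```python
# Minimal sketch of egocentric betweenness and VSM-style node selection.
import networkx as nx

G = nx.Graph([(1, 2), (1, 3), (1, 4), (2, 3), (4, 5), (1, 5)])

def egocentric_betweenness(G: nx.Graph, ego) -> float:
    ego_net = nx.ego_graph(G, ego, radius=1)   # ego + alters + their links
    return nx.betweenness_centrality(ego_net, normalized=False)[ego]

scores = {v: egocentric_betweenness(G, v) for v in G}
best = max(scores, key=scores.get)             # node a VSM-like rule would pick
```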
21 pages, 7360 KiB  
Article
Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots
by Varuna De Silva, Jamie Roche and Ahmet Kondoz
Sensors 2018, 18(8), 2730; https://doi.org/10.3390/s18082730 - 20 Aug 2018
Cited by 78 | Viewed by 11256
Abstract
Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasound sensors, and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically, and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free-space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free-space detection algorithm. Full article
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
Show Figures

Figure 1: Examples of sensor data to be fused. (a) Image from the wide-angle camera; (b) 3D point cloud from LiDAR.
Figure 2: Side view of the sensor setup.
Figure 3: Top view of the sensor setup.
Figure 4: The sequence of steps in the Gaussian Process (GP)-based resolution matching algorithm.
Figure 5: The experimental test bed used for data capture.
Figure 6: Steps in the extrinsic calibration process. (a) Camera setup and calibration target; (b) the corresponding LiDAR capture; (c) red points on the edges of the circle; (d) identification of the circle centers.
Figure 7: Projection of LiDAR data onto the spherical video frame.
Figure 8: Screen shots of different scenarios captured by the quadbike in motion. (a) Scenario 1: indoor tiled floor with a cluttered environment; (b) Scenario 2: outdoor environment on artificial grass; (c) Scenario 3: indoor environment with a carpet floor and extreme shadows; (d) Scenario 4: indoor environment with less clutter; (e) Scenario 5: outdoor environment with a change of surface; (f) Scenario 6: indoor environment with a moving object in the LiDAR blind spot.
Figure 9: Visual illustration of the outputs of the Gaussian Process-based resolution matching. (a) Aligned camera data and LiDAR points; (b) output of the resolution matching step; (c) uncertainty associated with depth estimation.
Figure 10: Comparison of free-space maps for different algorithms for three different scenarios. (a) LiDAR only; (b) image classifier only; (c) fusion-based approach.
Figure 11: Comparison of free-space maps for different algorithms for three different scenarios. (a) LiDAR only; (b) image classifier only; (c) fusion-based approach.
Figure 12: Occupancy grid map based on temporally combined LiDAR data. (a) Scenario 1; (b) Scenario 5.
Figure 13: The effect of temporal window width on LiDAR frame accumulation. As the temporal window width is changed from 1 s to 2 s, the spread of values increases. (a) Temporal window width of 1 s; (b) temporal window width of 2 s.
Figure 14: The effect of motion artefacts on video capture. (a) When the vehicle is in motion; (b) when the vehicle is still.
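">
A minimal sketch of the GP-based resolution matching step, under simplified assumptions of our own (synthetic LiDAR samples already projected to image coordinates, an RBF-plus-noise kernel chosen for illustration): sparse depths are interpolated to a dense grid, and the predictive standard deviation provides the quantifiable uncertainty mentioned above.

```python
# Minimal sketch of GP regression for depth resolution matching.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
uv = rng.uniform(0, 64, size=(200, 2))            # sparse LiDAR hits (u, v)
depth = 2.0 + 0.01 * uv[:, 0] + 0.05 * rng.standard_normal(200)

gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.01),
                              normalize_y=True).fit(uv, depth)

gu, gv = np.meshgrid(np.arange(64), np.arange(64))
grid = np.column_stack([gu.ravel(), gv.ravel()])
mean, std = gp.predict(grid, return_std=True)     # dense depth + uncertainty
dense_depth = mean.reshape(64, 64)
uncertainty = std.reshape(64, 64)
```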
13 pages, 4324 KiB  
Article
Analysis and Modeling Methodologies for Heat Exchanges of Deep-Sea In Situ Spectroscopy Detection System Based on ROV
by Xiaorui Liu, Fujun Qi, Wangquan Ye, Kai Cheng, Jinjia Guo and Ronger Zheng
Sensors 2018, 18(8), 2729; https://doi.org/10.3390/s18082729 - 20 Aug 2018
Cited by 10 | Viewed by 4161
Abstract
In recent years, cabled ocean observation technology has been increasingly used for deep-sea in situ research. As sophisticated sensors and measurement systems are deployed on remotely operated vehicles (ROVs), maintaining a stable condition inside the measurement system cabin becomes a requirement. In this paper, we introduce an ROV-based Raman spectroscopy measurement system (DOCARS) and discuss how its cabin condition develops during the profile measurement process. A straightforward and practical modeling methodology is proposed to realize predictive control of this trend. The methodology is based on the Autoregressive Exogenous (ARX) model and is optimized using a series of sea-going test data. The fitting results demonstrate that the model can reliably predict the development trend of DOCARS's cabin condition during profile measurement processes. Full article
Show Figures

Figure 1: Schematic diagram of the DOCARS system.
Figure 2: ROV installation solutions for the DOCARS system. (a) Back mounting case. (b) Side mounting case.
Figure 3: Map location of the sea-going area, constructed with GeoMapApp (www.geomapapp.org). (a) Cold seep field at Formosa Ridge in the South China Sea [3]. (b) PACManus hydrothermal vent area at Paul Ridge in the Bismarck Sea [1].
Figure 4: Effect of cabin temperature on Raman spectrum data. (a) Typical Raman spectrum and substance peaks. (b) Development trend of cabin temperature and the SO₄²⁻ Raman peak frequency (DOCARS system, #52 floating process). (c) Development trend among cabin temperature and the Raman peak wavelengths of the sulfate ion and sapphire (OUC-Raman instrumental node).
Figure 5: Comparison between real measurement and prediction data groups (Tc − Ts). (a) Comparison for the floating profile process of the #31 ROV task. (b) Comparison for the diving profile process of the #52 ROV task. (c) Comparison for the floating profile process of the #35 ROV task.
Figure 6: Comparison between real measurement and prediction data groups (Tc − Ts). (a) Comparison for the diving profile process of the #31 ROV task. (b) Comparison for the floating profile process of the #52 ROV task.
Figure 7: Development of the cabin atmosphere during the diving profile process of the #52 ROV task. (a) Temperature (Tc) and pressure (P) development trend inside the cabin across the time sequence t. (b) The ratio of Tc to P.
Figure 8: Data validation between the real measurement group and the predicted simulation group (Tc − P). (a) Comparison for the diving profile measurement process of the #52 ROV task. (b) Comparison for the floating profile measurement process of the #31 ROV task. (c) Comparison for the floating profile measurement process of the #35 ROV task. (d) Trend of P/Tc during the diving profile process of the #52 ROV task. (e) Trend of P/Tc during the floating profile process of the #31 ROV task. (f) Trend of P/Tc during the floating profile process of the #35 ROV task.
Figure 9: Data validation between the real measurement group and the predicted simulation group (Tc − P). (a) Comparison for the diving profile measurement process of the #31 ROV task. (b) Comparison for the floating profile measurement process of the #52 ROV task. (c) Trend of P/Tc during the diving profile process of the #31 ROV task. (d) Trend of P/Tc during the floating profile process of the #52 ROV task.
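">
The ARX idea above can be sketched in a few lines: the cabin variable is regressed on its own previous sample and a previous exogenous input, and the fitted model is then simulated forward for prediction. The first-order model structure and the meaning of the input u are assumptions for illustration, not the paper's identified model.

```python
# Minimal sketch of fitting and simulating a first-order ARX model.
import numpy as np

def fit_arx(y: np.ndarray, u: np.ndarray):
    """Fit y[k] = a*y[k-1] + b*u[k-1] + c by least squares; return (a, b, c)."""
    X = np.column_stack([y[:-1], u[:-1], np.ones(len(y) - 1)])
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return coef

def predict_arx(coef, y0: float, u: np.ndarray) -> np.ndarray:
    """Simulate the fitted model forward from the initial condition y0."""
    a, b, c = coef
    y = [y0]
    for uk in u[:-1]:
        y.append(a * y[-1] + b * uk + c)
    return np.array(y)
```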
18 pages, 4636 KiB  
Article
Absolute Position Coding Method for Angular Sensor—Single-Track Gray Codes
by Fan Zhang, Hengjun Zhu, Kan Bian, Pengcheng Liu and Jianhui Zhang
Sensors 2018, 18(8), 2728; https://doi.org/10.3390/s18082728 - 19 Aug 2018
Cited by 6 | Viewed by 6934
Abstract
Single-track Gray codes (STGCs) are a type of absolute position coding for novel angular sensors, offering the single-track property over traditional Gray codes and mono-difference over linear feedback shift register codes. However, given that the coding theory of STGCs is incomplete, STGC construction is still a challenging task even though the concept was defined more than 20 years ago. Published coding theories and results on STGCs cover two types, namely necklace and self-dual necklace ordering, which are collectively called k-spaced head STGCs. To find new codes, three constraints on generating sequences are proposed to accelerate the searching algorithm, and the complete search result for length-6 STGCs is obtained for the first time. Among all 132 length-6 STGCs, two novel types of STGCs with non-k-spaced heads are found, and the basic structures of these codes for general length n are proposed and defined as twin-necklace and triplet-necklace ordering STGCs. Furthermore, d-plet-necklace ordering STGCs, which unify all the known STGCs by changing the value of d, are also defined. Finally, a single-track absolute encoder prototype is designed to prove that STGCs are as convenient as traditional position coding methods. Full article
(This article belongs to the Special Issue Smart Sensors and Smart Structures)
Show Figures

Figure 1: Disc pattern and reading head distribution of an absolute encoder using a length-4 Gray code: (a) Schematic of the coding disc, where the white area indicates "0", and the black area indicates "1"; (b) Schematic of the reading disc, where the four small circles denote the four reading heads corresponding to the four coding tracks.
Figure 2: Disc pattern and reading head distribution of an absolute encoder using a length-11 period-2046 STGC. (a) Schematic of the coding disc, where the white area indicates "0", and the black area indicates "1"; (b) Schematic of the reading disc, where the 11 small circles denote the 11 reading heads and are evenly distributed around the coding track.
Figure 3: Disc pattern and reading head distribution of an absolute encoder using a length-6 period-36 necklace ordering STGC. (a) Schematic of the coding disc, where the white area indicates "0", and the black area indicates "1"; (b) Schematic of the reading disc, where the six small circles denote the six reading heads and are evenly distributed around the whole coding track.
Figure 4: Disc pattern and reading head distribution of an absolute encoder using a length-6 period-36 necklace ordering STGC. (a) Schematic of the coding disc, where the white area indicates "0", and the black area indicates "1"; (b) Schematic of the reading disc, where the six small circles denote the six reading heads and are evenly distributed around the half coding track.
Figure 5: Disc pattern and reading head distribution of an absolute encoder using a length-6 period-48 twin-necklace ordering STGC: (a) Schematic of the coding disc, where the white area indicates "0", and the black area indicates "1"; (b) Schematic of the reading disc, where the six small circles denote the six reading heads, and the sub-cycle of the head interval is two.
Figure 6: Disc pattern and reading head distribution of an absolute encoder using a length-6 period-48 triplet-necklace ordering STGC: (a) Schematic of the coding disc, where the white area indicates "0", and the black area indicates "1"; (b) Schematic of the reading disc, where the six small circles denote the six reading heads, and the sub-cycle of the head interval is three.
Figure 7: Disc pattern and slit disc of the prototype using a length-8 period-128 STGC: (a) Schematic of the coding disc, where the white area indicates "0", and the black area indicates "1"; (b) Schematic of the slit disc, where the eight slits are arranged right over the eight reading heads. This disc, except for the eight slits, should be black, but to show the slits clearly we use white instead.
Figure 8: The structural schematic of the prototype.
Figure 9: Experimental system.
Figure 10: Outputs of the eight reading heads.
Figure 11: Error of the outputs of the prototype.
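">
The single-track property discussed above has a simple operational reading: all heads sense one cyclic track at evenly spaced offsets, and the codeword read at each position must be unique. The sketch below checks exactly that for a toy period-12 track; the track is illustrative and is not one of the paper's STGCs.

```python
# Minimal sketch of decoding a single-track code with n evenly spaced heads.
from typing import List, Tuple

def read_codewords(track: List[int], n_heads: int) -> List[Tuple[int, ...]]:
    period = len(track)
    assert period % n_heads == 0, "heads must divide the period evenly"
    spacing = period // n_heads
    return [tuple(track[(pos + h * spacing) % period] for h in range(n_heads))
            for pos in range(period)]

track = [0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0]   # toy period-12 track
codes = read_codewords(track, n_heads=6)
is_absolute = len(set(codes)) == len(codes)     # every position distinguishable
```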
19 pages, 1157 KiB  
Article
AOA-Based Three-Dimensional Multi-Target Localization in Industrial WSNs for LOS Conditions
by Ruonan Zhang, Jiawei Liu, Xiaojiang Du, Bin Li and Mohsen Guizani
Sensors 2018, 18(8), 2727; https://doi.org/10.3390/s18082727 - 19 Aug 2018
Cited by 19 | Viewed by 5996
Abstract
High-precision and fast relative positioning of a large number of mobile sensor nodes (MSNs) is crucial for smart industrial wireless sensor networks (SIWSNs). However, positioning multiple targets simultaneously in three-dimensional (3D) space has been less explored. In this paper, we propose a new approach, called Angle-of-Arrival (AOA)-based Three-dimensional Multi-target Localization (ATML). The approach utilizes two anchor nodes (ANs) with antenna arrays to receive the spread-spectrum signals broadcast by MSNs. We design a multi-target single-input multiple-output (MT-SIMO) signal transmission scheme and a simple iterative maximum likelihood estimator (MLE) to estimate the 2D AOAs of multiple MSNs simultaneously. We further adopt the skew-line theorem of 3D geometry to mitigate the AOA estimation errors in determining locations. We have conducted extensive simulations and also developed a testbed of the proposed ATML. The numerical and field experiment results verify that the proposed ATML can locate multiple MSNs simultaneously with high accuracy and efficiency by exploiting the spread-spectrum gain and antenna array gain. The ATML scheme does not require extra hardware or synchronization among nodes, and it effectively mitigates interference and multipath effects in complicated industrial environments. Full article
(This article belongs to the Collection Smart Industrial Wireless Sensor Networks)
Show Figures

Figure 1: The network model of the ATML scheme for node localization in a SIWSN.
Figure 2: The MT-SIMO signal flow on the MSNs and ANs in the ATML scheme for 3D node localization.
Figure 3: Power delay profile and angular power spectra of the five MPCs between the MSN and AN.
Figure 4: Iterative ML estimation of the azimuth and elevation AOAs of multiple propagation paths.
Figure 5: The network coordinate system and topology when there are K = 10 MSNs.
Figure 6: AOA estimation results for various numbers of MSNs, SNRs, and antenna array dimensions.
Figure 7: Multi-target position estimation for various numbers of MSNs, SNRs, and antenna array dimensions.
Figure 8: Multi-target position estimation RMSs for various SNRs and antenna array dimensions.
Figure 9: Network deployment and CDFs of the localization errors of different methods.
Figure 10: Testbed of the ATML scheme and field experiment setup.
Figure 11: The pattern diagrams of the two cross-polarized dipoles on the (1,1) patch in the antenna array.
Figure 12: Field experiment results of AOA estimation (A: azimuth; E: elevation).
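">
The skew-line step above reduces to classical 3D geometry: each AN's azimuth/elevation estimate defines a ray, the two rays are generally skew because of estimation error, and the target estimate is the midpoint of their common perpendicular. The sketch below implements only that geometric step, with assumed AN positions and AOA values.

```python
# Minimal sketch of the skew-line midpoint for two AOA-derived rays.
import numpy as np

def aoa_to_dir(az: float, el: float) -> np.ndarray:
    """Unit direction vector from azimuth/elevation angles (radians)."""
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az),
                     np.sin(el)])

def skew_line_midpoint(p1, d1, p2, d2) -> np.ndarray:
    """Midpoint of the common perpendicular of lines p1+t*d1 and p2+s*d2."""
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b                      # ~0 only for parallel rays
    t = (b * (d2 @ w) - c * (d1 @ w)) / denom
    s = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

an1, an2 = np.array([0., 0., 3.]), np.array([10., 0., 3.])  # assumed ANs
est = skew_line_midpoint(an1, aoa_to_dir(0.4, -0.1),
                         an2, aoa_to_dir(2.8, -0.12))
```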
11 pages, 2360 KiB  
Article
Direct Detection of Toxic Contaminants in Minimally Processed Food Products Using Dendritic Surface-Enhanced Raman Scattering Substrates
by Hannah Dies, Maria Siampani, Carlos Escobedo and Aristides Docoslis
Sensors 2018, 18(8), 2726; https://doi.org/10.3390/s18082726 - 19 Aug 2018
Cited by 39 | Viewed by 7051
Abstract
We present a method for the surface-enhanced Raman scattering (SERS)-based detection of toxic contaminants in minimally processed liquid food products, through the use of a dendritic silver nanostructure produced by electrokinetic assembly of nanoparticles from solution. The dendritic nanostructure is produced on the surface of a microelectrode chip connected to an AC field with an imposed DC bias. We apply this chip to the detection of thiram, a toxic fruit pesticide, in apple juice, with a limit of detection of 115 ppb and no sample preprocessing. We also apply the chip to the detection of melamine, a toxic contaminant/food additive, with a limit of detection of 1.5 ppm in milk and 105 ppb in infant formula. All the reported limits of detection are below the recommended safe limits in food products, rendering this technique useful as a screening method to identify liquid food with hazardous amounts of toxic contaminants. Full article
(This article belongs to the Special Issue Applications of Raman Spectroscopy in Sensors)
Show Figures

Figure 1: Preparation of the surface-enhanced Raman scattering (SERS) substrate used for the detection of food contaminants in liquid food. (a) With the nanoparticle solution sitting at the tips of the microelectrodes, the gold microelectrodes are connected to the mixed AC/DC voltage supply to form the SERS-active nanostructures. (b) The nanoparticle suspension is removed, and the analyte solution is dropcast upon the surface of the microelectrodes. (c) The Raman microscope is used to analyze the gap region between the two microelectrodes, where both the SERS-active structures and the analyte have been deposited. (d) Between the microelectrode tips, dendritic silver nanostructures grow from the positively biased tip. (e) A scanning electron microscopy (SEM) image of the dendritic silver nanostructures.
Figure 2: Longevity analysis of the dendritic SERS substrates. Four spectra of R6G were taken every week from simultaneously prepared substrates, for a period of 4 weeks. Week 1 corresponds to time zero (time of dendrite preparation) + 1 week. (a) The R6G spectrum was distinctly identifiable every week for 4 weeks. These spectra were taken at a concentration of 10⁻⁵ M, with a 10 s acquisition time. (b) The average intensity of the key peak at 1360 cm⁻¹ plotted over 4 weeks. The error bars represent the standard deviation of the data.
Figure 3: The results for thiram detection in apple juice. (a) SERS spectra of thiram at varying concentrations in unprocessed apple juice. The key peak at 1384 cm⁻¹ is used for identification and quantification. (b) A log-log calibration curve of thiram in apple juice. Regression analysis yields y = 2520·x^0.5209, with an R² value of 0.9223. (c) A comparison between the SERS spectra of thiram in apple juice on our dendritic SERS surface and the SERS spectra on Ocean Optics silver SERS substrates.
Figure 4: Preparation of the sensing chip and the analyte solutions for detection in milk and infant formula. (a) The procedure for silicon surface modification with poly-L-lysine, to anchor the dendrites: (i) the starting silicon surface with a thermally grown oxide layer is plasma cleaned for 3 min, (ii) leaving negatively charged hydroxyl groups at the surface; (iii) the chip is then soaked in poly-L-lysine; (iv) the nanoparticles, coated with a layer of citrate for stabilization in solution, adhere to the poly-L-lysine coated surface. (b) The sample processing method applied for milk and infant formula: (i) spiked solutions of milk/infant formula were thoroughly mixed with a vortex mixer; (ii) acetonitrile was added to the spiked milk, followed by vortex mixing and centrifugation; (iii) the supernatant (aqueous layer) was removed for analysis, and the protein pellet was discarded.
Figure 5: SERS detection of melamine in milk and infant formula. (a) The SERS spectra of melamine in milk. The key peak at 686–703 cm⁻¹ was used for identification and quantification. (b) Log-log plot for calibration of the melamine concentration in milk. Points represent an average of n = 5 spectra. Linear regression yields y = 106.41·x^0.4656, with an R² value of 0.9828. (c) The SERS spectra of melamine in infant formula. The key peak at 686–703 cm⁻¹ was used for identification and quantification. (d) Log-log plot for calibration of the melamine concentration in infant formula. Linear regression yields y = 1555.8·x^0.1544, with an R² value of 0.8883.
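">
The log-log calibration curves reported above correspond to a power-law model I = a·c^b, which is fitted as a straight line in log space and inverted for quantification. The sketch below shows this fit on made-up data points; none of the numbers are measurements from the paper.

```python
# Minimal sketch of a power-law (log-log) calibration fit and inversion.
import numpy as np

conc = np.array([0.1, 0.5, 1.0, 5.0, 10.0])          # ppm, assumed standards
intensity = np.array([55., 120., 170., 380., 520.])  # peak height (a.u.)

b, log_a = np.polyfit(np.log(conc), np.log(intensity), 1)
a = np.exp(log_a)

def quantify(peak_intensity: float) -> float:
    """Invert I = a * c**b to estimate the concentration (ppm)."""
    return (peak_intensity / a) ** (1.0 / b)

unknown_ppm = quantify(250.0)
```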
27 pages, 12620 KiB  
Article
Activity Recognition Invariant to Wearable Sensor Unit Orientation Using Differential Rotational Transformations Represented by Quaternions
by Aras Yurtman, Billur Barshan and Barış Fidan
Sensors 2018, 18(8), 2725; https://doi.org/10.3390/s18082725 - 19 Aug 2018
Cited by 17 | Viewed by 5963
Abstract
Wearable motion sensors are assumed to be correctly positioned and oriented in most existing studies. However, generic wireless sensor units, patient health and state monitoring sensors, and smartphones and watches that contain sensors can be oriented differently on the body. The vast majority of existing algorithms are not robust against placing the sensor units at variable orientations. We propose a method that transforms the recorded motion sensor sequences into a representation invariant to sensor unit orientation. The method is based on estimating the sensor unit orientation and representing the sensor data with respect to the Earth frame. We also calculate the sensor rotations between consecutive time samples and represent them by quaternions in the Earth frame. We incorporate our method in the pre-processing stage of the standard activity recognition scheme and provide a comparative evaluation with the existing methods based on seven state-of-the-art classifiers and a publicly available dataset. The standard system with fixed sensor unit orientations cannot handle incorrectly oriented sensors, resulting in an average accuracy reduction of 31.8%. Our method results in an accuracy drop of only 4.7% on average compared to the standard system, outperforming the existing approaches, which cause an accuracy degradation of between 8.4% and 18.8%. We also consider stationary and non-stationary activities separately and evaluate the performance of each method for these two groups of activities. All of the methods perform significantly better in distinguishing non-stationary activities, our method resulting in an accuracy drop of 2.1% in this case. Our method clearly surpasses the remaining methods in classifying stationary activities, where some of the methods fail noticeably. The proposed method is applicable to a wide range of wearable systems to make them robust against variable sensor unit orientations by transforming the sensor data at the pre-processing stage. Full article
(This article belongs to the Special Issue Data Analytics and Applications of the Wearable Sensors in Healthcare)
Show Figures

Graphical abstract
Figure 1: An overview of the proposed method for sensor unit orientation invariance.
Figure 2: (a) With only the acquired acceleration field vector a, there exist infinitely many solutions to the sensor unit orientation (two are shown); (b) the acquired magnetic field vector m uniquely identifies the sensor unit orientation.
Figure 3: The Earth frame illustrated on an Earth model with the acquired reference vectors.
Figure 4: The Earth and the sensor coordinate frames at two consecutive time samples with the rotational transformations relating them.
Figure 5: The Xsens MTx unit [44].
Figure 6: (a) Positioning of the MTx units on the body; (b) connection diagram of the units (the body drawing in the figure is from http://www.clker.com/clipart-male-figure-outline.html; the cables, Xbus Master, and sensor units were added by the authors).
Figure 7: Original and orientation-invariant sequences from a walking activity plotted over time. (a) Original sensor sequences; (b) sensor sequences; elements of (c) the differential rotation matrix and (d) the differential quaternion. Sequences in (b–d) are represented in the Earth frame and are invariant to sensor orientation.
Figure 8: The first 100 eigenvalues of the covariance matrix of the feature vectors sorted in descending order, calculated based on the features extracted from the data transformed according to the seven approaches.
Figure 9: Activity recognition performance for all the data transformation techniques and classifiers over all activities. The lengths of the bars represent the accuracies and the thin horizontal sticks indicate plus/minus one standard deviation over the cross-validation iterations.
Figure 10: Activity recognition performance for all the data transformation techniques and classifiers for (a) stationary and (b) non-stationary activities. The lengths of the bars represent the accuracies and the thin horizontal sticks indicate plus/minus one standard deviation over the cross-validation iterations.
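">
The differential rotation used above has a neat algebraic core: for unit orientation quaternions q[t-1] and q[t], the between-sample rotation q[t]*conj(q[t-1]) is unchanged when the unit is remounted with any fixed rotation, which is what makes the representation orientation-invariant. A minimal sketch, assuming Hamilton convention and (w, x, y, z) ordering:

```python
# Minimal sketch of differential quaternions between consecutive samples.
import numpy as np

def qmul(q, r):
    """Hamilton product of two quaternions in (w, x, y, z) ordering."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def differential_quaternions(qs: np.ndarray) -> np.ndarray:
    """qs: (T, 4) orientation sequence -> (T-1, 4) differential rotations."""
    return np.stack([qmul(qs[t], qconj(qs[t - 1])) for t in range(1, len(qs))])
```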
21 pages, 6046 KiB  
Article
Optimal Deployment of FiWi Networks Using Heuristic Method for Integration Microgrids with Smart Metering
by Esteban Inga, Miguel Campaña, Roberto Hincapié and Oswaldo Moscoso-Zea
Sensors 2018, 18(8), 2724; https://doi.org/10.3390/s18082724 - 19 Aug 2018
Cited by 7 | Viewed by 5406
Abstract
The unpredictable increase in electrical demand affects the quality of energy throughout the network. One solution is to add distributed generation units, which burn fossil fuels. While this addresses the problem immediately, the resulting CO2 emissions harm the ecosystem. A promising alternative is the integration of Distributed Renewable Energy Sources (DRES) with the conventional electrical system, thus introducing the concept of Smart Microgrids (SMG). These SMGs require a safe, reliable, and technically planned two-way communication system. This paper presents a planning-based heuristic capable of providing near-optimal bidirectional communication. The model follows the structure of a hybrid Fiber-Wireless (FiWi) network, with the purpose of obtaining the electrical parameters that help manage energy use by integrating the conventional electrical system with the SMG. The optimization model is based on clustering techniques, through the construction of balanced clusters built together with the Nearest-Neighbor Spanning Tree (N-NST) algorithm. Additionally, the Optimal Delay Balancing (ODB) model is used to minimize the end-to-end delay of each cluster. The heuristic also observes real design parameters such as capacity and coverage. Using the Dijkstra algorithm, routes are built following the shortest path. Therefore, this paper presents a heuristic able to plan the deployment of Smart Meters (SMs) through a tree-like hierarchical topology for the integration of SMG at the lowest cost. Full article
(This article belongs to the Collection Smart Industrial Wireless Sensor Networks)
Show Figures

Figure 1: FiWi network architecture for the efficient integration of smart meters. Source: the authors.
Figure 2: Near-optimal deployment of SMs using a Fiber-Wireless (FiWi) network. Source: the authors.
Figure 3: WiFi neighbor adjacency matrix, n = 512. (a) and (b) show the preliminary deployment: (a) route map and (b) representation of the adjacency matrix; (c) and (d) correspond to the scenario that minimizes the delays. Source: the authors.
Figure 4: End-to-end delay generated by each population increase, varying the capacity of each cluster, with traffic of 0.1 packet/s and L = 200 bits. Source: the authors.
Figure 5: Delay in different scenarios. (a) Delay vs. increase of users; (b) delay vs. increase of packet rate. Source: the authors.
Figure 6: Average links crossed by a data packet, with L = 800 bits and lambda = 0.1 packet/s. Source: the authors.
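">
The shortest-path routing step mentioned above is standard Dijkstra on a weighted graph; the sketch below builds routes from a few smart meters to a cluster head with networkx, using a toy topology and link weights that are assumptions for illustration.

```python
# Minimal sketch of shortest-path routing from smart meters to a cluster head.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("SM1", "SM2", 40), ("SM2", "head", 35),
                           ("SM1", "SM3", 25), ("SM3", "head", 70),
                           ("SM2", "SM3", 20)])   # weights, e.g. metres

routes = {sm: nx.dijkstra_path(G, sm, "head", weight="weight")
          for sm in ("SM1", "SM2", "SM3")}
# e.g., routes["SM1"] == ["SM1", "SM2", "head"]
```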
21 pages, 11109 KiB  
Article
Tracking Ground Targets with a Road Constraint Using a GMPHD Filter
by Jihong Zheng and Meiguo Gao
Sensors 2018, 18(8), 2723; https://doi.org/10.3390/s18082723 - 18 Aug 2018
Cited by 16 | Viewed by 4297
Abstract
The Gaussian mixture probability hypothesis density (GMPHD) filter is applied to the problem of tracking ground moving targets in clutter because of its excellent multitarget tracking performance, such as avoiding measurement-to-track association, and its easy implementation. In the existing GMPHD-based ground target tracking algorithm (the GMPHD filter incorporating map information using a coordinate transformation method, CT-GMPHD), the predicted probability density of the target state is given in road coordinates, while the target state update must be performed in Cartesian ground coordinates. Although this algorithm improves filtering performance to a certain extent, the coordinate transformation increases the algorithm's complexity and reduces its computational efficiency. To address this issue, this paper proposes two roadmap fusion algorithms that require no coordinate transformation: directional process noise fusion (DPN-GMPHD) and state constraint fusion (SC-GMPHD). The simulation results show that, compared with the existing algorithms, the two proposed roadmap fusion algorithms are more accurate and efficient for target estimation on straight and curved roads in a cluttered environment. The proposed methods are additionally applied using a cardinalized PHD (CPHD) filter and a labeled multi-Bernoulli (LMB) filter. It is found that the PHD filter performs less well than the CPHD and LMB filters, but it is also computationally cheaper. Full article
Show Figures

Figure 1: Framework of the directional process noise Gaussian mixture probability hypothesis density (DPN-GMPHD) filter.
Figure 2: Framework of the state constraint (SC)-GMPHD filter.
Figure 3: Schematic diagram of a curved road in a digital roadmap.
Figure 4: Road information (green dotted line) and target real trajectory (black solid line). "○": location at which the target starts; "△": location at which the target stops.
Figure 5: Measurements.
Figure 6: Comparison of tracking performance versus clutter rate: (a) average optimal subpattern assignment (OSPA) of the tracking algorithms; (b) running time of the tracking algorithms.
Figure 7: Tracking performance of the filters versus detection probability: (a) average OSPA of the filters; (b) running time of the filters.
Figure 8: True trajectory of the target moving on a curved road. "○": location at which the target starts; "△": location at which the target stops.
Figure 9: Filtering performance comparison of four filters for a target on a curved road: (a) standard GMPHD filter; (b) CT-GMPHD filter; (c) DPN-GMPHD filter; (d) SC-GMPHD filter.
Figure 10: Comparison of tracking performance versus clutter rate for a target on a curved road: (a) average OSPA of the tracking algorithms; (b) running time of the tracking algorithms.
Figure 11: Tracking performance of the filters versus the detection probability for a target on a curved road: (a) average OSPA of the filters; (b) running time of the filters.
Figure 12: Comparison of tracking performance versus clutter rate for a target on a curved road: (a) average OSPA of the tracking algorithms; (b) running time of the tracking algorithms.
Figure 13: Tracking performance of the filters versus detection probability for a target on a curved road: (a) average OSPA of the filters; (b) running time of the filters.
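">
The directional process noise idea in DPN-GMPHD can be sketched independently of the filter itself: the position noise covariance is stretched along the local road heading and shrunk across it, so predictions stay near the road. The variances below are illustrative, not the paper's tuning.

```python
# Minimal sketch of a road-aligned (directional) process noise covariance.
import numpy as np

def directional_noise_cov(heading_rad: float,
                          sigma_along: float = 5.0,
                          sigma_cross: float = 0.5) -> np.ndarray:
    """2-D position noise covariance aligned with the road direction."""
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    R = np.array([[c, -s], [s, c]])              # rotation to the road frame
    D = np.diag([sigma_along**2, sigma_cross**2])
    return R @ D @ R.T

Q_pos = directional_noise_cov(np.deg2rad(30.0))  # road heading of 30 degrees
```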
15 pages, 4778 KiB  
Article
Joint Design of Space-Time Transmit and Receive Weights for Colocated MIMO Radar
by Ze Yu, Shusen Wang, Wei Liu and Chunsheng Li
Sensors 2018, 18(8), 2722; https://doi.org/10.3390/s18082722 - 18 Aug 2018
Cited by 4 | Viewed by 4666
Abstract
Compared with single-input multiple-output (SIMO) radar, colocated multiple-input multiple-output (MIMO) radar can detect moving targets better by adopting waveform diversity. When a colocated MIMO radar transmits a set of orthogonal waveforms, the transmit weights are usually set to one, and the receive weights are adaptively adjusted to suppress clutter based on space-time adaptive processing technology. This paper proposes the joint design of space-time transmit and receive weights for colocated MIMO radar. The approach is based on the premise that all possible moving targets are detected by setting a lower threshold. In each direction where there may be moving targets, the space-time transmit and receive weights can be iteratively updated using the proposed approach to improve the output signal-to-interference-plus-noise ratio (SINR), which helps improve the precision of target detection. Simulation results demonstrate that the proposed method improves the output SINR by more than 13 dB. Full article
(This article belongs to the Special Issue Sensor Signal and Information Processing)
Show Figures

Graphical abstract
Figure 1: Illustration of the multiple-input multiple-output (MIMO) radar with uniform linear arrays. (a) shows the observation geometry, and (b) describes the transmit and receive processes.
Figure 2: Joint design of space-time transmit and receive weights.
Figure 3: Variations of the improvement factor (IF) with normalized frequencies. Results achieved by the proposed and conventional methods are shown in (a) and (b), respectively.
Figure 4: IF curves of the proposed and conventional methods with respect to the normalized Doppler frequency, space transmit frequency, and space receive frequency, shown in (a), (b), and (c), respectively. In each figure, the two other normalized frequencies are zero.
Figure 5: IF curves with respect to the normalized Doppler frequency.
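">
One half of the joint design above, updating the receive weights for a fixed transmit weight, coincides with the classic SINR-optimal solution w = R^-1 s / (s^H R^-1 s), with R the interference-plus-noise covariance and s the space-time steering vector. The sketch below computes that step on toy data; it is not the paper's full iterative algorithm.

```python
# Minimal sketch of SINR-optimal receive weights on toy space-time data.
import numpy as np

rng = np.random.default_rng(1)
n = 16                                           # space-time dimension
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
R = A @ A.conj().T / n + np.eye(n)               # Hermitian positive definite
s = np.exp(1j * np.pi * np.arange(n) * 0.3)      # toy steering vector

Rinv_s = np.linalg.solve(R, s)
w = Rinv_s / (s.conj() @ Rinv_s)                 # optimal receive weights

sinr_gain = np.real(s.conj() @ Rinv_s)           # resulting output SINR factor
```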
18 pages, 7953 KiB  
Article
IMU Signal Generator Based on Dual Quaternion Interpolation for Integration Simulation
by Ke Liu, Wenqi Wu, Kanghua Tang and Lei He
Sensors 2018, 18(8), 2721; https://doi.org/10.3390/s18082721 - 18 Aug 2018
Cited by 8 | Viewed by 4560
Abstract
This paper focuses on the problem of high-update-rate, high-accuracy inertial measurement unit signal generation. To be consistent with the vehicle’s kinematic and dynamic characteristics, as well as with the characteristics of post-processed global navigation satellite system pseudorange and [...] Read more.
This paper focuses on the problem of high-update-rate, high-accuracy inertial measurement unit signal generation. To be consistent with the vehicle’s kinematic and dynamic characteristics, as well as with the characteristics of post-processed global navigation satellite system pseudorange and pseudorange-rate measurements, a novel dual quaternion interpolation and analytic integration algorithm based on actual flight data is proposed. The proposed method simplifies the piecewise analytical expressions of angular rates, angular increments, and specific-force integral increments. Norm corrections are adopted as constraint conditions to guarantee the accuracy of the signals. Numerical simulations are conducted to validate the method’s performance. Full article
(This article belongs to the Section Physical Sensors)
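The interpolation step can be illustrated with ordinary quaternions: spherical linear interpolation upsamples an attitude record, and renormalization plays the role of the norm-correction constraint. A minimal sketch follows; the paper's dual-quaternion form, which also carries translation, is not reproduced here, and all sample values are illustrative:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0, q1 at fraction t."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the short arc
        q1, dot = -q1, -dot
    if dot > 0.9995:         # nearly parallel: fall back to linear interpolation
        q = q0 + t * (q1 - q0)
    else:
        theta = np.arccos(np.clip(dot, -1.0, 1.0))
        q = (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)
    return q / np.linalg.norm(q)   # norm correction keeps the attitude valid

# Upsample a low-rate attitude record between two samples (hypothetical values).
q_a = np.array([1.0, 0.0, 0.0, 0.0])
q_b = np.array([0.98, 0.0, 0.199, 0.0])
dense = [slerp(q_a, q_b, t) for t in np.linspace(0.0, 1.0, 101)]
print(np.linalg.norm(dense[50]))   # 1.0 up to floating point
```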
Show Figures

Figure 1
<p>Algorithm flow chart.</p>
Figure 2
<p>Actual flight trajectory.</p>
Figure 3
<p>Curve of the quaternion error (based on DQ).</p>
Figure 4
<p>Curve of the position error (based on DQ).</p>
Figure 5
<p>Curve of the quaternion error (based on the inverse SINS).</p>
Figure 6
<p>Curve of the position error (based on the inverse SINS).</p>
Figure 7
<p>Curve of the quaternion error (based on the quaternion).</p>
Figure 8
<p>Curve of the position error (based on the quaternion).</p>
39 pages, 28046 KiB  
Article
Sensitivity-Based Fault Detection and Isolation Algorithm for Road Vehicle Chassis Sensors
by Wonbin Na, Changwoo Park, Seokjoo Lee, Seongo Yu and Hyeongcheol Lee
Sensors 2018, 18(8), 2720; https://doi.org/10.3390/s18082720 - 18 Aug 2018
Cited by 10 | Viewed by 6127
Abstract
Vehicle control systems such as ESC (electronic stability control), MDPS (motor-driven power steering), and ECS (electronically controlled suspension) improve vehicle stability, driver comfort, and safety. Vehicle control systems such as ACC (adaptive cruise control), LKA (lane-keeping assistance), and AEB (autonomous emergency braking) have [...] Read more.
Vehicle control systems such as ESC (electronic stability control), MDPS (motor-driven power steering), and ECS (electronically controlled suspension) improve vehicle stability, driver comfort, and safety. Vehicle control systems such as ACC (adaptive cruise control), LKA (lane-keeping assistance), and AEB (autonomous emergency braking) have also been actively studied in recent years as functions that assist drivers at a higher level. These DASs (driver assistance systems) are implemented using vehicle sensors that observe the vehicle status and send signals to the ECU (electronic control unit). The failure of any system sensor therefore affects the function of its system, which not only causes discomfort to the driver but also increases the risk of accidents. In this paper, we propose a new method to detect and isolate faults in a vehicle control system. The proposed method calculates the constraints and residuals of 12 systems by applying model-based fault diagnosis to the sensors of the chassis system. To address the inaccuracy in detecting and isolating sensor faults, we applied residual sensitivity to the threshold that determines whether a fault has occurred. Moreover, we applied sensitivity analysis to the parameter semi-correlation table to derive a fault isolation table. To validate the FDI (fault detection and isolation) algorithm developed in this study, fault signals were injected and verified in a HILS (hardware-in-the-loop simulation) environment using an RCP (rapid control prototyping) device. Full article
(This article belongs to the Special Issue Sensors for Fault Detection)
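The core residual test can be sketched generically: a residual is compared against a threshold scaled by its sensitivity to the monitored signal, and a fault is flagged when the threshold is exceeded. The rule, signal names, and numbers below are illustrative, not the paper's twelve residuals:

```python
import numpy as np

def detect_fault(residual, base_threshold, sensitivity, signal):
    """
    Flag samples where |residual| exceeds a threshold scaled by the
    residual's sensitivity to the monitored signal (illustrative rule).
    """
    threshold = base_threshold + sensitivity * np.abs(signal)
    return np.abs(residual) > threshold

t = np.linspace(0, 10, 1000)
yaw_rate = 0.2 * np.sin(t)                       # hypothetical sensor signal
residual = 0.01 * np.random.default_rng(1).standard_normal(t.size)
residual[600:] += 0.3                            # injected additive sensor fault

flags = detect_fault(residual, base_threshold=0.05, sensitivity=0.1, signal=yaw_rate)
print("first flagged sample:", np.argmax(flags))  # close to sample 600
```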
Show Figures

Figure 1
<p>Hardware and analytical redundancy scheme. (<b>a</b>) Hardware redundancy scheme; (<b>b</b>) analytical redundancy scheme.</p>
Figure 2
<p>Residual calculation with output error and polynomial method. (<b>a</b>) Output error method residual calculation; (<b>b</b>) polynomial error method residual calculation.</p>
Figure 3
<p>Fault detectability (strong/weak).</p>
Figure 4
<p>Sensitivity-based fault detection and isolation algorithm scheme.</p>
Figure 5
<p>Vehicle wheel scheme.</p>
Figure 6
<p>Estimation simulation scenario 1. (<b>a</b>) Vehicle speed; (<b>b</b>) steering wheel angle.</p>
Figure 7
<p>Estimation simulation result for wheel angular speed. (<b>a</b>) Wheel angular speed—fl (front left); (<b>b</b>) wheel angular speed—fr (front right); (<b>c</b>) wheel angular speed—rl (rear left); (<b>d</b>) wheel angular speed—rr (rear right).</p>
Figure 8
<p>Estimation simulation result for lateral velocity.</p>
Figure 9
<p>Estimation simulation scenario 2. (<b>a</b>) Vehicle speed; (<b>b</b>) vehicle steering wheel angle.</p>
Figure 10
<p>Estimation simulation result for the steering wheel angle.</p>
Figure 11
<p>Vehicle normal force scheme. (<b>a</b>) Vehicle side view; (<b>b</b>) vehicle front view.</p>
Figure 12
<p>Quarter car model for normal force calculation.</p>
Figure 13
<p>Estimation simulation result for the normal force. (<b>a</b>) Normal force—fl; (<b>b</b>) normal force—fr; (<b>c</b>) normal force—rl; (<b>d</b>) normal force—rr.</p>
Figure 14
<p>Vehicle roll angle scheme.</p>
Figure 15
<p>Estimation simulation result for the roll angle and the derivative of the roll rate. (<b>a</b>) Roll angle; (<b>b</b>) derivative of the roll rate.</p>
Figure 16
<p>Sensitivity simulation result. (<b>a</b>) Sensitivity of yaw rate (residuals 1–4); (<b>b</b>) sensitivity of lateral acceleration (residuals 1, 2, 7, 8, 9, 10, 11); (<b>c</b>) sensitivity of lateral acceleration (residuals 1, 2, 11); (<b>d</b>) sensitivity of wheel angular speed—fl (residuals 1, 5); (<b>e</b>) sensitivity of body vertical acceleration—fl (residuals 1, 5).</p>
Figure 17
<p>FDI (fault detection and isolation) simulation result for yaw rate sensor (normal). (<b>a</b>) Residual 1 and threshold; (<b>b</b>) residual 2 and threshold; (<b>c</b>) residual 3 and threshold; (<b>d</b>) residual 4 and threshold.</p>
Figure 18
<p>FDI simulation result for yaw rate sensor (fault). (<b>a</b>) Residual 1 and threshold; (<b>b</b>) residual 2 and threshold; (<b>c</b>) residual 3 and threshold; (<b>d</b>) residual 4 and threshold.</p>
Figure 19
<p>FDI simulation result for longitudinal acceleration sensor (normal). (<b>a</b>) Residual 7 and threshold; (<b>b</b>) residual 8 and threshold; (<b>c</b>) residual 9 and threshold; (<b>d</b>) residual 10 and threshold.</p>
Figure 20
<p>FDI simulation result for longitudinal acceleration sensor (fault). (<b>a</b>) Residual 7 and threshold; (<b>b</b>) residual 8 and threshold; (<b>c</b>) residual 9 and threshold; (<b>d</b>) residual 10 and threshold.</p>
Figure 21
<p>FDI simulation result for lateral acceleration sensor (normal). (<b>a</b>) Residual 7 and threshold; (<b>b</b>) residual 8 and threshold; (<b>c</b>) residual 9 and threshold; (<b>d</b>) residual 10 and threshold; (<b>e</b>) residual 11 and threshold.</p>
Figure 22
<p>FDI simulation result for lateral acceleration sensor (fault). (<b>a</b>) Residual 7 and threshold; (<b>b</b>) residual 8 and threshold; (<b>c</b>) residual 9 and threshold; (<b>d</b>) residual 10 and threshold; (<b>e</b>) residual 11 and threshold.</p>
Figure 23
<p>FDI simulation result for wheel angular speed sensor—fr (normal). (<b>a</b>) Residual 2 and threshold; (<b>b</b>) residual 5 and threshold.</p>
Figure 24
<p>FDI simulation result for wheel angular speed sensor—fr (fault). (<b>a</b>) Residual 2 and threshold; (<b>b</b>) residual 5 and threshold.</p>
Figure 25
<p>FDI simulation result for wheel angular speed sensor—rr (normal). (<b>a</b>) Residual 4 and threshold; (<b>b</b>) residual 5 and threshold.</p>
Figure 26
<p>FDI simulation result for wheel angular speed sensor—rr (fault). (<b>a</b>) Residual 4 and threshold; (<b>b</b>) residual 5 and threshold.</p>
Figure 27
<p>FDI simulation result for a steering wheel angle sensor (normal). (<b>a</b>) Residual 5 and threshold; (<b>b</b>) residual 6 and threshold.</p>
Figure 28
<p>FDI simulation result for a steering wheel angle sensor (fault). (<b>a</b>) Residual 5 and threshold; (<b>b</b>) residual 6 and threshold.</p>
Figure 29
<p>FDI simulation result for body vertical acceleration sensor—fl (normal). (<b>a</b>) Residual 7 and threshold; (<b>b</b>) residual 9 and threshold; (<b>c</b>) residual 12 and threshold.</p>
Figure 30
<p>FDI simulation result for body vertical acceleration sensor—fl (fault). (<b>a</b>) Residual 7 and threshold; (<b>b</b>) residual 9 and threshold; (<b>c</b>) residual 12 and threshold.</p>
Figure 31
<p>FDI simulation result for wheel vertical acceleration sensor—fl (normal). (<b>a</b>) Residual 7 and threshold; (<b>b</b>) residual 9 and threshold.</p>
Figure 32
<p>FDI simulation result for wheel vertical acceleration sensor—fl (fault). (<b>a</b>) Residual 7 and threshold; (<b>b</b>) residual 9 and threshold.</p>
20 pages, 7344 KiB  
Article
Point Pair Feature-Based Pose Estimation with Multiple Edge Appearance Models (PPF-MEAM) for Robotic Bin Picking
by Diyi Liu, Shogo Arai, Jiaqi Miao, Jun Kinugawa, Zhao Wang and Kazuhiro Kosuge
Sensors 2018, 18(8), 2719; https://doi.org/10.3390/s18082719 - 18 Aug 2018
Cited by 52 | Viewed by 8036
Abstract
Automation of the bin picking task with robots entails the key step of pose estimation, which identifies and locates objects so that the robot can pick and manipulate the object in an accurate and reliable way. This paper proposes a novel point pair [...] Read more.
Automation of the bin picking task with robots entails the key step of pose estimation, which identifies and locates objects so that the robot can pick and manipulate the object in an accurate and reliable way. This paper proposes a novel point pair feature-based descriptor named Boundary-to-Boundary-using-Tangent-Line (B2B-TL) to estimate the pose of industrial parts, including parts whose point clouds lack key details, for example, the point cloud of the ridges of a part. The proposed descriptor utilizes the 3D point cloud data and 2D image data of the scene simultaneously, and the 2D image data can compensate for missing key details in the point cloud. Based on the descriptor B2B-TL, Multiple Edge Appearance Models (MEAM), a method using multiple models to describe the target object, is proposed to increase the recognition rate and reduce the computation time. A novel pipeline for the online computation process is presented to take advantage of B2B-TL and MEAM. Our algorithm is evaluated against synthetic and real scenes and implemented in a bin picking system. The experimental results show that our method is sufficiently accurate for a robot to grasp industrial parts and is fast enough to be used in a real factory environment. Full article
(This article belongs to the Section Physical Sensors)
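The general shape of such a descriptor can be sketched as follows: for two boundary points with tangent-line directions, the feature collects the pair distance and three angles, and quantizing it yields a hash key into the offline model table. This is the standard point pair feature structure in the spirit of B2B-TL; the exact discretization steps below are assumptions, not the paper's values:

```python
import numpy as np

def b2b_style_ppf(p1, t1, p2, t2, dist_step=0.005, angle_step=np.deg2rad(12)):
    """
    Point-pair feature in the spirit of B2B-TL: the descriptor of a boundary
    point pair is (||d||, angle(t1, d), angle(t2, d), angle(t1, t2)), where
    t1, t2 are tangent-line directions. Quantization yields a hash key.
    """
    d = p2 - p1
    dist = np.linalg.norm(d)
    dn = d / dist
    # Tangent lines are unsigned directions, hence the abs() before arccos.
    ang = lambda u, v: np.arccos(np.clip(abs(np.dot(u, v)), 0.0, 1.0))
    f = (dist, ang(t1, dn), ang(t2, dn), ang(t1, t2))
    key = (int(f[0] / dist_step),) + tuple(int(a / angle_step) for a in f[1:])
    return f, key

p1, t1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p2, t2 = np.array([0.02, 0.01, 0.0]), np.array([0.0, 1.0, 0.0])
feature, key = b2b_style_ppf(p1, t1, p2, t2)
print(key)   # identical keys index the same hash-table bucket in the offline model
```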
Show Figures

Figure 1
<p>Defective point cloud of Part A (left: the captured point cloud of the part; right: the appearance of the part). The detailed point cloud of the ridges in the part cannot be captured with the embedded 3D sensor algorithm. Thus, some previous methods fail to estimate the rotation around the center.</p>
Figure 2
<p>The full Point Pair Feature (PPF)-MEAM pipeline. This algorithm can be divided into two phases. In the offline phase, a database is constructed using the target model. In the online phase, the pose of the target part is estimated using the organized point cloud of the scene.</p>
Figure 3
<p>Visible points extracted from six viewpoints. Points <span class="html-italic">P</span><sub>f</sub> and <span class="html-italic">P</span><sub>b</sub> are located at two different sides of the part and cannot be seen simultaneously. However, in other PPF-based methods this point pair (<span class="html-italic">P</span><sub>f</sub>, <span class="html-italic">P</span><sub>b</sub>) is still calculated, and its PPF is stored in the hash table.</p>
Figure 4
<p>The definition of the B2B-TL feature. This feature differs from other PPFs because it uses points not only on line segments but also on curves. A line cannot be fitted to a point on a curve, but the tangent line can be calculated and its direction used as the direction of the point.</p>
Figure 5
<p>Comparison between the boundary extraction using the original model and multiple appearance models. (<b>a</b>) shows extracted boundary points using the original model, that is, the model with all points around the part. (<b>b</b>,<b>c</b>,<b>d</b>,<b>e</b>,<b>f</b>,<b>g</b>) show extracted boundary points using multiple appearance models. Multiple appearance models outperform the original model in terms of boundary extraction.</p>
Figure 6
<p>Save the same point pairs only once in the hash table. The appearance edge models in green and blue are extracted from different viewpoints. These two appearance edge models share some of the same points, as shown in the red box. The point pairs (<b>m</b><sub>1</sub>, <b>m</b><sub>2</sub>) and (<b>m</b><sub>3</sub>, <b>m</b><sub>4</sub>) are in reality the same point pair, but their points have different indices because they belong to different appearance models. Using the proposed encoding method, we recognized that these two point pairs are located at the same position and that only one pair needed to be saved in the hash table.</p>
Figure 7
<p>Transformation between the model and scene coordinates. <b>T</b><sub>s→g</sub><sup>−1</sup> transforms the scene reference point <b>s</b><sub>r</sub> to the origin and aligns its direction <b>n</b><sub>r</sub><sup>s</sup> to the <span class="html-italic">x</span>-axis of the intermediate coordinate system. The model reference point <b>m</b><sub>r</sub> and its direction <b>n</b><sub>r</sub><sup>m</sup> are transformed similarly by <b>T</b><sub>m→g</sub><sup>−1</sup>. Rotating the transformed referred scene point <b>T</b><sub>s→g</sub><sup>−1</sup><b>s</b><sub>i</sub> by angle <span class="html-italic">α</span> around the <span class="html-italic">x</span>-axis aligns it with the transformed referred model point <b>T</b><sub>m→g</sub><sup>−1</sup><b>m</b><sub>i</sub>. (<b>m</b><sub>r</sub>, <span class="html-italic">α</span>) is then used to cast a vote in the 2D space.</p>
Figure 8
<p>Industrial parts used to verify the proposed method. We named these parts Part A (<b>a</b>), Part B (<b>b</b>), Part C (<b>c</b>) and Part D (<b>d</b>). They are made of resin and used in a real car air-conditioning system. The appearance of these parts is complex, making pose estimation more difficult than for parts with primitive shapes.</p>
Figure 9
<p>(<b>a</b>,<b>b</b>,<b>c</b>,<b>d</b>) are the simulated scenes of the example parts. We estimated 5 poses in each scene, and (<b>e</b>,<b>f</b>,<b>g</b>,<b>h</b>) are the results of the pose estimation. The model point cloud is transformed to the scene space using the pose estimation results and rendered with different colors. These colors indicate the recommendation rank to grasp the part after considering occlusion. Models 1–5 are rendered in red, green, blue, yellow and pink, respectively.</p>
Figure 10
<p>The experimental system (<b>a</b>) was used to verify the proposed method. A color camera and 3D sensor were mounted on top (above the parts). To mitigate the effect of shadow on edge extraction, we installed two Light-Emitting Diodes (LEDs) on both sides of the box. A robot was used to perform the picking task with the gripper, as shown in (<b>b</b>).</p>
Figure 11
<p>(<b>a</b>,<b>b</b>,<b>c</b>,<b>d</b>) are the real scenes of the example parts. (<b>e</b>,<b>f</b>,<b>g</b>,<b>h</b>) are the boundary points of the scene cloud. (<b>i</b>,<b>j</b>,<b>k</b>,<b>l</b>) are the results of pose estimation. The model point clouds are transformed to the scene space using the pose results and rendered with different colors. These colors indicate the recommendation rank to grasp the part after considering the occlusion. Models 1–5 are rendered in red, green, blue, yellow and pink, respectively.</p>
Figure 12
<p>The multi-axis stage unit is used to evaluate the relative error of pose estimation. After fixing the part on the top of the stage, we could move the part along each axis with a precision of 0.01 mm and rotate it by <span class="html-italic">θ</span><sub>x</sub> with a precision of 0.1°.</p>
Figure 13
<p>(<b>a</b>,<b>b</b>) are the results of the relative error experiment. The part was moved by 5 mm, 10 mm, 15 mm and 20 mm using the stage, and pose estimation was performed after each move. The moved distance was calculated by comparing the differences in the pose results. We conducted 10 trials for each part; the average distance error is shown in (<b>a</b>). Similarly, we rotated the part by 5°, 10°, 15° and 20°, and the corresponding results are shown in (<b>b</b>).</p>
Figure 14
<p>Industrial parts in the Tohoku University 6D Pose Estimation Dataset. (<b>a</b>,<b>b</b>,<b>c</b>) are named Part 1, Part 2 and Part 3, respectively. Example synthetic scenes are shown in (<b>d</b>,<b>e</b>,<b>f</b>). (<b>g</b>,<b>h</b>,<b>i</b>) are the results of the pose estimation.</p>
18 pages, 695 KiB  
Article
Node-Identification-Based Secure Time Synchronization in Industrial Wireless Sensor Networks
by Zhaowei Wang, Peng Zeng, Linghe Kong, Dong Li and Xi Jin
Sensors 2018, 18(8), 2718; https://doi.org/10.3390/s18082718 - 18 Aug 2018
Cited by 16 | Viewed by 3758
Abstract
Time synchronization is critical for wireless sensor networks in industrial automation, e.g., event detection and process control of industrial plants and equipment need a common time reference. However, cyber-physical attacks are serious threats that can cause synchronization protocols to fail. This paper studies algorithm [...] Read more.
Time synchronization is critical for wireless sensor networks in industrial automation, e.g., event detection and process control of industrial plants and equipment need a common time reference. However, cyber-physical attacks are serious threats that can cause synchronization protocols to fail. This paper studies algorithm design and analysis for secure time synchronization in resource-constrained industrial wireless sensor networks under Sybil attacks, which cannot be well addressed by existing methods. A node-identification-based secure time synchronization (NiSTS) protocol is proposed. The main idea of this protocol is to utilize the timestamp correlation among different nodes and the uniqueness of a node’s clock skew to detect invalid information, rather than isolating suspicious nodes. In the detection process, each node takes the relative skew with respect to its public neighbor as the basis to determine whether the information is reliable and to filter invalid information. The information filtering mechanism renders NiSTS resistant to Sybil attacks and message manipulation attacks. As a completely distributed protocol, NiSTS is not sensitive to the number of Sybil attackers. Extensive simulations were conducted to demonstrate the efficiency of NiSTS and compare it with existing protocols. Full article
(This article belongs to the Section Sensor Networks)
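The filtering idea can be sketched as a relative-skew check: a node estimates the clock skew of each information source from exchanged timestamps (a least-squares slope) and discards sources whose skew, relative to a shared public neighbor, deviates beyond a bound. The timestamps, skew values, and the acceptance bound below are synthetic, not the protocol's exact rule:

```python
import numpy as np

def relative_skew(local_ts, neighbor_ts):
    """Least-squares slope of neighbor timestamps against local ones."""
    A = np.vstack([local_ts, np.ones_like(local_ts)]).T
    slope, _ = np.linalg.lstsq(A, neighbor_ts, rcond=None)[0]
    return slope

rng = np.random.default_rng(2)
t_local = np.arange(0.0, 10.0, 0.5)
honest = 1.00002 * t_local + 0.01 + 1e-6 * rng.standard_normal(t_local.size)
sybil  = 1.00400 * t_local + 0.01 + 1e-6 * rng.standard_normal(t_local.size)

ref = relative_skew(t_local, honest)          # skew w.r.t. the shared public neighbor
for name, ts in [("honest", honest), ("suspect", sybil)]:
    dev = abs(relative_skew(t_local, ts) - ref)
    print(name, "accepted" if dev < 1e-4 else "filtered out")
```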
Show Figures

Figure 1
<p>IWSNs with a Sybil attacker.</p>
Figure 2
<p>Illustration of the detection process. Attacker <span class="html-italic">A</span> pretends to be Node 2.</p>
Figure 3
<p>Performance of the detection process under the Sybil attack.</p>
Figure 4
<p>Illustration of the effectiveness of the detection process: (<b>a</b>) area with common neighbors; and (<b>b</b>) area with Sybil attackers that may disable the detection process.</p>
Figure 5
<p>Illustration of the network scenarios for SMTS under Sybil attacks: (<b>a</b>) a network with two Sybil attackers; (<b>b</b>) at least one isolated node exists; and (<b>c</b>) no isolated nodes exist.</p>
Figure 6
<p>Maximum difference of logical skew in a reliable environment.</p>
Figure 7
<p>Maximum difference of logical offset in a reliable environment.</p>
Figure 8
<p>Maximum difference of logical skew under message manipulation attacks.</p>
Figure 9
<p>Performance of SMTS under Sybil attacks.</p>
Figure 10
<p>Performance of NiSTS under Sybil attacks.</p>
Figure 11
<p>Performance of NiSTS against different numbers of Sybil attackers generated from inside the network.</p>
Figure 12
<p>Maximum difference of logical skew for NiSTS against different numbers of Sybil attackers for a fixed network.</p>
Figure 13
<p>Maximum difference of logical offset for NiSTS against different numbers of Sybil attackers for a fixed network.</p>
14 pages, 4361 KiB  
Article
Bragg-Grating-Based Photonic Strain and Temperature Sensor Foils Realized Using Imprinting and Operating at Very Near Infrared Wavelengths
by Jeroen Missinne, Nuria Teigell Benéitez, Marie-Aline Mattelin, Alfredo Lamberti, Geert Luyckx, Wim Van Paepegem and Geert Van Steenberge
Sensors 2018, 18(8), 2717; https://doi.org/10.3390/s18082717 - 18 Aug 2018
Cited by 19 | Viewed by 4738
Abstract
Thin and flexible sensor foils are very suitable for unobtrusive integration with mechanical structures and allow monitoring of, for example, strain and temperature while minimally interfering with the operation of those structures. Electrical strain gages have long been used for this purpose, but optical [...] Read more.
Thin and flexible sensor foils are very suitable for unobtrusive integration with mechanical structures and allow monitoring of, for example, strain and temperature while minimally interfering with the operation of those structures. Electrical strain gages have long been used for this purpose, but optical strain sensors based on Bragg gratings are gaining importance because of their improved accuracy, insusceptibility to electromagnetic interference, and multiplexing capability, which drastically reduces the amount of interconnection cabling required. This paper reports on thin polymer sensor foils that can be used as photonic strain gage or temperature sensors, using several Bragg grating sensors multiplexed in a single polymer waveguide. Compared to commercially available optical fibers with Bragg grating sensors, our planar approach allows fabricating multiple, closely spaced sensors in well-defined directions in the same plane, realizing photonic strain gage rosettes. While most of the reported Bragg grating sensors operate around a wavelength of 1550 nm, the sensors in the current paper operate around a wavelength of 850 nm, where the material losses are lowest. This was accomplished by imprinting gratings with pitches of 280 nm, 285 nm, and 290 nm at the core-cladding interface of an imprinted single-mode waveguide with cross-sectional dimensions of 3 × 3 µm<sup>2</sup>. We show that it is possible to realize high-quality imprinted single-mode waveguides with gratings and only a very thin residual layer, which is important to limit bend losses and cross-talk with neighboring waveguides. The strain and temperature sensitivities of the Bragg grating sensors were found to be 0.85 pm/µε and −150 pm/°C, respectively. These values correspond well with those of previously reported sensors based on the same materials but operating around 1550 nm, taking into account that sensitivity scales with wavelength. Full article
(This article belongs to the Special Issue Printed Sensors 2018)
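With the reported sensitivities, the Bragg wavelength shift obeys Δλ ≈ k<sub>ε</sub>·ε + k<sub>T</sub>·ΔT, so two gratings, one of them isolated from strain, suffice to separate strain from temperature. A sketch under that assumed dual-grating arrangement (the sensitivities are the paper's values; the setup and numbers are illustrative):

```python
import numpy as np

K_EPS = 0.85e-3   # strain sensitivity, nm per microstrain (0.85 pm/µε, from the paper)
K_T = -0.150      # temperature sensitivity, nm/°C (-150 pm/°C, from the paper)

def strain_and_temperature(d_lambda_loaded, d_lambda_reference):
    """
    Solve [Δλ1; Δλ2] = [[K_EPS, K_T], [0, K_T]] @ [ε; ΔT] for a strained
    grating plus a strain-isolated reference grating (illustrative setup).
    """
    M = np.array([[K_EPS, K_T],
                  [0.0,   K_T]])
    eps, dT = np.linalg.solve(M, [d_lambda_loaded, d_lambda_reference])
    return eps, dT

# A 500 µε load at +2 °C shifts the loaded grating by 0.425 - 0.300 = 0.125 nm.
eps, dT = strain_and_temperature(0.125, -0.300)
print(f"strain = {eps:.0f} µε, ΔT = {dT:.1f} °C")
```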
Show Figures

Figure 1
<p>Layout of the strain sensor rosette showing the orientation of the grating sensors and the waveguide running perpendicularly over the gratings.</p>
Figure 2
<p>Calculated reflection spectrum for grating sensor 2 with dimensions as shown in the inset.</p>
Figure 3
<p>Process flow for the imprinting of waveguides with grating sensors.</p>
Figure 4
<p>System for reading out Bragg grating sensor foils.</p>
Figure 5
<p>(<b>a</b>) Tensile test setup and (<b>b</b>) close-up view of the mounted sensor foil and extensometer.</p>
Figure 6
<p>Cross-sectional microscope images of 3 µm thick imprinted waveguides having different target widths (as mentioned in the column header) and corresponding mode field profiles imaged at the waveguide end-face (<span class="html-italic">λ</span> = 850 nm).</p>
Figure 7
<p>(<b>a</b>) Gratings imprinted in the waveguide core layer, visualized before applying the top cladding layer; (<b>b</b>) a magnified view of the diagonally oriented grating; (<b>c</b>) imprinted OrmoCore grating cross-section.</p>
Figure 8
<p>Reflection spectrum of the sensor foil showing the Bragg wavelength of the three sensors multiplexed in the same waveguide.</p>
Figure 9
<p>(<b>a</b>) Typical reflection spectrum recorded by the readout system. (<b>b</b>) Resulting Bragg wavelength as a function of applied strain.</p>
Figure 10
<p>Bragg wavelength shift recorded for sensor 3, in tension and in compression during the cantilever loading experiment.</p>
Figure 11
<p>Bragg wavelength shift as a function of temperature for sensor 1 and sensor 2.</p>
12 pages, 3212 KiB  
Article
Discrimination of Milks with a Multisensor System Based on Layer-by-Layer Films
by Coral Salvo-Comino, Celia García-Hernández, Cristina García-Cabezón and Maria Luz Rodríguez-Méndez
Sensors 2018, 18(8), 2716; https://doi.org/10.3390/s18082716 - 18 Aug 2018
Cited by 20 | Viewed by 4769
Abstract
A nanostructured electrochemical bi-sensor system for the analysis of milks has been developed using the layer-by-layer technique. The non-enzymatic sensor [CHI+IL/CuPc<sup>S</sup>]<sub>2</sub> is a layered material containing a negative film of the anionic sulfonated copper phthalocyanine (CuPc<sup>S</sup>) acting [...] Read more.
A nanostructured electrochemical bi-sensor system for the analysis of milks has been developed using the layer-by-layer technique. The non-enzymatic sensor [CHI+IL/CuPc<sup>S</sup>]<sub>2</sub> is a layered material containing a negative film of the anionic sulfonated copper phthalocyanine (CuPc<sup>S</sup>) acting as the electrocatalytic material, and a cationic layer containing a mixture of an ionic liquid (IL) (1-butyl-3-methylimidazolium tetrafluoroborate), which enhances the conductivity, and chitosan (CHI), which facilitates enzyme immobilization. The biosensor ([CHI+IL/CuPc<sup>S</sup>]<sub>2</sub>-GAO) results from the immobilization of galactose oxidase on top of the LbL layers. FTIR, UV–vis, and AFM have confirmed the proposed structure, and cyclic voltammetry has demonstrated the amplification caused by the combination of materials in the film. The sensors have been combined to form an electronic tongue for milk analysis. Principal component analysis has revealed the ability of the sensor system to discriminate between milk samples with different lactose contents. Using PLS-1 calibration models, correlations have been found between the voltammetric signals and chemical parameters measured by classical methods. PLS-1 models provide excellent correlations with lactose content. Additional information about other components, such as fats, proteins, and acidity, can also be obtained. The method developed is simple, and the short response time permits its use in assaying milk samples online. Full article
(This article belongs to the Special Issue Supramolecular Chemistry for Sensors Application)
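The chemometric step can be sketched with scikit-learn: PCA scores give the discrimination map, and a PLS-1 regression calibrates lactose against the voltammetric features. The data below are synthetic stand-ins for the voltammograms; component counts are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
# Synthetic stand-in: 24 milk samples x 50 voltammogram features.
X = rng.standard_normal((24, 50))
lactose = rng.uniform(0.0, 5.0, 24)        # g/100 mL, synthetic reference values
X[:, 0] += 2.0 * lactose                    # embed a lactose-correlated response

pca = PCA(n_components=2)
scores = pca.fit_transform(X)               # 2D discrimination map, one point per sample
print("variance captured by PC1+PC2:", round(pca.explained_variance_ratio_.sum(), 2))

pls = PLSRegression(n_components=3).fit(X, lactose)   # PLS-1 calibration model
print("R^2 of PLS-1 lactose model:", round(pls.score(X, lactose), 3))
```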
Show Figures

Graphical abstract
Figure 1
<p>Structure of the [CHI+IL/CuPc<sup>S</sup>]<sub>2</sub>-GAO biosensor. Scheme of the reaction.</p>
Figure 2
<p>(<b>a</b>) UV–vis spectra of [CHI+IL/CuPc<sup>S</sup>]<sub>n</sub> films with increasing number of layers (<span class="html-italic">n</span> = 2, 4, 8, 12, 16, 20, 24, 28, 32). The inset shows the correlation between absorbance at 617 nm and the number of layers; (<b>b</b>) comparison of the spectra of [CHI+IL/CuPc<sup>S</sup>]<sub>2</sub> (solid red line) and of [CHI+IL/CuPc<sup>S</sup>]<sub>2</sub>-GAO (dotted black line).</p>
Figure 3
<p>FTIR spectra of LbL films for (<b>a</b>) [CHI+IL/CuPc<sup>S</sup>]<sub>8</sub> films and (<b>b</b>) [CHI+IL/CuPc<sup>S</sup>]<sub>8</sub>-GAO films.</p>
Figure 4
<p>AFM topographic 2D images of an LbL [CHI-IL/CuPc<sup>S</sup>]<sub>12</sub> film.</p>
Figure 5
<p>Cyclic voltammograms of (<b>a</b>) bare ITO (solid black line) and [CHI+IL/CuPc<sup>S</sup>]<sub>2</sub> film (dashed blue line), and (<b>b</b>) ITO-GAO (solid red line) and [CHI+IL/CuPc<sup>S</sup>]<sub>2</sub>-GAO (dotted black line), immersed in galactose 10<sup>−4</sup> mol·L<sup>−1</sup> in 0.01 mol·L<sup>−1</sup> phosphate buffer, pH 7. Scan rate 100 mV·s<sup>−1</sup>.</p>
Figure 6
<p>Cyclic voltammograms of [CHI-IL/CuPc<sup>S</sup>]<sub>2</sub> (dotted black line), and for [CHI-IL/CuPc<sup>S</sup>]<sub>2</sub>-GAO (solid blue line) sensor immersed in sample SSnolac<sub>2</sub>. Scan rate 100 mV·s<sup>−1</sup>.</p>
Figure 7
<p>PCA plots for milk samples obtained from voltammetric responses in milks of sample Wlac<sub>1</sub>: first day (●), fourth day (★); sample Wlac<sub>2</sub>: first day (<span style="color:red">●</span>), fourth day (<span style="color:red">★</span>); sample SSnolac<sub>1</sub>: first day (<span style="color:#00BFFF;">●</span>), fourth day (<span style="color:#00BFFF;">★</span>); and sample SSnolac<sub>2</sub>: first day (<span style="color:green">●</span>), fourth day (<span style="color:green">★</span>).</p>
Figure 8
<p>Loading plot of PCA performed from milk samples using the sensor [CHI+IL/CuPc<sup>S</sup>]<sub>2</sub> and the biosensor [CHI+IL/CuPc<sup>S</sup>]<sub>2</sub>-GAO.</p>
Figure 9
<p>(<b>a</b>) Explained variance of lactose content as a function of the number of latent variables; (<b>b</b>) linear correlation between measured and predicted lactose.</p>
19 pages, 4156 KiB  
Article
A Contactless Sensor for Pacemaker Pulse Detection: Design Hints and Performance Assessment
by Emilio Andreozzi, Gaetano D. Gargiulo, Antonio Fratini, Daniele Esposito and Paolo Bifulco
Sensors 2018, 18(8), 2715; https://doi.org/10.3390/s18082715 - 18 Aug 2018
Cited by 18 | Viewed by 7923
Abstract
Continuous monitoring of pacemaker activity can provide valuable information to improve patients’ follow-up. Concise information is stored in some types of pacemakers, whereas ECG can provide more detailed information, but requires electrodes and cannot be used for continuous monitoring. This study highlights the [...] Read more.
Continuous monitoring of pacemaker activity can provide valuable information to improve patients’ follow-up. Concise information is stored in some types of pacemakers, whereas ECG can provide more detailed information, but requires electrodes and cannot be used for continuous monitoring. This study highlights the possibility of a continuous monitoring of pacemaker pulses by sensing magnetic field variations due to the current pulses. This can be achieved by means of a sensor coil positioned near the patient’s thorax without any need for physical contact. A simplified model of coil response to pacemaker pulses is presented in this paper, along with circuits suitable for pulse detection. In vitro tests were carried out using real pacemakers immersed in saline solution; experimental data were used to assess the accuracy of the model and to evaluate the sensor performance. It was found that the coil signal amplitude decreases with increasing distance from the pacemaker lead wire. The sensor was able to easily perform pacemaker spike detection up to a distance of 12 cm from the pacemaker leads. The stimulation rate can be measured in real time with high accuracy. Since any electromagnetic pulse triggers the same coil response, EMI may corrupt sensor measurements and thus should be discriminated. Full article
(This article belongs to the Section Biosensors)
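The coil behaves approximately as a series RLC circuit excited by an EMF proportional to dI/dt, so each pulse edge produces a decaying sinusoid; spike detection then reduces to a comparator threshold, and the stimulation rate follows from the spacing of detected onsets. A synthetic sketch of that chain, where all waveform parameters (ring frequency, decay, pulse width, threshold) are illustrative rather than the paper's measured values:

```python
import numpy as np

fs = 500_000                           # sampling rate, Hz (illustrative)
t = np.arange(0, 2.5, 1 / fs)          # 2.5 s observation window

def damped_sine(t0, f0=20e3, tau=2e-4, amp=1.0):
    """Coil response to one pulse edge: a decaying sinusoid (RLC ringing)."""
    s = np.zeros_like(t)
    m = t >= t0
    s[m] = amp * np.exp(-(t[m] - t0) / tau) * np.sin(2 * np.pi * f0 * (t[m] - t0))
    return s

# Two pacing pulses 1 s apart (60 pulses/min); each pulse has a rising and a
# falling edge 0.5 ms apart, with EMF responses of opposite sign.
signal = sum(damped_sine(t0) - damped_sine(t0 + 0.5e-3) for t0 in (0.2, 1.2))
signal += 0.02 * np.random.default_rng(4).standard_normal(t.size)

above = signal > 0.5                                   # comparator threshold
edges = np.flatnonzero(above[1:] & ~above[:-1])        # threshold crossings
onsets = edges[np.insert(np.diff(edges) > fs // 1000, 0, True)]  # group within 1 ms
rate_bpm = 60 * fs / np.diff(onsets).mean()
print(f"estimated stimulation rate: {rate_bpm:.0f} pulses/min")
```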
Show Figures

Graphical abstract
Figure 1
<p>Schematic representation of the sensor operating principle.</p>
Figure 2
<p>Magnetic coupling of an infinitely long current wire with a coil.</p>
Figure 3
<p>(<b>a</b>) Waveform of the stimulation current; (<b>b</b>) waveform of the derivative of the stimulation current.</p>
Figure 4
<p>Electromotive force (EMF) pulse (blue line) and coil response (red line) obtained from a simulation of the equivalent RLC circuit model of the coil used for the experimental tests.</p>
Figure 5
<p>Measurement setup. (<b>a</b>) Top view; (<b>b</b>) side view.</p>
Figure 6
<p>Pick-up coil.</p>
Figure 7
<p>Amplifier circuit based on the INA217 instrumentation amplifier. The input loop is the RLC series circuit model of the coil, with a shunt capacitor to adjust the resonance frequency and a resistor to ground to bias the input of the amplifier. R<sub>G</sub> is the gain regulation resistor.</p>
Figure 8
<p>Schematic representation of the architecture proposed for the sensor.</p>
Figure 9
<p>Complete analog front-end schematic: the amplifier circuit feeds the amplified signal to the TLC3702, which compares it with the reference voltage, adjusted by the user with a potentiometer.</p>
Figure 10
<p>The assembled prototype of the sensor.</p>
Figure 11
<p>(<b>a</b>) Pulse waveform of the St. Jude Medical™ Accent MRI pacemaker, showing a pair of negative falling edges. (<b>b</b>) Detail of the pulse rising edge and the corresponding coil response. (<b>c</b>) Detail of the pulse falling edge and the corresponding coil response. As expected, there is a pair of negative damped sine pulses corresponding to the pair of falling edges, i.e., two negative EMF pulses.</p>
Figure 12
<p>In the upper panel (blue line), the coil response acquired during the tests is shown. In the lower panel (red line), the coil response computed with the simulation is shown.</p>
Figure 13
<p>Comparison between experimental points and the regression function curve.</p>
Figure A1
<p>Geometrical representation of the infinitely long current wire with a coil, as seen from the top.</p>
17 pages, 3536 KiB  
Article
Reliability Modeling for Humidity Sensors Subject to Multiple Dependent Competing Failure Processes with Self-Recovery
by Jia Qi, Zhen Zhou, Chenchen Niu, Chunyu Wang and Juan Wu
Sensors 2018, 18(8), 2714; https://doi.org/10.3390/s18082714 - 18 Aug 2018
Cited by 16 | Viewed by 4420
Abstract
Recent developments in humidity sensors have heightened the need for reliability. Since many products, such as humidity sensors, experience multiple dependent competing failure processes (MDCFPs) with self-recovery, this paper proposes a new general reliability model. Previous research into MDCFPs has primarily focused [...] Read more.
Recent developments in humidity sensors have heightened the need for reliability. Since many products, such as humidity sensors, experience multiple dependent competing failure processes (MDCFPs) with self-recovery, this paper proposes a new general reliability model. Previous research into MDCFPs has primarily focused on the processes of degradation and random shocks, which are appropriate for most products. However, the existing reliability models for MDCFPs cannot fully characterize the failure processes of products with significant self-recovery, such as humidity sensors, leading to an underestimation of reliability. In this paper, the effect of self-recovery on degradation was analyzed using a conditional probability. A reliability model for soft failure with self-recovery was obtained. Then, combined with the model of hard failure due to random shocks, a general reliability model with self-recovery was established. Finally, reliability tests of humidity sensors were presented to verify the proposed reliability model. Reliability modeling for products subject to MDCFPs that considers self-recovery can provide a better understanding of the failure mechanism and offers an alternative method to predict the reliability of products. Full article
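The structure of such a model can be sketched by Monte Carlo: a sample survives time t if no single shock exceeds the hard-failure threshold D and the linear degradation plus accumulated shock damage, reduced here by a simple self-recovery factor, stays below the soft-failure threshold H. The recovery term below is a deliberate simplification of the paper's conditional-probability treatment, and every parameter value is illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def reliability(t, n_sim=20000, lam=2.5e-5, H=0.00125, D=1.5,
                beta=8.5e-9, recovery=0.2):
    """
    Monte Carlo MDCFP sketch: hard failure when any single shock exceeds D;
    soft failure when linear degradation plus shock damage (scaled down by an
    illustrative self-recovery factor) exceeds H. Shocks arrive as a Poisson
    process with rate lam.
    """
    survive = 0
    for _ in range(n_sim):
        n = rng.poisson(lam * t)
        shocks = rng.normal(1.2, 0.2, n)             # shock loads (hypothetical)
        if n and shocks.max() > D:                    # hard failure
            continue
        damage = rng.normal(1e-4, 2e-5, n).clip(min=0)
        total = beta * t + (1 - recovery) * damage.sum()
        if total < H:
            survive += 1
    return survive / n_sim

for t in (1e4, 5e4, 1e5):
    print(f"R({t:.0e} h) ~ {reliability(t):.3f}")
```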
Show Figures

Figure 1
<p>MDCFPs of humidity sensors. (<b>a</b>) Soft failure process of humidity sensors; (<b>b</b>) Hard failure process of humidity sensors.</p>
Figure 2
<p>Comparison of <span class="html-italic">R</span>(<span class="html-italic">t</span>) for different models for example I.</p>
Figure 3
<p>Comparison of <span class="html-italic">f</span>(<span class="html-italic">t</span>) for different models for example I.</p>
Figure 4
<p>Sensitivity analysis of <span class="html-italic">R</span>(<span class="html-italic">t</span>) on <span class="html-italic">τ</span> for example I.</p>
Figure 5
<p>Sensitivity analysis of <span class="html-italic">R</span>(<span class="html-italic">t</span>) on <span class="html-italic">D</span> for example I.</p>
Figure 6
<p>Sensitivity analysis of <span class="html-italic">R</span>(<span class="html-italic">t</span>) on <span class="html-italic">H</span> for example I.</p>
Figure 7
<p>Sensitivity analysis of <span class="html-italic">R</span>(<span class="html-italic">t</span>) on <span class="html-italic">λ</span> for example I.</p>
Figure 8
<p>Comparison of <span class="html-italic">R</span>(<span class="html-italic">t</span>) for different models for example II.</p>
Figure 9
<p>Comparison of <span class="html-italic">f</span>(<span class="html-italic">t</span>) for different models for example II.</p>
Figure 10
<p>Sensitivity analysis of <span class="html-italic">R</span>(<span class="html-italic">t</span>) on <span class="html-italic">τ</span> for example II.</p>
Figure 11
<p>Sensitivity analysis of <span class="html-italic">R</span>(<span class="html-italic">t</span>) on <span class="html-italic">D</span> for example II.</p>
Figure 12
<p>Sensitivity analysis of <span class="html-italic">R</span>(<span class="html-italic">t</span>) on <span class="html-italic">H</span> for example II.</p>
Figure 13
<p>Sensitivity analysis of <span class="html-italic">R</span>(<span class="html-italic">t</span>) on <span class="html-italic">λ</span> for example II.</p>
21 pages, 4040 KiB  
Article
Automatic Groove Measurement and Evaluation with High Resolution Laser Profiling Data
by Lin Li, Wenting Luo, Kelvin C. P. Wang, Guangdong Liu and Chao Zhang
Sensors 2018, 18(8), 2713; https://doi.org/10.3390/s18082713 - 17 Aug 2018
Cited by 6 | Viewed by 5540
Abstract
Grooving is widely used to improve airport runway pavement skid resistance in wet weather. However, runway grooves deteriorate over time under the combined effects of traffic loading, climate, and weather, which poses a potential safety risk during aircraft [...] Read more.
Grooving is widely used to improve airport runway pavement skid resistance in wet weather. However, runway grooves deteriorate over time under the combined effects of traffic loading, climate, and weather, which poses a potential safety risk during aircraft takeoff and landing. Accordingly, periodic measurement and evaluation of groove performance are critical for runways to maintain adequate skid resistance. Nevertheless, such evaluation is difficult to implement due to the lack of sufficient technologies to identify shallow or worn grooves and slab joints. This paper proposes a new strategy to automatically identify airport runway grooves and slab joints using high resolution laser profiling data. First, a K-means clustering-based filter and a moving-window traversal algorithm are developed to locate the deepest point of each potential dip (including noises, true grooves, and slab joints). Subsequently, the improved moving-average filter and traversal algorithms are used to determine the left and right endpoint positions of each identified dip. Finally, the modified heuristic method is used to separate out slab joints from the identified dips, and the polynomial support vector machine is introduced to distinguish noises from the candidate grooves (including noises and true grooves), so that PCC slab-based runway safety evaluation can be performed. The performance of the proposed strategy is compared with that of two other methods, and findings indicate that the new method is more powerful in runway groove and joint identification, with an F-measure of 0.98. This study would be beneficial for airport runway groove safety evaluation and the subsequent maintenance and rehabilitation of airport runways. Full article
(This article belongs to the Section Physical Sensors)
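The deepest-point search can be sketched as a sliding-window minimum test: a sample is a candidate dip bottom if it is the minimum of its window and lies sufficiently far below the local surface level. The profile, window size, and depth criterion below are illustrative, not the paper's calibrated values:

```python
import numpy as np

def find_dips(profile, spacing_mm=0.1, min_depth=2.0, win_mm=8.0):
    """
    Slide a window along an elevation profile (heights in mm, samples
    spacing_mm apart) and keep points that are both the window minimum and
    at least min_depth below the local surface level. Consecutive hits
    closer than one window width are merged into a single dip.
    """
    half = int(win_mm / spacing_mm / 2)
    dips = []
    for i in range(half, len(profile) - half):
        window = profile[i - half:i + half + 1]
        if (profile[i] == window.min()
                and window.max() - profile[i] >= min_depth
                and (not dips or i - dips[-1] > 2 * half)):
            dips.append(i)
    return dips

# Synthetic slab profile: flat surface with 6 mm wide, 6 mm deep grooves every 38 mm.
x = np.arange(0, 400, 0.1)                        # mm
profile = np.zeros_like(x)
for c in np.arange(20, 400, 38):
    profile[np.abs(x - c) < 3] = -6.0
profile += 0.1 * np.random.default_rng(6).standard_normal(x.size)

print("grooves found:", len(find_dips(profile)))  # expect 10
```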
Show Figures

Figure 1
<p>Photos of (<b>a</b>) FAA laser-based profiling system (Courtesy of photo in [<a href="#B12-sensors-18-02713" class="html-bibr">12</a>]); (<b>b</b>) point laser profiling principle.</p>
Figure 2
<p>Schematic of the new methodology for concrete slab-based groove identification and performance evaluation.</p>
Figure 3
<p>Development and implementation of the K-means clustering-based filter: (<b>a</b>) grooving data; (<b>b</b>) three clusters; (<b>c</b>) data above the dashed line used for smoothing; (<b>d</b>) the smoothing effect on grooving data.</p>
Figure 4
<p>Comparison of smoothing results with the moving average filter and the new filter.</p>
Figure 5
<p>Determination of the deepest point inside dips.</p>
Figure 6
<p>Diagram of the determination of the two endpoints of a dip.</p>
Figure 7
<p>Diagram of the calculation of a dip dimension.</p>
Figure 8
<p>Diagram of the separation of slab joints from the identified dips with the modified heuristic method.</p>
Figure 9
<p>Example of the separation of joints and noises: (<b>a</b>) raw grooving data; (<b>b</b>) identified dips within PLRoGN; (<b>c</b>) identified joint after noise removal; (<b>d</b>) flowchart for the separation of joints and noises.</p>
Figure 10
<p>Comparison of the separation of noises and true grooves with three feature vectors: (<b>a</b>) linear model; (<b>b</b>) polynomial model.</p>
Figure 11
<p>Test grooving data and its close-up view.</p>
Figure 12
<p>Identification result comparison: (<b>a</b>) original grooving data; (<b>b</b>) missed groove with Li’s method; (<b>c</b>) missed and fake grooves with the ProGroove software; (<b>d</b>) identification result with the new method.</p>
Figure 13
<p>Plots of the severely worn grooves on slab 17 and their identification results.</p>
Figure 14
<p>Plots of the groove depths along runway slabs and the corresponding percentage of grooves that need to be maintained.</p>
Figure 15
<p>Plot of the groove volume distribution along the test runway.</p>
9 pages, 3198 KiB  
Article
Adhesive-Free Bonding of Monolithic Sapphire for Pressure Sensing in Extreme Environments
by Jihaeng Yi
Sensors 2018, 18(8), 2712; https://doi.org/10.3390/s18082712 - 17 Aug 2018
Cited by 13 | Viewed by 4441
Abstract
This paper presents a monolithic sapphire pressure sensor that is constructed from two commercially available sapphire wafers through a combination of reactive-ion etching and wafer bonding. A Fabry–Perot (FP) cavity is sealed fully between the adhesive-free bonded sapphire wafers and thus acts as [...] Read more.
This paper presents a monolithic sapphire pressure sensor that is constructed from two commercially available sapphire wafers through a combination of reactive-ion etching and wafer bonding. A Fabry–Perot (FP) cavity is sealed fully between the adhesive-free bonded sapphire wafers and thus acts as a pressure transducer. A combination of standard silica fiber, bonded sapphire wafers and free-space optics is proposed to couple the optical signal to the FP cavity of the sensor. The pressure in the FP cavity is measured by applying both white-light interferometry and diaphragm deflection theory over a range of 0.03 to 3.45 MPa at room temperature. With an all-sapphire configuration, the adhesive-free bonded sapphire sensor is expected to be suitable for in-situ pressure measurements in extreme harsh environments. Full article
(This article belongs to the Special Issue Sensors and Materials for Harsh Environments)
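The two relations behind the measurement can be sketched directly: white-light interferometry gives the cavity depth from two adjacent interference peaks, d = λ₁λ₂/(2n(λ₂ − λ₁)), and small-deflection plate theory converts the depth change into pressure via w₀ = 3(1 − ν²)a⁴P/(16Eh³). The material constants and geometry below are assumed, nominal values, not the prototype's dimensions:

```python
# Sketch: recover pressure from a white-light FP spectrum, assuming a clamped
# circular-diaphragm model. Constants are nominal sapphire-like values.
E = 345e9        # Young's modulus, Pa (assumed)
nu = 0.29        # Poisson's ratio (assumed)
a = 1.5e-3       # diaphragm radius, m (illustrative)
h = 150e-6       # diaphragm thickness, m (illustrative)
n_air = 1.0      # refractive index of the cavity medium

def cavity_depth(lam1, lam2, n=n_air):
    """FP depth from two adjacent interference peaks: d = lam1*lam2 / (2n(lam2-lam1))."""
    return lam1 * lam2 / (2 * n * (lam2 - lam1))

def pressure_from_deflection(w0):
    """Invert small-deflection plate theory: w0 = 3(1-nu^2) a^4 P / (16 E h^3)."""
    return w0 * 16 * E * h**3 / (3 * (1 - nu**2) * a**4)

d_ref = cavity_depth(850e-9, 862.05e-9)     # unpressurized cavity (~30 µm)
d_load = cavity_depth(850e-9, 862.10e-9)    # cavity shortened under pressure
w0 = d_ref - d_load
print(f"deflection {w0*1e9:.1f} nm -> pressure {pressure_from_deflection(w0)/1e6:.2f} MPa")
```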
Show Figures

Figure 1
<p>Schematic of the pressure sensor test system. Broadband light from a halogen lamp is delivered through a Multi-Mode optical Fiber (MMF) through a 3-dB coupler and into the sapphire wafer.</p>
Figure 2
<p>Bonded sapphire sample loaded in the test chamber. Broadband light is focused on the center of the bonded sapphire wafer.</p>
Figure 3
<p>(<b>a</b>) Schematic of the bonded sapphire wafer. (<b>b</b>) Photo of the prototype sapphire sensor.</p>
Figure 4
<p>(<b>a</b>) Schematic of the small diaphragm deflection. (<b>b</b>) Schematic of the small deflection in the sensor cavity under uniform pressure: <span class="html-italic">b</span> is the thickness of the flat sapphire wafer, <span class="html-italic">a</span> is the maximum radius at which the deflection is measured, <span class="html-italic">h</span> is the thickness of the sapphire wafer containing the cavity, <span class="html-italic">r</span> is the radius from the center of the cavity at the measurement point, and <span class="html-italic">w</span><sub>0</sub> is the maximum deflection.</p>
Figure 5
<p>Schematic of the Fabry–Perot (FP) interferometric sensor: <span class="html-italic">I</span> is the normal incident light; the light reflects from two reflectors, <span class="html-italic">R</span><sub>1</sub> and <span class="html-italic">R</span><sub>2</sub>; <span class="html-italic">n</span> is the refractive index of the cavity medium; and <span class="html-italic">d</span> is the depth of the air gap in the cavity.</p>
Figure 6
<p>Three cycles of dynamic pressure testing and calibration of the sensor prototype.</p>
Figure 7
<p>Sensor resolution measurement: data taken at 1-min intervals at constant pressure with the chamber pressure maintained at 1.39 MPa.</p>
Figure 8
<p>Sensing cavity leakage test: data taken at constant pressure over 12 h.</p>