From the start of 2016, the journal uses article numbers instead of page numbers to identify articles. If you are required to add page numbers to a citation, you can do so using a colon in the format [article number]:1–[last page], e.g., 10:1–20.

Sensors, Volume 16, Issue 11 (November 2016) – 213 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
2031 KiB  
Article
Service Demand Discovery Mechanism for Mobile Social Networks
by Dapeng Wu, Junjie Yan, Honggang Wang and Ruyan Wang
Sensors 2016, 16(11), 1982; https://doi.org/10.3390/s16111982 - 23 Nov 2016
Cited by 4 | Viewed by 4795
Abstract
In the last few years, the demand for wireless data services over mobile networks has been soaring at a rapid pace. In Mobile Social Networks (MSNs), in particular, users can discover adjacent users to establish temporary local connections and share already-downloaded content with each other, thereby offloading the service demand. Due to the partitioned topology, intermittent connectivity and social features of such networks, service demand discovery is challenging. Specifically, service demand discovery is exploited to identify the best relay user through service registration, service selection and service activation. In order to maximize the utilization of limited network resources, a hybrid service demand discovery architecture based on a Virtual Dictionary User (VDU) is proposed in this paper. Based on historical movement data, users can discover their relationships with others. Subsequently, according to user activity, a VDU is selected to facilitate the service registration procedure. Further, service information outside of a home community can be obtained through a Global Active User (GAU) to support service selection. To provide Quality of Service (QoS), the Service Providing User (SPU) is chosen among multiple candidates. Numerical results show that, compared with other classical service discovery algorithms, the proposed scheme improves the successful service demand discovery ratio by 25% while reducing overheads.
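The VDU selection step lends itself to a tiny illustration. The sketch below is a toy model, not the paper's algorithm: the user names, the encounter log, and the activity metric (raw contact counts) are all hypothetical stand-ins for the movement-history-based activity measure the abstract describes.

```python
from collections import Counter

# Hypothetical encounter log: (user_a, user_b) contact events over time.
encounters = [("u1", "u2"), ("u1", "u3"), ("u2", "u3"),
              ("u1", "u4"), ("u1", "u2"), ("u3", "u4")]

activity = Counter()
for a, b in encounters:
    activity[a] += 1   # each contact raises both participants' activity
    activity[b] += 1

vdu, score = activity.most_common(1)[0]
print(f"selected VDU: {vdu} (contacts: {score})")  # -> u1 (4 contacts)
```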
Figures:
Figure 1: Service registration procedures.
Figure 2: Service selection procedures.
Figure 3: Service activation procedures.
Figure 4: Service Register Success Probability (SRSProb). SASD: Social Attribute-aware Service demand Discovery mechanism.
Figure 5: Service Query Success Probability (SQSProb).
Figure 6: Service Discovery Success Probability (SDSProb).
Figure 7: Overhead ratio.
10302 KiB  
Article
Radius and Orientation Measurement for Cylindrical Objects by a Light Section Sensor
by Youdong Chen and Chongxu Liu
Sensors 2016, 16(11), 1981; https://doi.org/10.3390/s16111981 - 23 Nov 2016
Cited by 9 | Viewed by 7912
Abstract
In this paper, an efficient method based on a light section sensor is presented for measuring the radii and orientations of cylindrical objects in a robotic application. With this method, cylindrical objects can be measured under special conditions, such as when they are welded to other objects or in the presence of interferences. Firstly, the measurement data are roughly identified and accurately screened to effectively recognize ellipses. Secondly, the data are smoothed and homogenized to eliminate the effect of laser line loss or jump and to minimize the influence of measurement data inhomogeneity on the ellipse fitting. Finally, ellipse fitting is carried out to obtain the radii and orientations of the cylindrical objects. Measurement experiments and results demonstrate the effectiveness of the proposed radius and orientation measurement method for cylindrical objects.
(This article belongs to the Section Physical Sensors)
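The final ellipse-fitting step can be illustrated compactly: a laser plane cutting a cylinder at an angle traces an ellipse whose minor semi-axis equals the cylinder radius and whose major-axis direction encodes the orientation. The sketch below uses a direct least-squares conic fit (Fitzgibbon-style) on synthetic section points; it is a generic illustration, not the paper's screening-plus-fitting pipeline, and the radius, tilt, and noise level are made up.

```python
import numpy as np

def fit_ellipse(x, y):
    """Direct least-squares conic fit (Fitzgibbon-style): minimize the
    algebraic error subject to the ellipse constraint 4AC - B^2 = 1."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D
    C = np.zeros((6, 6))
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    w, v = np.linalg.eig(np.linalg.solve(S, C))
    return v[:, np.argmax(w.real)].real     # conic coefficients A..F

def ellipse_params(conic):
    """Center, semi-axis lengths and major-axis angle from a conic."""
    A, B, Cq, Dq, E, F = conic
    M = np.array([[A, B / 2.0], [B / 2.0, Cq]])
    center = np.linalg.solve(2.0 * M, np.array([-Dq, -E]))
    F0 = center @ M @ center + np.array([Dq, E]) @ center + F
    evals, evecs = np.linalg.eigh(M)
    axes = np.sqrt(-F0 / evals)             # semi-axis lengths
    major = evecs[:, np.argmax(axes)]
    return center, axes, np.arctan2(major[1], major[0])

# Synthetic laser-line section: a 16 mm radius cylinder viewed at 30 deg
# tilt yields an ellipse with semi-axes (16, 16 / cos(30 deg)).
rng = np.random.default_rng(0)
t = np.linspace(0.0, np.pi, 200)            # only half the ellipse is seen
a, b = 16.0 / np.cos(np.radians(30.0)), 16.0
x = a * np.cos(t) + rng.normal(0.0, 0.05, t.size)
y = b * np.sin(t) + rng.normal(0.0, 0.05, t.size)
center, axes, angle = ellipse_params(fit_ellipse(x, y))
print(f"estimated radius = {axes.min():.2f} mm")   # ~16.00 mm
```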
Figures:
Figure 1: Method of detection.
Figure 2: Measurement by the light section sensor: (a) schematic diagram of the overall measurement; (b) view along the CA direction.
Figure 3: Absolute Euclidean distance between points and the fitted ellipse.
Figure 4: Flowchart of the measurements.
Figure 5: Ellipse recognition.
Figure 6: The rough identification method.
Figure 7: An example of the rough identification method.
Figure 8: The approach of the accurate screening.
Figure 9: The flow of the accurate screening process.
Figure 10: Smoothing and homogenization.
Figure 11: Measurement of single cylindrical objects with radius: (a) 15 mm; (b) 16 mm; (c) 18.5 mm; (d) 29 mm.
Figure 12: Measurement of cylindrical objects welded together.
Figure 13: The measurement scene in the case of: (a) a single cylindrical object with planar interferences; (b) a single cylindrical object with complex interferences; (c) cylindrical objects welded together with complex interferences.
Figure 14: Actual measurement results.
Figure 15: The measurement result in the case of: (a) a single cylindrical object with planar interferences; (b) a single cylindrical object with complex interferences; (c) cylindrical objects welded together with complex interferences.
Figure 16: The ellipse center repeatability.
1290 KiB  
Article
A Testbed to Evaluate the FIWARE-Based IoT Platform in the Domain of Precision Agriculture
by Ramón Martínez, Juan Ángel Pastor, Bárbara Álvarez and Andrés Iborra
Sensors 2016, 16(11), 1979; https://doi.org/10.3390/s16111979 - 23 Nov 2016
Cited by 53 | Viewed by 14815
Abstract
Wireless sensor networks (WSNs) represent one of the most promising technologies for precision farming. Over the next few years, a significant increase in the use of such systems on commercial farms is expected. WSNs present a number of problems regarding scalability, interoperability, communications, connectivity with databases and data processing. Different Internet of Things (IoT) middleware platforms are appearing to overcome these challenges. This paper examines whether one of these platforms, FIWARE, is suitable for the development of agricultural applications. To the authors' knowledge, there are no works that show how to use FIWARE in precision agriculture and study its appropriateness, scalability and efficiency for this kind of application. To do so, a testbed has been designed and implemented to simulate different deployments and load conditions. The testbed is a typical FIWARE application, complete, yet simple and comprehensible enough to show the main features and components of FIWARE, as well as the complexity of using this technology. Although the testbed has been deployed in a laboratory environment, its design is based on the analysis of an Internet of Things use-case scenario in the domain of precision agriculture.
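For readers unfamiliar with FIWARE, context data in such a testbed is typically pushed to an Orion Context Broker over the NGSI-v2 REST API. The sketch below is an assumption-laden illustration rather than the paper's testbed code: the broker address, entity id/type, and attribute names are invented for the example.

```python
import requests

# Illustrative NGSI-v2 entity for one soil probe; id, type and attribute
# names are invented for this example.
entity = {
    "id": "urn:ngsi-ld:SoilProbe:plot3",
    "type": "SoilProbe",
    "moisture": {"type": "Number", "value": 23.4},     # volumetric %
    "temperature": {"type": "Number", "value": 18.7},  # deg C
}

# Orion Context Broker assumed to listen on localhost:1026.
r = requests.post("http://localhost:1026/v2/entities", json=entity)
r.raise_for_status()   # Orion replies "201 Created" on success
```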
Figures:
Graphical abstract
Figure 1: Context element conceptual diagram.
Figure 2: FIWARE IoT architecture.
Figure 3: FMS envisioned based on the FIWARE IoT architecture (lower part drawings are from [57]).
Figure 4: A testbed to evaluate the performance of the FIWARE platform.
Figure 5: Results obtained using the blocking method: (a) higher data transfer rate at high loads by few nodes; (b) the system exhibits increased latency as load and concurrency (number of active entities) increase; (c) better performance with respect to the number of requests served and worse with respect to payload.
Figure 6: Throughput obtained using the blocking and non-blocking methods.
11341 KiB  
Article
Representation Method for Spectrally Overlapping Signals in Flow Cytometry Based on Fluorescence Pulse Time-Delay Estimation
by Wenchang Zhang, Xiaoping Lou, Xiaochen Meng and Lianqing Zhu
Sensors 2016, 16(11), 1978; https://doi.org/10.3390/s16111978 - 23 Nov 2016
Cited by 3 | Viewed by 7139
Abstract
Flow cytometry is being applied more extensively because of the outstanding advantages of multicolor fluorescence analysis. However, the intensity measurement is susceptible to the nonlinearity of the detection method. Moreover, in multicolor analysis, it is impossible to discriminate between fluorophores that spectrally overlap; this influences the accuracy of the fluorescence pulse signal representation. Here, we focus on spectral overlap in two-color analysis, and assume that the fluorescence follows the single exponential decay model. We overcome these problems by analyzing the influence of the spectral overlap quantitatively, which enables us to propose a method of fluorescence pulse signal representation based on time-delay estimation (between fluorescence and scattered pulse signals). First, the time delays are estimated using a modified chirp Z-transform (MCZT) algorithm and a fine interpolation of the correlation peak (FICP) algorithm. Second, the influence of hardware is removed via calibration, in order to acquire the original fluorescence lifetimes. Finally, modulated signals containing phase shifts associated with these lifetimes are created artificially, using a digital signal processing method, and reference signals are introduced in order to eliminate the influence of spectral overlap. Time-delay estimation simulations and fluorescence signal representation experiments are conducted on fluorescently labeled cells. Treating the potential overlap of autofluorescence as part of the observed fluorescence spectrum, rather than distinguishing its individual influence, the results show that the calculated lifetimes with spectral overlap can be rectified from 8.28 and 4.86 ns to 8.51 and 4.63 ns, respectively, using the comprehensive approach presented in this work. These values agree well with the lifetimes (8.48 and 4.67 ns) acquired for cells stained with single-color fluorochromes. Further, these results indicate that the influence of spectral overlap can be eliminated effectively. Moreover, modulation, mixing with reference signals, and low-pass filtering are performed with a digital signal processing method, thereby obviating the need for a high-speed analog device and complex circuit system. Finally, the flexibility of the comprehensive method presented in this work is significantly higher than that of existing methods.
(This article belongs to the Section Biosensors)
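The core measurement here is the sub-sample delay between the forward-scattered and fluorescence pulses. The sketch below illustrates the general idea with plain cross-correlation plus parabolic peak interpolation; it is a simplified stand-in for, not an implementation of, the paper's MCZT and FICP algorithms, and the pulse shape, sampling rate, and delay are made-up test values.

```python
import numpy as np

def subsample_delay(ref, sig, fs):
    """Delay of `sig` relative to `ref` via cross-correlation, refined by
    a parabolic fit around the correlation peak (sub-sample resolution)."""
    n = len(ref)
    c = np.correlate(sig, ref, mode="full")      # lags -(n-1) .. (n-1)
    k = np.argmax(c)
    y0, y1, y2 = c[k - 1], c[k], c[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)  # parabolic vertex
    return ((k - (n - 1)) + frac) / fs

# Gaussian test pulses sampled at 250 MS/s with a true delay of 8.5 ns.
fs = 250e6
t = np.arange(2048) / fs
pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 40e-9) ** 2)
d = subsample_delay(pulse(2e-6), pulse(2e-6 + 8.5e-9), fs)
print(f"estimated delay = {d * 1e9:.2f} ns")     # close to 8.50 ns
```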
Figures:
Figure 1: Generation of forward-scattered and fluorescence pulses while a microsphere flows through the excitation area (fs: forward-scattered pulse; fl_1 and fl_2: fluorescence pulses of fluorochromes 1 and 2, respectively; L_0 and L_1: upper and lower limbs of the excitation area, respectively).
Figure 2: Schematic of the spectral overlap of two fluorescence signals. (a) The green and red areas represent the spectra of the fluorescence light emitted from fluorochromes 1 and 2, respectively. The spectrum of each fluorochrome is detected by a different detector channel having a fixed bandwidth, and the spectra within the bandwidth of each detector channel are detected as one sample point of the time-intensity cytometric pulse. (b) fl_1 and fl_2 represent the original fluorescence signals (time-intensity pulses); fl_a and fl_b represent the observed fluorescence signals after signal crossover; k_11 and k_12 are components of signal fl_1, which contribute to fl_a and fl_b; k_21 and k_22 are components of signal fl_2, which contribute to fl_a and fl_b. (c) The green and red pulse (time-intensity cytometric pulse) curves are the respective fl_1 and fl_2 contributions to each detector channel ((k_11 × fl_1, k_12 × fl_1) and (k_21 × fl_2, k_22 × fl_2), respectively).
Figure 3: Peak-value errors and pulse time delays introduced by three-signal crossover. ΔV_1 and ΔV_2 represent the differences in the peak values between fl_a and fl_1, and between fl_b and fl_2, respectively. The pulse time-delay details are provided in the yellow region. Δt_a and Δt_b are the pulse time delays of fl_a and fl_b (time delays of peak locations between fs and fl_a, fl_b), respectively; Δt_1 and Δt_2 are the pulse time delays of fl_1 and fl_2 (time delays of peak locations between fs and fl_1, fl_2), respectively.
Figure 4: Time-delay calibration. (a) Light-pulse generation. The LED1 and LED2 emission lights are blue and red, respectively, and L_fs and L_fl_1 are the respective forward-scattered light pulses. (b) L_fs is detected by a photodiode and processed with electric system 0; H_0(ω) is the transfer function of electric system 0 in the frequency domain. L_fl_1 is detected by a photomultiplier tube (PMT) and processed with electric system 1; H_1(ω) is the transfer function of electric system 1 in the frequency domain. Some light intensity attenuation modules are omitted in the detection of L_fl_1 with a PMT. (c) Data acquisition. The analog electrical signals (V_fs, V_fl_1) are simultaneously converted to digital signals by ADC0 and ADC1, respectively.
Figure 5: Conceptual diagram of signal detection and processing for PTDE-based fluorescence signal representation. The stages against the dark background are processed using the digital signal processing method. The cell stream is excited by the laser beam in the flow chamber, and the forward-scattered (blue arrows) and fluorescence (green and red arrows) signals from the two fluorochromes are separated with a dichroscope and filter in series. fs (blue line) is the forward-scattered pulse signal; fl_a and fl_b are the observed summation signals from the first and second channels, respectively; Δt_a and Δt_b are the pulse time delays of fl_a and fl_b, respectively; τ_1 and τ_2 are the individual lifetime components of fluorochromes 1 and 2, respectively; ϕ_1 and ϕ_2 are the phase shifts introduced by τ_1 and τ_2, respectively; τ_a and τ_b are the calculated lifetimes of fl_a and fl_b, respectively; and ϕ_a and ϕ_b are the respective phase shifts introduced by τ_a and τ_b.
Figure 6: Simulation results at each stage during PTDE. (a) Waveforms of fluorescence and forward-scattered pulse signals; the colorized solid and dashed lines are the observed fluorescence signals after crossover and the original fluorescence signals, respectively. (b) The frequency spectra are zoom-analyzed 10 times (N_1/N = 10 in Equation (6)) using MCZT. (c) Time-domain cross-correlation functions calculated using FICP (N_2/N_1 = 10 in Equations (8) and (9)).
Figure 7: Gauss fitting results for pulse signals (fs, fl_a, and fl_b). (a) The blue and black curves are the pulse signal fs and the Gauss fitting result, respectively. (b) The green and black curves are the observed fluorescence pulse signal fl_a and the Gauss fitting result, respectively. (c) The red and black curves are the observed fluorescence pulse signal fl_b and the Gauss fitting result, respectively.
Figure 8: Results of PTDE-based fluorescence signal representation at each stage (see also Figure 3 for each stage of the digital signal processing). (a) Waveforms of the forward-scattered pulse signal (fs) and the observed fluorescence pulse signals (fl_a, fl_b) from a mixture of two spectrally overlapping signals. (b) fl_a and fl_b are modulated by a cosine function with different phase shifts (ϕ_a, ϕ_b). (c) The modulated fluorescence pulse signal is mixed with a reference signal having the same angular frequency as the modulating signal, after which low-pass filtering of the signals in (c) results in (d).
Figure 9: Lifetime histograms of fluorescence pulse signals at 10 MHz modulating frequency: (a) fl_1 and (b) fl_2 lifetimes (SD: standard deviation).
11254 KiB  
Article
Ultra-Low Power Optical Sensor for Xylophagous Insect Detection in Wood
by Angel Perles, Ricardo Mercado, Juan V. Capella and Juan José Serrano
Sensors 2016, 16(11), 1977; https://doi.org/10.3390/s16111977 - 23 Nov 2016
Cited by 12 | Viewed by 6435
Abstract
The early detection of pests is key to the maintenance of high-value masterpieces and historical buildings made of wood. In this work, we present the detailed design of an ultra-low-power sensor device that permits the continuous monitoring of the presence of termites and other xylophagous insects. The operating principle of the sensor is based on the variations in reflected light induced by the presence of termites, together with specific processing algorithms that deal with the behavior of the electronics and the natural ageing of components. With a typical CR2032 lithium battery, the device lasts more than nine years, making it ideal for incorporation into more complex monitoring systems where maintenance tasks should be minimized.
(This article belongs to the Section Physical Sensors)
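As a loose illustration of the kind of processing the abstract alludes to (tracking slow electronic and ageing drift while flagging abrupt reflectance changes), the sketch below uses an exponential moving average baseline with a fixed deviation threshold. The smoothing factor, threshold, and ADC trace are hypothetical; the paper's actual algorithm is not reproduced here.

```python
def detect(samples, alpha=0.02, threshold=15):
    """Flag abrupt reflectance changes against a slowly adapting baseline.
    The baseline (EWMA) absorbs drift from ageing and temperature; it is
    only updated on quiet samples so detections do not corrupt it."""
    baseline = samples[0]
    hits = []
    for i, s in enumerate(samples):
        if abs(s - baseline) > threshold:
            hits.append(i)                        # sudden change: activity
        else:
            baseline += alpha * (s - baseline)    # track slow drift only
    return hits

# ADC counts: slow upward drift plus one sharp reflectance dip at i = 60.
trace = [500 + i // 10 for i in range(100)]
trace[60] -= 40
print(detect(trace))   # -> [60]
```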
Figures:
Figure 1: Proposed sensor diagram.
Figure 2: First operative experimental prototype of the proposed sensor.
Figure 3: Proposed sensor's final appearance.
Figure 4: Temperature and noise effects in the detection system.
Figure 5: Termite being introduced into the detection sensor.
Figure 6: Effect of termite movements within the cylinder.
Figure 7: Proposed algorithm operation on a real trace.
Figure 8: Experimental setup in the climatic chamber (a); reference pot (b).
Figure 9: Cumulated registered detections.
Figure 10: Developed node with the proposed sensor.
Figure 11: Detection counter evolution.
Figure 12: Instantaneous detection numbers.
Figure 13: Recorded values using the light detector.
Figure 14: Sensor 07 appearance after the experiment.
Figure 15: Sensor 04 appearance after experimentation.
Figure 16: Sensor 03 appearance after experimentation.
Figure 17: Electronic schematic of the sensor part (a) and electronic board implementation results (b).
3264 KiB  
Article
Improved Goldstein Interferogram Filter Based on Local Fringe Frequency Estimation
by Qingqing Feng, Huaping Xu, Zhefeng Wu, Yanan You, Wei Liu and Shiqi Ge
Sensors 2016, 16(11), 1976; https://doi.org/10.3390/s16111976 - 23 Nov 2016
Cited by 33 | Viewed by 6466
Abstract
The quality of an interferogram, which is limited by various sources of phase noise, greatly affects subsequent InSAR processing steps, such as phase unwrapping. For Interferometric SAR (InSAR) geophysical measurements, such as height or displacement, phase filtering is therefore an essential step. In this work, an improved Goldstein interferogram filter is proposed to suppress the phase noise while preserving the fringe edges. First, an adaptive filtering step, performed before frequency estimation, is employed to improve the estimation accuracy. Subsequently, to preserve the fringe characteristics, the estimated fringe frequency in each fixed filtering patch is removed from the original noisy phase. Then, the residual phase is smoothed with a modified Goldstein filter whose parameter alpha depends on both the coherence map and the residual phase frequency. Finally, the filtered residual phase and the removed fringe frequency are combined to generate the filtered interferogram, minimizing the loss of signal while reducing the noise level. The effectiveness of the proposed method is verified by experimental results based on both simulated and real data.
(This article belongs to the Section Remote Sensors)
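For context, the baseline Goldstein filter that the paper modifies weights each patch's Fourier spectrum by its smoothed magnitude raised to a power alpha. The minimal sketch below implements that baseline on a simulated patch; the paper's contribution (coherence- and frequency-adaptive alpha, fringe-frequency removal before filtering) is not reproduced here, and the window and alpha values are illustrative.

```python
import numpy as np

def goldstein_patch(phase, alpha=0.8, win=3):
    """Baseline Goldstein filter for one patch: weight the spectrum of the
    complex interferogram by its boxcar-smoothed magnitude to the power
    alpha, then transform back and take the phase."""
    spec = np.fft.fft2(np.exp(1j * phase))
    mag = np.abs(spec)
    pad = win // 2
    smooth = sum(np.roll(np.roll(mag, i, axis=0), j, axis=1)
                 for i in range(-pad, pad + 1)
                 for j in range(-pad, pad + 1)) / win ** 2
    return np.angle(np.fft.ifft2(spec * smooth ** alpha))

# Simulated 64 x 64 patch: linear fringes plus strong phase noise.
rng = np.random.default_rng(1)
y, x = np.mgrid[0:64, 0:64]
truth = 0.25 * x + 0.1 * y
noisy = np.angle(np.exp(1j * (truth + rng.normal(0.0, 0.8, truth.shape))))
filtered = goldstein_patch(noisy)
e_in = np.angle(np.exp(1j * (noisy - truth))).std()
e_out = np.angle(np.exp(1j * (filtered - truth))).std()
print(f"phase-error std: {e_in:.2f} -> {e_out:.2f} rad")  # noise reduced
```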
Figures:
Figure 1: Flowchart of the improved Goldstein filtering method based on local frequency estimation. PSD: phase standard deviation.
Figure 2: Example of the procedure of our method using a 31 × 31 pixel window. (a) Noisy phase window; (b) corresponding simulated true phase; (c) phase after prefiltering; (d) principal power spectral density of the prefiltered phase; (e) the removed principal phase component; (f) residual noisy phase; (g) filtered residual phase; (h) final processed phase patch by our method.
Figure 3: Simulated data. (a) Simulated noisy phase (cross-sections A and B, respectively representing the transitional region in azimuth and the phase jumping region in range, are further analysed in Section 3.2); (b) coherence map; (c) simulated true phase; (d) phase error image.
Figure 4: Simulated interferograms and corresponding error images using different modifications. (a) Reference Goldstein filter; (b) improved filter with Modification 1 only; (c) improved filter with Modification 2 only; (d) improved filter with Modification 3 only; (e) our method.
Figure 5: Filtered results and corresponding error images using different methods. (a) Reference Goldstein filter; (b) reference topography adaptive filter; (c) Lee filter; (d) our method.
Figure 6: Cross-sections over the simulated interferogram, where "Taf" denotes the topography adaptive filter. (a) Cross-section A in the azimuth direction; (b) cross-section B in the range direction.
Figure 7: Density functions of the phase error (filtered phase minus true phase). The phase error is wrapped to the range [−π, π).
Figure 8: Filtered results using real data. The figures in the left column show the entire interferogram; the figures in the right column are enlarged areas corresponding to the white rectangles in the left column. (From left to right and top to bottom) (a–c) noisy interferogram; (d–f) reference Goldstein filter; (g–i) topography adaptive filter; (j–l) Lee filter; (m–o) proposed method.
5617 KiB  
Article
Peroxynitrite Sensor Based on a Screen Printed Carbon Electrode Modified with a Poly(2,6-dihydroxynaphthalene) Film
by Ioana Silvia Hosu, Diana Constantinescu-Aruxandei, Maria-Luiza Jecu, Florin Oancea and Mihaela Badea Doni
Sensors 2016, 16(11), 1975; https://doi.org/10.3390/s16111975 - 23 Nov 2016
Cited by 8 | Viewed by 6273
Abstract
For the first time, the electropolymerization of 2,6-dihydroxynaphthalene (2,6-DHN) on a screen printed carbon electrode (SPCE) was investigated and evaluated for peroxynitrite (PON) detection. Cyclic voltammetry was used to electrodeposit the poly(2,6-DHN) on the carbon electrode surface. The surface morphology and structure of the poly(2,6-DHN) film were investigated by SEM and FTIR analysis, and the electrochemical features by cyclic voltammetry. The poly(2,6-DHN)/SPCE sensor showed excellent electrocatalytic activity for PON oxidation in alkaline solutions at very low potentials (0–100 mV vs. Ag/AgCl pseudoreference). An amperometric FIA (flow injection analysis) system based on the developed sensor was optimized for PON measurements, and a linear concentration range from 2 to 300 µM PON, with a LOD of 0.2 µM, was achieved. The optimized sensor inserted in the FIA system exhibited good sensitivity (4.12 nA·µM⁻¹), selectivity, stability and intra-/inter-electrode reproducibility for PON determination.
(This article belongs to the Section Chemical Sensors)
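The reported calibration makes the sensor's readout arithmetic easy to illustrate: with a linear response, concentration is the FIA peak current divided by the sensitivity. The snippet below uses the figures quoted in the abstract; the example current is invented.

```python
# Calibration figures quoted in the abstract.
SENSITIVITY_nA_PER_uM = 4.12
LINEAR_RANGE_uM = (2.0, 300.0)

def peak_current_to_concentration(i_peak_nA):
    """Convert an FIA peak current (nA) to a PON concentration (uM)."""
    c = i_peak_nA / SENSITIVITY_nA_PER_uM
    lo, hi = LINEAR_RANGE_uM
    if not lo <= c <= hi:
        raise ValueError(f"{c:.1f} uM lies outside the calibrated range")
    return c

# A hypothetical 206 nA peak corresponds to 50 uM PON.
print(f"{peak_current_to_concentration(206.0):.1f} uM")
```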
Figures:
Figure 1: (a) First CVs of the SPCE in 2 mM 2,6-DHN prepared in 0.1 M PBS pH 7.4, recorded at scan rates of 5, 10 and 20 mV·s⁻¹; (b) 10 successive CVs of the SPCE in 2 mM 2,6-DHN at a scan rate of 5 mV·s⁻¹.
Figure 2: Aspect of the SPCE modified with poly(2,6-DHN).
Figure 3: Frontal (a,b) and transversal (c,d) SEM images of the unmodified (a,c) and modified (b,d) SPCE.
Figure 4: FTIR-ATR spectra of the monomer 2,6-DHN (red line) and poly(2,6-DHN)/SPCE (black line).
Figure 5: (a) CVs for poly(2,6-DHN)/SPCE and unmodified SPCE in 0.1 M PBS pH 9.0; scan rate 100 mV·s⁻¹; potential range −0.3 V to +0.5 V; sensor modified by electropolymerization of 2,6-DHN at a scan rate of 5 mV·s⁻¹. (b) Dependence of the anodic and cathodic peak currents (I_p) on the scan rate (ν) for poly(2,6-DHN)/SPCE in 0.1 M PBS pH 9.0; scan rates 9–144 mV·s⁻¹; sensor modified by electropolymerization of 2,6-DHN at a scan rate of 5 mV·s⁻¹.
Figure 6: Influence of the electrolyte solution pH on the CVs of poly(2,6-DHN)/SPCE. PBS pH: 7.4, 9.0, 9.4, 10.0, 11.0, 12.0. Potential range −0.5 V to +0.5 V; scan rate 100 mV·s⁻¹; sensor modified by electropolymerization of 2,6-DHN at a scan rate of 20 mV·s⁻¹.
Figure 7: Cyclic voltammograms of poly(2,6-DHN)/SPCE in 0.1 M PBS pH 9.0, in the presence (blue line) and absence (black line) of 50 μM PON. Potential range −0.3 V to +0.5 V; scan rate 100 mV·s⁻¹.
Figure 8: (a) CVs of poly(2,6-DHN)/SPCE in 0.1 M PBS pH 9.0 containing 50 μM PON at scan rates from 9 to 250 mV·s⁻¹; potential range −0.3 V to +0.5 V; sensor modified by electropolymerization of 2,6-DHN at a scan rate of 5 mV·s⁻¹. (b) Plot of the anodic and cathodic peak currents vs. the square root of the scan rate (ν^(1/2)).
Figure 9: (a) FIA amperogram for different PON concentration solutions; E = +75 mV; flow rate = 0.38 mL·min⁻¹. (b) Dependence of the FIA peak height on the applied potential for different concentrations of PON; flow rate = 0.38 mL·min⁻¹. (c) Calibration graph for PON determination by FIA amperometry under optimized conditions; E = +75 mV; flow rate = 0.38 mL·min⁻¹. (d) FIA signals for 50 μM PON, 1 mM ascorbic acid and 0.1 M sodium nitrite; E = +75 mV; flow rate = 0.38 mL·min⁻¹.
Figure 10: FIA amperogram for PON added to meat extract (diluted 1/10 with PBS pH 9.0). Inset: calibration graph for PON in meat extract. E = +75 mV; flow rate = 0.38 mL·min⁻¹.
Scheme 1: Suggested electropolymerisation mechanism and structure of poly(2,6-DHN).
2723 KiB  
Article
Molecularly Imprinted Filtering Adsorbents for Odor Sensing
by Sho Shinohara, You Chiyomaru, Fumihiro Sassa, Chuanjun Liu and Kenshi Hayashi
Sensors 2016, 16(11), 1974; https://doi.org/10.3390/s16111974 - 23 Nov 2016
Cited by 7 | Viewed by 6744
Abstract
Versatile odor sensors that can discriminate among huge numbers of environmental odorants are desired in many fields, including robotics, environmental monitoring, and food production. However, odor sensors comparable to an animal’s nose have not yet been developed. An animal’s olfactory system recognizes odor clusters with specific molecular properties and uses this combinatorial information in odor discrimination. This suggests that measurement and clustering of odor molecular properties (e.g., polarity, size) using an artificial sensor is a promising approach to odor sensing. Here, adsorbents composed of composite materials with molecular recognition properties were developed for odor sensing. The selectivity of the sensor depends on the adsorbent materials, so specific polymeric materials with particular solubility parameters were chosen to adsorb odorants with various properties. The adsorption properties of the adsorbents could be modified by mixing adsorbent materials. Moreover, a novel molecularly imprinted filtering adsorbent (MIFA), composed of an adsorbent substrate covered with a molecularly imprinted polymer (MIP) layer, was developed to improve the odor molecular recognition ability. The combination of the adsorbent and MIP layer provided a higher specificity toward target molecules. The MIFA thus provides a useful technique for the design and control of adsorbents with adsorption properties specific to particular odor molecules.
(This article belongs to the Special Issue Olfactory and Gustatory Sensors)
Figures:
Figure 1: Conceptual approach for a bio-inspired odor sensing system. (a) Adsorption-separation odor sensing system; (b) resulting odor cluster map for odor evaluation [13].
Figure 2: Structure of a MIFA.
Figure 3: Procedure for MIFA fabrication.
Figure 4: Adsorption experiment system. CMS: carbon molecular sieve column for obtaining clean air. MFC: mass flow controller. Sample adsorbents were placed in the sample chamber.
Figure 5: Adsorbed odorant amount (TIC: total ion current obtained from gas chromatography–mass spectrometry/solid-phase microextraction) of polyvinyl chloride–dioctyl phthalate (PVC-DOP) adsorbents as a function of DOP content.
Figure 6: Adsorption properties of polyvinyl chloride–dioctyl phthalate (PVC-DOP) adsorbents as a function of DOP content, relative to those of polydimethylsiloxane (PDMS) adsorbents. Adsorption of (a) 2-hexanone; (b) propanoic acid; (c) benzene; (d) o-cresol.
Figure 7: Amounts of gas adsorbed by a molecularly imprinted filtering adsorbent fabricated using hexanoic acid as the template. The amounts are normalized relative to the amounts of gas adsorbed by a polyvinyl chloride–dioctyl phthalate (PVC-DOP) adsorbent without a molecularly imprinted filter.
Figure 8: Adsorption specificity of a molecularly imprinted filtering adsorbent on a polydimethylsiloxane–divinylbenzene (PDMS-DVB) adsorbent layer. The template odorant was heptanoic acid.
Figure 9: (a) Adsorption specificity of a molecularly imprinted filtering adsorbent (MIFA) for fatty acids; (b) adsorption specificity of a MIFA for alcohols. Adsorbent samples are named as in the following example: MIFA_MAA (heptanol) = MIFA with a methacrylic acid (MAA) MIF layer prepared using heptanol as the template.
3093 KiB  
Article
Design of Fresnel Lens-Type Multi-Trapping Acoustic Tweezers
by You-Lin Tu, Shih-Jui Chen and Yean-Ren Hwang
Sensors 2016, 16(11), 1973; https://doi.org/10.3390/s16111973 - 23 Nov 2016
Cited by 11 | Viewed by 8331
Abstract
In this paper, acoustic tweezers that use beamforming by a Fresnel zone plate are proposed. The performance has been demonstrated by finite element analysis, including the acoustic intensity, acoustic pressure, acoustic potential energy, gradient force, and particle distribution. The acoustic tweezers use an ultrasound beam produced by a lead zirconate titanate (PZT) transducer operating at 2.4 MHz and 100 V peak-to-peak in a water medium. The design of the Fresnel lens (zone plate) is based on air reflection, acoustic impedance matching, and the Fresnel half-wave band (FHWB) theory. This acoustic Fresnel lens can produce gradient forces and acoustic potential wells that allow the capture and manipulation of single particles or clusters of particles. Simulation results strongly indicate a good trapping ability for particles under 150 µm in diameter at the minimum-energy location. This can be useful for cell or microorganism manipulation.
(This article belongs to the Section Physical Sensors)
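The FHWB design places zone boundaries so that successive zones' paths to the focus differ by half a wavelength, giving r_n = sqrt(n·λ·F + (n·λ/2)²). The sketch below evaluates this for the stated 2.4 MHz drive in water; the sound speed (~1480 m/s) and the 10 λ focal length (suggested by the simulated focus) are assumptions, not the paper's design values.

```python
import numpy as np

c_water = 1480.0        # assumed speed of sound in water, m/s
f = 2.4e6               # transducer drive frequency, Hz
lam = c_water / f       # wavelength in water, ~0.62 mm
F = 10 * lam            # assumed focal length (simulated focus ~10 lam)

n = np.arange(1, 7)
r = np.sqrt(n * lam * F + (n * lam / 2.0) ** 2)   # FHWB zone radii
for k, rk in zip(n, r):
    print(f"zone boundary {k}: r = {rk * 1e3:.2f} mm")
```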
Figures:
Figure 1: Schematic of the acoustic tweezers with a Fresnel lens. Parylene is deposited on a PZT transducer and patterned according to the FHWB theory. Based on the acoustic impedance theory, the acoustic wave from the PZT transducer transmits through the parylene region to the water and reflects mostly at the air region. The patterned parylene membrane is oriented on the x-axis of the coordinate system.
Figure 2: Simulated acoustic intensity of the acoustic tweezers. The highest intensity of 1.15 MW/m² is at about a 10 λ focal length in the y-direction.
Figure 3: Simulated acoustic pressure of the acoustic tweezers. The maximum acoustic pressure is 1.95 MPa.
Figure 4: Simulated acoustic potential energy of the acoustic tweezers. The blue regions correspond to high-potential-energy regions, while the green regions correspond to zero-energy regions.
Figure 5: Two-dimensional simulation results of the micro-particle distribution of the Fresnel lens acoustic tweezers, where Φ is the particle diameter and t is the simulated time.
Figure 6: Simulated acoustic potential energy along the central axis of the acoustic tweezers.
Figure 7: Simulated gradient force along the central axis of the acoustic tweezers.
2065 KiB  
Article
Identification-While-Scanning of a Multi-Aircraft Formation Based on Sparse Recovery for Narrowband Radar
by Yuan Jiang, Jia Xu, Shi-Bao Peng, Er-Ke Mao, Teng Long and Ying-Ning Peng
Sensors 2016, 16(11), 1972; https://doi.org/10.3390/s16111972 - 23 Nov 2016
Cited by 3 | Viewed by 5163
Abstract
It is known that the multi-aircraft formation (MAF) identification performance of narrowband radar mainly depends on the time on target (TOT). To realize the identification task in one rotational scan with limited TOT, this paper proposes a novel identification-while-scanning (IWS) method based on sparse recovery that simultaneously maintains a high rotation speed and super-resolution for MAF identification. First, a multiple-chirp signal model is established for an MAF in a single scan, where different aircraft may have different Doppler centers and Doppler rates. Second, based on the sparsity of the MAF in the Doppler parameter space, a novel hierarchical basis pursuit (HBP) method is proposed to obtain satisfactory sparse recovery performance as well as high computational efficiency. Furthermore, the parameter estimation performance of the proposed IWS identification method is analyzed with respect to the recovery condition, signal-to-noise ratio and TOT. It is shown that an MAF can be effectively identified via HBP with a TOT of only about one hundred milliseconds for IWS applications. Finally, numerical experiment results are provided to demonstrate the effectiveness of the proposed method on both simulated and real measured data.
(This article belongs to the Special Issue Non-Contact Sensing)
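To make the sparse-recovery idea concrete, the sketch below builds a dictionary of chirp atoms over a (Doppler center, Doppler rate) grid and recovers two overlapping returns with orthogonal matching pursuit. OMP is a simple stand-in for the paper's hierarchical basis pursuit, and all grid spacings, the sampling rate, and the noise level are invented test values.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy sparse recovery used here as a
    simple stand-in for hierarchical basis pursuit."""
    resid, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(A.conj().T @ resid))))
        sub = A[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ coef
    return sorted(idx)

# Unit-norm chirp atoms on a (Doppler center, Doppler rate) grid.
fs, T = 2000.0, 0.1                    # sampling rate (Hz), TOT (s)
t = np.arange(int(fs * T)) / fs
f0s = np.arange(-500.0, 500.0, 20.0)   # Doppler centers (Hz)
frs = np.arange(-400.0, 400.0, 100.0)  # Doppler rates (Hz/s)
atoms = [np.exp(2j * np.pi * (f0 * t + 0.5 * fr * t ** 2))
         for f0 in f0s for fr in frs]
A = np.stack(atoms, axis=1) / np.sqrt(len(t))

# Two aircraft -> two chirps plus noise; recover their grid indices.
rng = np.random.default_rng(2)
truth = [37, 140]
y = A[:, truth].sum(axis=1) + 0.05 * (rng.normal(size=len(t))
                                      + 1j * rng.normal(size=len(t)))
print(omp(A, y, 2), "vs true", truth)   # expected: [37, 140]
```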
Figures:
Figure 1: MAF geometry for the narrowband radar.
Figure 2: Illustration of the HBP strategy.
Figure 3: Identification correct probability of the aircraft number versus SNR.
Figure 4: Estimation results of HBP compared to the CRLB. (a) Doppler center estimation; (b) Doppler rate estimation.
Figure 5: MAF identification rate versus TOT.
Figure 6: Correct identification probability versus MAF aircraft number. (a) T = 100 ms; (b) T = 200 ms.
Figure 7: Second-order PFT result in the f_2 − f_1 plane. (a) Coherent time T = 0.1 s; (b) coherent time T = 0.8 s.
Figure 8: Two-aircraft MAF identification result. (a) Frequency spectrum of the MAF formation; (b) MAF identification result by HBP.
Figure 9: Four-aircraft MAF identification result. (a) Frequency spectrum of the MAF formation; (b) MAF identification result by HBP.
6661 KiB  
Article
Co-Creating the Cities of the Future
by Verónica Gutiérrez, Evangelos Theodoridis, Georgios Mylonas, Fengrui Shi, Usman Adeel, Luis Diez, Dimitrios Amaxilatis, Johnny Choque, Guillem Camprodom, Julie McCann and Luis Muñoz
Sensors 2016, 16(11), 1971; https://doi.org/10.3390/s16111971 - 23 Nov 2016
Cited by 40 | Viewed by 9719
Abstract
In recent years, the evolution of urban environments, jointly with the progress of the information and communication sector, has enabled the rapid adoption of new solutions that contribute to the growing popularity of Smart Cities. Currently, the majority of the world's population lives in cities, encouraging different stakeholders within these innovative ecosystems to seek new solutions that guarantee the sustainability and efficiency of such complex environments. This work discusses how experimentation with IoT technologies and other data sources from the cities can be utilized for co-creation in the OrganiCity project, where key actors like citizens, researchers and other stakeholders shape smart city services and applications in a collaborative fashion. Furthermore, a novel architecture is proposed that enables this organic growth of the future cities, facilitating the experimentation that tailors the adoption of new technologies and services for a better quality of life, as well as agile and dynamic mechanisms for managing cities. The different components and enablers of the OrganiCity platform are presented and discussed in detail, including, among others, a portal to manage the experiment life cycle, an Urban Data Observatory to explore data assets, and an annotations component to indicate data quality, with a particular focus on the city-scale opportunistic data collection service operating as an alternative to traditional communications.
(This article belongs to the Special Issue Smart City: Vision and Reality)
Figures:
Figure 1: OrganiCity facility high-level architecture.
Figure 2: OrganiCity EaaS facility architecture instantiation.
Figure 3: Facility management framework.
Figure 4: Experimentation management framework.
Figure 5: Urban Data Observatory architecture.
Figure 6: UDO geographic browser view.
Figure 7: Assets visualization within the UDO UI: (A) data location; (B,C) data visualization; (D) asset details and metadata; (E) provider details; (F) comments.
Figure 8: Interactions between various system components and the Community Management & Incentivisation component.
Figure 9: Overview of the Opportunistic Communication integration with the OC platform.
Figure 10: Experiment list within the main view of the EP.
Figure 11: Opportunistic communication experiment details.
Figure 12: Screenshots from the OppNet smartphone application: (a) authentication; (b) experiments list; (c) network; (d) settings.
Figure 13: Geographical distribution of participants.
Figure 14: Screenshots of the OppNet visualization server.
Figure 15: Time delay: single-hop relay (blue) and end-to-end (green).
Figure 16: Contact graph of participating devices.
2389 KiB  
Article
A Laminar Flow-Based Microfluidic Tesla Pump via Lithography Enabled 3D Printing
by Mohammed-Baker Habhab, Tania Ismail and Joe Fujiou Lo
Sensors 2016, 16(11), 1970; https://doi.org/10.3390/s16111970 - 23 Nov 2016
Cited by 28 | Viewed by 12550
Abstract
The Tesla turbine and its applications in power generation and fluid flow were demonstrated by Nikola Tesla in 1913. However, its real-world implementations have been limited by the difficulty of maintaining laminar flow between rotor disks, transient efficiencies during rotor acceleration, and the lack of other applications that fully utilize the continuous flow outputs. All of the aforementioned limits of Tesla turbines can be addressed by scaling to the microfluidic flow regime. Demonstrated here is a microscale Tesla pump designed and fabricated using a Digital Light Processing (DLP) based 3D printer with 43 µm lateral and 30 µm thickness resolutions. The miniaturized pump is characterized by a low Reynolds number of 1000 and a flow rate of up to 12.6 mL/min at 1200 rpm, unloaded. It is capable of driving a mixer network to generate a microfluidic gradient. The continuous, laminar flow from Tesla turbines is well suited to the needs of flow-sensitive microfluidics, where the integrated pump will enable numerous compact lab-on-a-chip applications.
(This article belongs to the Special Issue Microfluidics-Based Microsystem Integration Research)
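As a sanity check on the reported numbers, hydraulic power is simply the delivered pressure times the volumetric flow rate; combining the abstract's unloaded flow with the 253 Pa head quoted in the figure captions reproduces the ~53 µW figure given there.

```python
pressure_Pa = 253.0               # hydraulic head at 1200 rpm (Figure 3)
flow_m3_per_s = 12.6e-6 / 60.0    # 12.6 mL/min, unloaded (abstract)
power_W = pressure_Pa * flow_m3_per_s
print(f"hydraulic power ~ {power_W * 1e6:.0f} uW")   # ~53 uW
```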
Figures:
Graphical abstract
Figure 1: 3D-printed µTesla pump. (A) The assembled pump is around a US quarter in diameter, printed in high resolution as demonstrated by the miniature engineering building of UM Dearborn; (B) two µTesla pumps were coupled magnetically to the same stir plate to ensure exact pressure delivered to a microfluidic device, and visualized under a stereomicroscope; (C) using this setup, gradients can be generated by the µTesla pump to drive a microfluidic mixer network.
Figure 2: Computer design and 3D printing of the µTesla pump. (A) The pump comprises three parts: a cap with pivot, the rotor with vias and pockets for magnets, and the enclosed housing with fluidic spout and bottom pivot. The computer design was printed in a DLP 3D printer; the print was upside down as pictured in (B,C) and subject to gravity pull. (B) To reduce surface tension pulling down large areas of the enclosed housing's bottom, substantial clearance was required along with numerous supporting posts to prevent excessive warping and detachment. (C) The rotor required extra support to keep the disks straight; an additional resin platform was added to ensure adhesion to the metal print stage.
Figure 3: Hydraulic head. The height above the µTesla pump to which the fluid was raised was characterized for rotor speeds from 60 to 1200 rpm (a 15 cm, 1/16" ID Tygon tubing was coupled for this measurement). With a pair of single magnets coupling the stir plate rotation, the rotor stalled around 900 rpm; with a pair of double magnets, stalling occurred around 1150 rpm. The total hydraulic head at 1200 rpm corresponds to 253 Pa of pressure. Error bars denote the standard deviation of 6 measurements.
Figure 4: Flow output at different loads. The flow characteristics of the µTesla pump under load were measured with different lengths of Tygon tubing (1/16" ID), and through a microfluidic mixer device. The flow rate across the mixer device is considerably lower at 0–14.6 µL/min, and plotted in a different unit. Inset: flow and fluidic resistance are inversely proportional; by extrapolating the microfluidic flow rates, the mixer's resistance is equivalent to a tubing length of 90 cm, or 1.04 MPa·s/m³.
Figure 5: Pump power at different loads. The hydraulic power is calculated as the product of hydraulic head and flow rate. Power drops across different fluidic loads were also calculated, including the load across a microfluidic mixer. The µTesla pump produced 53 µW of power at 1200 rpm across the lowest load of 15 cm; the power reduced to 37 nW across the mixer microfluidic load. The 53 µW nominal power output was more than adequate to drive the mixer device at flow rates of 0–14.6 µL/min, as shown in Figure 4.
Figure 6: Flow profile at different pump rates. To demonstrate that the pump creates laminar flow in the microfluidic mixer, a 500 µm wide section of a microchannel was imaged with gold nanoparticles in the flow stream (lines in inset). With the camera integration time fixed at 100–200 ms, the streak length was calibrated and used to calculate the flow rate across the width of the channel. Parabolic flow profiles were observed at the three rotor speeds tested.
Figure 7: µTesla-driven, Tesla-inspired mixer microfluidics. The all-Tesla-fluidic system generated gradients using blue and red dyes at flow rates as low as 3 nL/min. The mixer incorporates flow-folding structures similar to Tesla valves (CAD and enlarged inset on left). The widest gradient achieved spanned 4 mm across (20%–80% intensity), as analyzed from the RGB channels in ImageJ. Error bars denote maximum and minimum values from image analysis.
7101 KiB  
Article
Three-Dimensional Object Recognition and Registration for Robotic Grasping Systems Using a Modified Viewpoint Feature Histogram
by Chin-Sheng Chen, Po-Chun Chen and Chih-Ming Hsu
Sensors 2016, 16(11), 1969; https://doi.org/10.3390/s16111969 - 23 Nov 2016
Cited by 24 | Viewed by 8214
Abstract
This paper presents a novel 3D feature descriptor for object recognition and six-degree-of-freedom (6-DoF) pose identification in mobile manipulation and grasping applications. Firstly, a Microsoft Kinect sensor is used to capture 3D point cloud data. A viewpoint feature histogram (VFH) descriptor for the 3D point cloud data then encodes the geometry and viewpoint, so an object can be simultaneously recognized and registered in a stable pose, and the information is stored in a database. The VFH is robust to a large degree of surface noise and missing depth information, so it is reliable for stereo data. However, pose estimation fails when the object is placed symmetrically with respect to the viewpoint. To overcome this problem, this study proposes a modified viewpoint feature histogram (MVFH) descriptor that consists of two parts: a surface shape component comprising an extended fast point feature histogram, and an extended viewpoint direction component. The MVFH descriptor characterizes an object's pose and enhances the system's ability to distinguish objects with mirrored poses. Finally, once the object has been recognized and its pose roughly estimated by matching the MVFH descriptor against the database, the pose is refined using an iterative closest point (ICP) algorithm. The estimation results demonstrate that the MVFH feature descriptor allows more accurate pose estimation. The experiments also show that the proposed method can be applied in vision-guided robotic grasping systems.
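The viewpoint component that both VFH and MVFH rely on is essentially a histogram of angles between the surface normals and the central viewpoint direction. The sketch below computes only that component on synthetic normals; the surface-shape (extended FPFH) part and the paper's specific MVFH modification are omitted, and all sizes are arbitrary.

```python
import numpy as np

def viewpoint_histogram(normals, viewpoint, centroid, bins=128):
    """Histogram of cosines between surface normals and the central
    viewpoint direction: the viewpoint component of VFH-style descriptors.
    The surface-shape (extended FPFH) component is omitted here."""
    v = viewpoint - centroid
    v = v / np.linalg.norm(v)
    cosines = normals @ v                  # normals assumed unit length
    hist, _ = np.histogram(cosines, bins=bins, range=(-1.0, 1.0))
    return hist / hist.sum()               # normalized for matching

# Toy input: 500 random unit normals, sensor at the origin, cloud at z = 1.
rng = np.random.default_rng(3)
normals = rng.normal(size=(500, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
h = viewpoint_histogram(normals, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(h.sum(), h.max())   # 1.0 and the largest bin weight
```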
Show Figures

Figure 1

Figure 1
<p>Hardware configuration.</p>
Full article ">Figure 2
<p>The architecture for the proposed algorithm.</p>
Full article ">Figure 3
<p>Database of stable positions.</p>
Full article ">Figure 4
<p>Results for (<b>a</b>) ROI isolation; (<b>b</b>) plane segmentation and (<b>c</b>) statistical outlier removal.</p>
Full article ">Figure 5
<p>Object description: (<b>a</b>) viewpoint feature and (<b>b</b>) extended FPFH.</p>
Full article ">Figure 6
<p>Example of the resultant VFH for an object.</p>
Full article ">Figure 7
<p>Poses that are symmetrical along the viewing direction.</p>
Full article ">Figure 8
<p>MVFH description: (<b>a</b>) viewpoint feature and (<b>b</b>) example of the resultant MVFH for one object.</p>
Full article ">Figure 9
<p>Comparison of two descriptors (VFH and MVFH) for three poses: (<b>a</b>) when the normal direction of the object surface is identical to the viewpoint direction; (<b>b</b>) when there is a yaw angle of +30° and (<b>c</b>) when there is a yaw angle of −30°.</p>
Full article ">Figure 10
<p>Object recognition and registration results: (<b>a</b>) recognition using the MVFH in the scan (color: white) and database (color: green) point clouds; (<b>b</b>) shifting procedure and (<b>c</b>) pose refinement using ICP.</p>
Full article ">Figure 11
<p>Twelve tested cases [<a href="#B16-sensors-16-01969" class="html-bibr">16</a>].</p>
Full article ">Figure 12
<p>False recognition rate for the mirrored poses.</p>
Full article ">Figure 13
<p>The pose estimation performance.</p>
Full article ">Figure 14
<p>The experimental setup.</p>
Full article ">Figure 15
<p>The work pieces used in the experiment.</p>
Full article ">Figure 16
<p>The results for object recognition.</p>
Full article ">Figure 17
<p>Analysis of the recognition capability: (<b>a</b>) using the VFH descriptor and (<b>b</b>) using the MVFH descriptor.</p>
Full article ">Figure 17 Cont.
<p>Analysis of the recognition capability: (<b>a</b>) using the VFH descriptor and (<b>b</b>) using the MVFH descriptor.</p>
Full article ">Figure 18
<p>The results for object grasping.</p>
Full article ">
1438 KiB  
Review
State of the Art, Trends and Future of Bluetooth Low Energy, Near Field Communication and Visible Light Communication in the Development of Smart Cities
by Gonzalo Cerruela García, Irene Luque Ruiz and Miguel Ángel Gómez-Nieto
Sensors 2016, 16(11), 1968; https://doi.org/10.3390/s16111968 - 23 Nov 2016
Cited by 65 | Viewed by 15464
Abstract
The current social impact of new technologies has produced major changes in all areas of society, creating the concept of a smart city supported by an electronic infrastructure, telecommunications and information technology. This paper presents a review of Bluetooth Low Energy (BLE), Near [...] Read more.
The current social impact of new technologies has produced major changes in all areas of society, creating the concept of a smart city supported by an electronic infrastructure, telecommunications and information technology. This paper presents a review of Bluetooth Low Energy (BLE), Near Field Communication (NFC) and Visible Light Communication (VLC) and their use and influence within different areas of the development of the smart city. The document also presents a review of Big Data solutions for the management of information and the extraction of knowledge in an environment where things are connected by an “Internet of Things” (IoT) network. Lastly, we present how these technologies can be combined to benefit the development of the smart city. Full article
(This article belongs to the Special Issue Smart City: Vision and Reality)
Show Figures

Figure 1

Figure 1
<p>General structure of VLC components.</p>
Full article ">Figure 2
<p>State of the art and visionary evolution of NFC, BLE and VLC technologies.</p>
Full article ">
1185 KiB  
Article
Effects of Sampling Conditions and Environmental Factors on Fecal Volatile Organic Compound Analysis by an Electronic Nose Device
by Daniel J. C. Berkhout, Marc A. Benninga, Ruby M. Van Stein, Paul Brinkman, Hendrik J. Niemarkt, Nanne K. H. De Boer and Tim G. J. De Meij
Sensors 2016, 16(11), 1967; https://doi.org/10.3390/s16111967 - 23 Nov 2016
Cited by 30 | Viewed by 6449
Abstract
Prior to implementation of volatile organic compound (VOC) analysis in clinical practice, substantial challenges, including methodological, biological and analytical difficulties, must be faced. The aim of this study was to evaluate the influence of several sampling conditions and environmental factors on fecal VOC profiles, [...] Read more.
Prior to implementation of volatile organic compound (VOC) analysis in clinical practice, substantial challenges, including methodological, biological and analytical difficulties, must be faced. The aim of this study was to evaluate the influence of several sampling conditions and environmental factors on fecal VOC profiles, analyzed by an electronic nose (eNose). Effects of fecal sample mass, water content, duration of storage at room temperature, fecal sample temperature, number of freeze–thaw cycles and sampling method (rectal swabs vs. fecal samples) on VOC profiles were assessed by analysis of a total of 725 fecal samples by means of an eNose (Cyranose320®). Furthermore, fecal VOC profiles of a total of 1285 fecal samples from 71 infants born at three different hospitals were compared to assess the influence of center of origin on VOC outcome. We observed that all analyzed variables significantly influenced fecal VOC composition. It was feasible to capture a VOC profile using rectal swabs, although this differed significantly from fecal VOC profiles of similar subjects. In addition, the 1285 fecal VOC profiles could be significantly discriminated based on center of birth. In conclusion, standardization of methodology is necessary before fecal VOC analysis can live up to its potential as a diagnostic tool in clinical practice. Full article
(This article belongs to the Special Issue Gas Sensors for Health Care and Medical Applications)
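Editor's note: the scatterplots below are built from principal components of the raw eNose sensor responses; as a rough illustration of that style of analysis, here is a minimal scikit-learn sketch on a stand-in samples × sensors matrix (the data, group labels, and component count are placeholders, not the study's pipeline).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative input: 725 samples x 32 Cyranose-style sensor responses
rng = np.random.default_rng(0)
X = rng.normal(size=(725, 32))          # stand-in for real eNose data
groups = rng.integers(0, 2, size=725)   # e.g., two sample-mass conditions

# Standardize, then project onto the leading principal components
scores = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(X))

# Compare group means on each component (the paper tests significance per PC)
for pc in range(scores.shape[1]):
    d = scores[groups == 0, pc].mean() - scores[groups == 1, pc].mean()
    print(f"PC{pc + 1}: mean difference between groups = {d:+.3f}")
```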
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>–<b>g</b>) Scatterplot for the discrimination by electronic nose based on differences in several variables, including: (<b>a</b>) sample mass; (<b>b</b>) number of freeze–thaw cycles; (<b>c</b>) sample temperature; (<b>d</b>) water content; (<b>e</b>) duration of storage at room temperature; (<b>f</b>) rectal swabbing; and (<b>g</b>) center of origin. Axes depicted are orthogonal linear recombinations of the raw sensor data by means of principal component analysis. Illustrated axes solely comprise principal components demonstrated to be statistically significantly different for the variable concerned. Individual VOC profiles are illustrated as marked points. The intersection of the lines deriving from the individual profiles demonstrates the mean VOC profile for this specific variable. All evaluated sampling conditions had a significant influence on the detected fecal VOC profile; only dilution from 1:2 to 1:5 did not affect the outcome. Abbreviations: AMC = Academic Medical Center; MMC = Maxima Medical Center; VUmc = Vrije Universiteit medical center.</p>
Full article ">Figure 1 Cont.
<p>(<b>a</b>–<b>g</b>) Scatterplot for the discrimination by electronic nose based on differences in several variables, including: (<b>a</b>) sample mass; (<b>b</b>) number of freeze–thaw cycles; (<b>c</b>) sample temperature; (<b>d</b>) water content; (<b>e</b>) duration of storage at room temperature; (<b>f</b>) rectal swabbing; and (<b>g</b>) center of origin. Axes depicted are orthogonal linear recombinations of the raw sensor data by means of principal component analysis. Illustrated axes solely comprise principal components demonstrated to be statistically significantly different for the variable concerned. Individual VOC profiles are illustrated as marked points. The intersection of the lines deriving from the individual profiles demonstrates the mean VOC profile for this specific variable. All evaluated sampling conditions had a significant influence on the detected fecal VOC profile; only dilution from 1:2 to 1:5 did not affect the outcome. Abbreviations: AMC = Academic Medical Center; MMC = Maxima Medical Center; VUmc = Vrije Universiteit medical center.</p>
Full article ">
5341 KiB  
Article
A Real-Time Kinect Signature-Based Patient Home Monitoring System
by Gaddi Blumrosen, Yael Miron, Nathan Intrator and Meir Plotnik
Sensors 2016, 16(11), 1965; https://doi.org/10.3390/s16111965 - 23 Nov 2016
Cited by 34 | Viewed by 8054
Abstract
Assessment of body kinematics during performance of daily life activities at home plays a significant role in medical condition monitoring of elderly people and patients with neurological disorders. The affordable and non-wearable Microsoft Kinect (“Kinect”) system has been recently used to estimate human [...] Read more.
Assessment of body kinematics during performance of daily life activities at home plays a significant role in medical condition monitoring of elderly people and patients with neurological disorders. The affordable and non-wearable Microsoft Kinect (“Kinect”) system has been recently used to estimate human subject kinematic features. However, the Kinect suffers from a limited range and angular coverage, distortion in skeleton joints’ estimations, and erroneous multiplexing of different subjects’ estimations into one. This study addresses these limitations by incorporating a set of features that create a unique “Kinect Signature”. The Kinect Signature enables identification of different subjects in the scene, automatic assignment of the kinematic feature estimations only to the subject of interest, and information about the quality of the Kinect-based estimations. The methods were verified by a set of experiments, which utilize real-time scenarios commonly used to assess motor functions in elderly subjects and in subjects with neurological disorders. The experimental results indicate that the skeleton-based Kinect Signature features can be used to identify different subjects with high accuracy. We demonstrate how these capabilities can be used to assign the Kinect estimations to the subject of interest and exclude low-quality tracking features. The results of this work can help in establishing reliable kinematic features, which can assist in the future in obtaining objective scores for medical analysis of a patient’s condition at home without restricting the performance of daily life activities. Full article
(This article belongs to the Special Issue Sensing Technology for Healthcare System)
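Editor's note: a "Kinect Signature" of the kind described — static skeleton features averaged over a calibration window and then matched against tracked skeletons — can be illustrated in a few lines of numpy; the joint indices and bone set below are assumptions for illustration, not the authors' feature set.

```python
import numpy as np

# Hypothetical joint indices in a (num_joints, 3) Kinect skeleton frame
SHOULDER_L, ELBOW_L, HIP_L, KNEE_L, HEAD, SPINE = 4, 5, 12, 13, 3, 1

# Pairs whose Euclidean distances act as static "signature" features
BONES = [(SHOULDER_L, ELBOW_L), (HIP_L, KNEE_L), (HEAD, SPINE)]

def signature(frames):
    """Average bone lengths over a calibration window of skeleton frames.
    frames: (T, num_joints, 3) array of joint positions in meters."""
    feats = [np.linalg.norm(frames[:, a] - frames[:, b], axis=1)
             for a, b in BONES]
    return np.array([f.mean() for f in feats])

def identify(frame_window, signatures):
    """Assign a tracked skeleton to the stored signature it is closest to."""
    s = signature(frame_window)
    dists = [np.linalg.norm(s - ref) for ref in signatures]
    return int(np.argmin(dists)), min(dists)  # subject id, match distance
```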
Show Figures

Figure 1

Figure 1
<p>Data analysis scheme flowchart: joint estimation, static feature estimation based on the joints’ estimation, feature selection, artifact detection and exclusion in the feature domain, and identification of the SoI skeleton based on the KS; after the KS is assigned to the right subject, kinematic features for kinematic analysis can be derived.</p>
Full article ">Figure 2
<p>Second and third experiments’ sets: (<b>a</b>) Calibration; (<b>b</b>) tracking; and (<b>c</b>) tapping tracking. In the calibration phase, the two subjects were identified by the Kinect, and their features were calculated, averaged, and stored to form the KS. The SoI (subject of interest) for the first experiment was the left subject, and for the second, the right subject. For the tracking, the two subjects moved within and out of the range of the Kinect.</p>
Full article ">Figure 3
<p>The effect of feature selection and artifact removal on the subject-identification success probability. The accuracy declines with the number of features (<span class="html-italic">N</span><sub>f</sub>) and increases when artifact removal is applied (AR = 1). With artifact removal, with only two features, a classification success rate of over 90% can be achieved.</p>
Full article ">Figure 4
<p>The features in time domain in calibration period (<b>a</b>), where the SoI is in blue color, and the other subject is in red color; and in the related feature space (<b>b</b>). The two subjects are well separated in both time domain and in feature domain in all the features.</p>
Full article ">Figure 5
<p>The five different subject index assignments, which represent different KIs, for the two subjects as given by the Kinect device: the other subject assigned to (<b>a</b>) the first index; (<b>b</b>) the SoI assigned to index 2; (<b>c</b>) the other subject assigned to index 3; (<b>d</b>) the SoI loses its index; (<b>e</b>) and is assigned again with index 1. The same subject assignments are multiplexed, due to the Kinect’s inability to distinguish between the different subjects in different scenes. Identification of the SoI is essential in order to analyze the SoI’s motion.</p>
Full article ">Figure 6
<p>Artifact examples: (<b>a</b>) Skeleton out of proportions; (<b>b</b>) Skeleton wrong merge; (<b>c</b>) Skeleton distortion due to shadowing.</p>
Full article ">Figure 7
<p>The five KIs and their corresponding three Kinect subject assignments in time (<b>a</b>); their corresponding mapping to Kinect indexes in the active set (<b>b</b>); the “life duration” of the five KIs (<b>c</b>); and the KIs in feature space (<b>d</b>).</p>
Full article ">Figure 8
<p>Subject classification. (<b>a</b>) shows the different KIs (Kinect Instances) compared to the SoI signature (black line); it can be observed that objects 2 and 5 are significantly closer to the signature and therefore seem to be related to the SoI; (<b>b</b>) demonstrates choosing a detection threshold to maximally separate the two subjects; (<b>c</b>) shows the different KIs mapped to the SoI and the other subject. Note that the other subject’s third instance can be seen as a noisy KI of the other subject.</p>
Full article ">Figure 9
<p>The SoI’s gait experiment results. (<b>a</b>) shows the SoI’s quality of tracking (KSs 2 and 5), and (<b>b</b>) shows the SoI’s body plane (x-y) velocity over time. It can be seen that during the continuous burst periods (5–10, 12–13, 20–22 s), the skeleton is distorted from its reference KS, and hence the tracking quality is low. These bursts correspond to the artifacts shown in <a href="#sensors-16-01965-f005" class="html-fig">Figure 5</a> of shadowing, merging of skeletons, and going in and out of the Kinect range; (<b>b</b>) shows the SoI’s ground plane velocity. The nulls in the plane velocity match the tagging of turnarounds derived from the Kinect video color image at around 11 and 18–20 s.</p>
Full article ">Figure 10
<p>Tapping experiment results. (<b>a</b>) shows the SoI of the first KI in the tapping experiment. As in the gait experiment, it is well separated in the feature space, and its tracking quality, due to its optimal range, is high; (<b>b</b>) shows the upper and lower limb orientations over time in the tapping experiment. The lower limbs indicate an approaching phase with two gait cycles of around 4 s, after which the tapping stage begins, with periodic upper limb activity. The subject in the background did not affect the estimations, due to the optimal SoI location and the line-of-sight conditions.</p>
Full article ">
1595 KiB  
Article
Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter
by Weijian Si, Liwei Wang and Zhiyu Qu
Sensors 2016, 16(11), 1964; https://doi.org/10.3390/s16111964 - 23 Nov 2016
Cited by 14 | Viewed by 4691
Abstract
The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. However, although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target [...] Read more.
The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. However, although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the “spooky effect”, in which PHD mass shifts arbitrarily in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that can adaptively adjust the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods. Full article
(This article belongs to the Section Physical Sensors)
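Editor's note: the weights being modified live in the Gaussian-mixture update of the filter; the sketch below shows a standard GM-PHD-style weight update (missed-detection and per-measurement detection terms), which is where a redistribution scheme like the authors' would intervene. It is a simplified PHD-style illustration under a linear-Gaussian measurement model, not the proposed CPHD recursion.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gm_update_weights(w, means, covs, z, H, R, p_d, kappa):
    """Simplified GM-PHD weight update for one measurement set.
    w: (J,) prior component weights; z: (M, dz) measurements;
    H, R: linear measurement model; p_d: detection prob; kappa: clutter density.
    Returns missed-detection weights and per-measurement detection weights."""
    w_miss = (1.0 - p_d) * w                      # components with no detection
    w_det = []
    for zk in z:
        lik = np.array([multivariate_normal.pdf(zk, H @ m, H @ P @ H.T + R)
                        for m, P in zip(means, covs)])
        num = p_d * w * lik
        w_det.append(num / (kappa + num.sum()))   # normalized per measurement
    return w_miss, np.array(w_det)

# Toy demo: two 1D components, one measurement near the second component
w = np.array([0.5, 0.5])
means = [np.array([0.0]), np.array([5.0])]
covs = [np.eye(1), np.eye(1)]
H, R = np.eye(1), 0.5 * np.eye(1)
w_miss, w_det = gm_update_weights(w, means, covs, np.array([[4.8]]),
                                  H, R, p_d=0.9, kappa=1e-3)
```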
Show Figures

Figure 1

Figure 1
<p>True target tracks of Scenario 1 in the <span class="html-italic">xy</span>-plane; the start/end points for each track are denoted by •/■, respectively.</p>
Full article ">Figure 2
<p>True target tracks of Scenario 2 in the <span class="html-italic">xy</span>-plane; the start/end points for each track are denoted by •/■, respectively.</p>
Full article ">Figure 3
<p>Optimal sub-pattern assignment (OSPA) distance versus time for the three filters: (<b>a</b>) the results obtained from Scenario 1; (<b>b</b>) the results obtained from Scenario 2. P, proposed; DR, dynamic reweighting.</p>
Full article ">Figure 4
<p>OSPA distance versus varying clutter intensity for the three filters (<math display="inline"> <semantics> <mrow> <msub> <mi>p</mi> <mrow> <mi>D</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mn>0.90</mn> </mrow> </semantics> </math>): (<b>a</b>) the results obtained from Scenario 1; (<b>b</b>) the results obtained from Scenario 2.</p>
Full article ">Figure 4 Cont.
<p>OSPA distance versus varying clutter intensity for the three filters (<math display="inline"> <semantics> <mrow> <msub> <mi>p</mi> <mrow> <mi>D</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mn>0.90</mn> </mrow> </semantics> </math>): (<b>a</b>) the results obtained from Scenario 1; (<b>b</b>) the results obtained from Scenario 2.</p>
Full article ">Figure 5
<p>Average OSPA distance versus varying detection probability for the three filters (<math display="inline"> <semantics> <mrow> <msub> <mi>λ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.25</mn> <mo>×</mo> <msup> <mrow> <mn>10</mn> </mrow> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> <mtext> </mtext> <msup> <mi mathvariant="normal">m</mi> <mrow> <mo>−</mo> <mn>2</mn> </mrow> </msup> </mrow> </semantics> </math>): (<b>a</b>) the results obtained from Scenario 1; (<b>b</b>) the results obtained from Scenario 2.</p>
Full article ">Figure 6
<p>Tracking performance versus varying clutter intensity (<math display="inline"> <semantics> <mrow> <msub> <mi>p</mi> <mrow> <mi>D</mi> <mo>,</mo> <mi>k</mi> </mrow> </msub> <mo>=</mo> <mn>0.90</mn> </mrow> </semantics> </math>): (<b>a</b>) time averaged OSPA distance versus varying clutter intensity; (<b>b</b>) measurement selection error versus varying clutter intensity.</p>
Full article ">Figure 7
<p>Tracking performance versus varying detection probability (<math display="inline"> <semantics> <mrow> <msub> <mi>λ</mi> <mi>c</mi> </msub> <mo>=</mo> <mn>0.25</mn> <mo>×</mo> <msup> <mrow> <mn>10</mn> </mrow> <mrow> <mo>−</mo> <mn>5</mn> </mrow> </msup> <mtext> </mtext> <msup> <mi mathvariant="normal">m</mi> <mrow> <mo>−</mo> <mn>2</mn> </mrow> </msup> </mrow> </semantics> </math>): (<b>a</b>) time averaged OSPA distance versus varying detection probability; (<b>b</b>) measurement selection error versus varying detection probability.</p>
Full article ">
2632 KiB  
Article
Accelerating Families of Fuzzy K-Means Algorithms for Vector Quantization Codebook Design
by Edson Mata, Silvio Bandeira, Paulo De Mattos Neto, Waslon Lopes and Francisco Madeiro
Sensors 2016, 16(11), 1963; https://doi.org/10.3390/s16111963 - 23 Nov 2016
Cited by 3 | Viewed by 4722
Abstract
The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms [...] Read more.
The performance of signal processing systems based on vector quantization depends on codebook design. In the image compression scenario, the quality of the reconstructed images depends on the codebooks used. In this paper, alternatives are proposed for accelerating families of fuzzy K-means algorithms for codebook design. The acceleration is obtained by reducing the number of iterations of the algorithms and applying efficient nearest neighbor search techniques. Simulation results concerning image vector quantization have shown that the acceleration obtained does not decrease the quality of the reconstructed images. Codebook design time savings of up to about 40% are obtained by the accelerated versions with respect to the original versions of the algorithms. Full article
(This article belongs to the Section Physical Sensors)
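Editor's note: one widely used family of efficient nearest-neighbor techniques for VQ encoding is partial distance search (PDS), which abandons a codevector as soon as its running distance exceeds the best distance found so far; the sketch below shows generic PDS as an illustration of the idea, not necessarily the specific ENNS variant named in the figure captions.

```python
import numpy as np

def pds_nearest(x, codebook):
    """Partial distance search: nearest codevector to x under squared
    Euclidean distance, with early abandoning of losing candidates."""
    best_i, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:        # cannot beat the current best: abandon
                break
        else:                      # full distance computed and it is smaller
            best_i, best_d = i, d
    return best_i, best_d

codebook = np.random.rand(256, 16)   # e.g., 256 codevectors of 4x4 blocks
idx, dist = pds_nearest(np.random.rand(16), codebook)
```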
Show Figures

Figure 1

Figure 1
<p>Images <math display="inline"> <semantics> <mrow> <mn>256</mn> <mo>×</mo> <mn>256</mn> </mrow> </semantics> </math> pixels, 8.0 bpp. (<b>a</b>) Lena; (<b>b</b>) Barbara; (<b>c</b>) Elaine; (<b>d</b>) Boat; (<b>e</b>) Clock; (<b>f</b>) Goldhill; (<b>g</b>) Peppers; (<b>h</b>) Mandrill; (<b>i</b>) Tiffany.</p>
Full article ">Figure 1 Cont.
<p>Images <math display="inline"> <semantics> <mrow> <mn>256</mn> <mo>×</mo> <mn>256</mn> </mrow> </semantics> </math> pixels, 8.0 bpp. (<b>a</b>) Lena; (<b>b</b>) Barbara; (<b>c</b>) Elaine; (<b>d</b>) Boat; (<b>e</b>) Clock; (<b>f</b>) Goldhill; (<b>g</b>) Peppers; (<b>h</b>) Mandrill; (<b>i</b>) Tiffany.</p>
Full article ">Figure 2
<p>Image encoding using DWT.</p>
Full article ">Figure 3
<p>Images obtained from the inverse discrete wavelet transform with the exclusion of subbands <span class="html-italic">S</span><sub>11</sub>, <span class="html-italic">S</span><sub>12</sub> and <span class="html-italic">S</span><sub>13</sub>. (<b>a</b>) Lena PSNR = 30.05 dB; (<b>b</b>) Barbara PSNR = 25.54 dB; (<b>c</b>) Elaine PSNR = 31.88 dB; (<b>d</b>) Boat PSNR = 26.07 dB; (<b>e</b>) Clock PSNR = 29.02 dB; (<b>f</b>) Goldhill PSNR = 27.77 dB; (<b>g</b>) Peppers PSNR = 30.74 dB; (<b>h</b>) Mandrill PSNR = 24.93 dB; (<b>i</b>) Tiffany PSNR = 31.69 dB.</p>
Full article ">Figure 4
<p>Images Lena: (<b>a</b>) Original; (<b>b</b>) Reconstructed using spatial domain VQ with 0.3125 bpp (PSNR = 25.62 dB and SSIM = 0.7211); (<b>c</b>) Reconstructed using DWT VQ with 0.3125 bpp (PSNR = 29.35 dB and SSIM = 0.8367). Codebooks were designed with training set P-M-T by MFKM2-ENNS.</p>
Full article ">Figure 5
<p>Images Goldhill: (<b>a</b>) Original; (<b>b</b>) Reconstructed using spatial domain VQ with 0.3125 bpp (PSNR = 25.71 dB and SSIM = 0.6391); (<b>c</b>) Reconstructed using DWT VQ with 0.3125 bpp (PSNR = 26.81 dB and SSIM = 0.7640). Codebooks were designed with training set P-M-T by MFKM2-ENNS.</p>
Full article ">
1455 KiB  
Article
Automated Urban Travel Interpretation: A Bottom-up Approach for Trajectory Segmentation
by Rahul Deb Das and Stephan Winter
Sensors 2016, 16(11), 1962; https://doi.org/10.3390/s16111962 - 23 Nov 2016
Cited by 21 | Viewed by 7702
Abstract
Understanding travel behavior is critical for effective urban planning as well as for enabling various context-aware service provisions to support mobility as a service (MaaS). Both applications rely on the sensor traces generated by travellers’ smartphones. These traces can be used to [...] Read more.
Understanding travel behavior is critical for effective urban planning as well as for enabling various context-aware service provisions to support mobility as a service (MaaS). Both applications rely on the sensor traces generated by travellers’ smartphones. These traces can be used to interpret travel modes, both for generating automated travel diaries and for real-time travel mode detection. Current approaches segment a trajectory by certain criteria, e.g., a drop in speed. However, these criteria are heuristic, and, thus, existing approaches are subjective and involve significant vagueness and uncertainty in activity transitions in space and time. Also, segmentation approaches are not suited for real-time interpretation of open-ended segments, and cannot cope with the frequent gaps in the location traces. In order to address all these challenges, a novel, state-based bottom-up approach is proposed. This approach assumes a fixed atomic segment of a homogeneous state, instead of an event-based segment, and a progressive iteration until a new state is found. The research investigates how an atomic state-based approach can be developed in such a way that it can work in real time, near-real time and offline mode and in different environmental conditions with their varying quality of sensor traces. The results show that the proposed bottom-up model outperforms the existing event-based segmentation models in terms of adaptivity, flexibility, accuracy and richness of the information delivered pertinent to automated travel behavior interpretation. Full article
(This article belongs to the Section Physical Sensors)
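Editor's note: the bottom-up scheme — label fixed-length atomic windows, then grow segments until the state changes — can be sketched as follows, assuming a per-window mode classifier is available (the sample format, window length, and labels are placeholders, not the paper's models).

```python
from itertools import groupby

def atomic_segments(samples, window_s, classify):
    """Split a time-ordered trace into fixed-length atomic windows and
    label each with a travel state via the supplied classifier.
    samples: list of dicts with at least a timestamp key "t" (seconds)."""
    labels, cur, t0 = [], [], samples[0]["t"]
    for s in samples:
        if s["t"] - t0 >= window_s:        # close the current atomic window
            labels.append(classify(cur))
            cur, t0 = [], s["t"]
        cur.append(s)
    if cur:
        labels.append(classify(cur))
    return labels

def merge_states(labels):
    """Bottom-up merge: consecutive identical states form one segment."""
    return [(state, len(list(run))) for state, run in groupby(labels)]

# e.g. merge_states(['walk','walk','bus','bus','bus','walk'])
# -> [('walk', 2), ('bus', 3), ('walk', 1)]
```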
Show Figures

Figure 1

Figure 1
<p>Uncertain temporal relationships between a reported trip (<math display="inline"> <semantics> <msub> <mi>T</mi> <mi>R</mi> </msub> </semantics> </math>) and a predicted trip (<math display="inline"> <semantics> <msub> <mi>T</mi> <mi>P</mi> </msub> </semantics> </math>) based on Allen’s temporal calculus. In this figure, <math display="inline"> <semantics> <msub> <mi>t</mi> <mn>1</mn> </msub> </semantics> </math> and <math display="inline"> <semantics> <msub> <mi>t</mi> <mn>2</mn> </msub> </semantics> </math> are the start time and end time of a given trip, respectively.</p>
Full article ">Figure 2
<p>A raw trajectory is shown in Figure (<b>a</b>); atomic segments are generated using an atomic kernel of time length <span class="html-italic">η</span> on the raw trajectory in Figure (<b>b</b>); using a state-based bottom-up approach, the trajectory is then segmented into four segments that are detected as four distinct trips based on different transport modes, with three transfers, in Figure (<b>c</b>).</p>
Full article ">Figure 3
<p>A state-based bottom-up framework for travel dairy generation.</p>
Full article ">Figure 4
<p>General transit feed specification (GTFS) schema and consistency check between the predicted trip and the scheduled trip.</p>
Full article ">Figure 5
<p>Some possible stop sequence ambiguity along different routes: <math display="inline"> <semantics> <mrow> <msub> <mi>S</mi> <mi>O</mi> </msub> <mrow> <mo>(</mo> <msub> <mi>R</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>S</mi> <mi>D</mi> </msub> <mrow> <mo>(</mo> <msub> <mi>R</mi> <mi>i</mi> </msub> <mo>)</mo> </mrow> </mrow> </semantics> </math> denote an origin and destination stop along route <math display="inline"> <semantics> <msub> <mi>R</mi> <mi>i</mi> </msub> </semantics> </math>. However, departure time at <math display="inline"> <semantics> <msub> <mi>S</mi> <mi>O</mi> </msub> </semantics> </math> must be earlier than at <math display="inline"> <semantics> <msub> <mi>S</mi> <mi>D</mi> </msub> </semantics> </math>.</p>
Full article ">Figure 6
<p>Smartphone axes in different directions. This figure has been reproduced from [<a href="#B64-sensors-16-01962" class="html-bibr">64</a>].</p>
Full article ">Figure 7
<p>Data set 1: Low frequency (1 Hz, 2 Hz) GPS trajectories in Greater Melbourne.</p>
Full article ">Figure 8
<p>Map (<b>a</b>) shows the zone of ambiguity with a significant overlap between different public transport routes; Map (<b>b</b>) shows the overlap between the convex hull of the trajectory data set (Data set 1) and the zone of ambiguity.</p>
Full article ">Figure 9
<p>Performance of various classifiers in Layer 1 when using GPS and IMU information at 10 s (<b>a</b>) and 60 s (<b>b</b>).</p>
Full article ">Figure 10
<p>Precision of RF and MLP classifier at different temporal uncertainties.</p>
Full article ">Figure 11
<p>False discovery rate (FDR) of an RF-based classifier for trip detection at different ς using a state-based bottom-up approach.</p>
Full article ">Figure 12
<p>Illustration of the average proximity of some of the trips to different route types. Although the routes of different public transport modes overlap, a trip with a given mode type (bus (<b>a</b>); train (<b>b</b>); tram (<b>c</b>)) shows a distinct proximity behavior to the given route type. However, since walking can happen anywhere, there is no discernible visual pattern for walking trips (<b>d</b>).</p>
Full article ">Figure 13
<p>False detection rate (FDR) generated by the walking-based model.</p>
Full article ">Figure 14
<p>A raw trajectory ID <math display="inline"> <semantics> <mrow> <mn>150615</mn> <mo>_</mo> <mn>1</mn> </mrow> </semantics> </math> in 2D without any semantic information (<b>a</b>); and in 3D as a space-time path with semantic information such as different trips with their start and end in space-time, modes used, travel direction, signal gap, and travel speed (<b>b</b>).</p>
Full article ">Figure 15
<p>A continuous acceleration profile showing the distinct behavior of different transport modes even through the semantic gap due to GPS signal loss <math display="inline"> <semantics> <mrow> <mo>(</mo> <mi>T</mi> <mi>r</mi> <mi>a</mi> <mi>j</mi> <mi>e</mi> <mi>c</mi> <mi>t</mi> <mi>o</mi> <mi>r</mi> <mi>y</mi> <mi>I</mi> <mi>D</mi> <mn>150615</mn> <mo>_</mo> <mn>1</mn> <mo>)</mo> </mrow> </semantics> </math>.</p>
Full article ">Figure 16
<p>Transport mode detection accuracy using a 5 s kernel over different test sensor traces.</p>
Full article ">
2450 KiB  
Article
An Improved Mobility-Based Control Protocol for Tolerating Clone Failures in Wireless Sensor Networks
by Yuping Zhou, Naixue Xiong, Mingxin Tan, Rufeng Huang and Jon Kleonbet
Sensors 2016, 16(11), 1955; https://doi.org/10.3390/s16111955 - 23 Nov 2016
Cited by 3 | Viewed by 4671
Abstract
Nowadays, with the ubiquitous presence of the Internet of Things industry, the application of emerging sensor networks has become a focus of public attention. Unattended sensor nodes can be compromised and cloned to destroy the network topology. This paper proposes a novel distributed [...] Read more.
Nowadays, with the ubiquitous presence of the Internet of Things industry, the application of emerging sensor networks has become a focus of public attention. Unattended sensor nodes can be compromised and cloned to destroy the network topology. This paper proposes a novel distributed protocol and management technique for the detection of mobile replicas to tolerate node failures. In our scheme, sensors’ location claims are forwarded to obtain samples only when the corresponding witnesses meet. Meanwhile, sequential tests of statistical hypotheses are applied by witnesses to further detect cloned nodes. The combination of randomized detection based on encountering and sequential tests drastically reduces the routing overhead and the false positive/negative rate for detection. Theoretical analysis and simulation results show the detection efficiency and reasonable overhead of the proposed method. Full article
(This article belongs to the Special Issue Topology Control in Emerging Sensor Networks)
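Editor's note: the "sequential tests of statistical hypotheses" mentioned in the abstract are classically realized as Wald's sequential probability ratio test (SPRT); below is the textbook Bernoulli SPRT as an illustration (the thresholds and hypothesis probabilities are generic, not the paper's parameterization).

```python
import math

def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for Bernoulli observations.
    H0: success prob p0 (benign), H1: success prob p1 (clone), p1 > p0.
    Returns 'H0', 'H1', or 'undecided' if the sample stream runs out."""
    low = math.log(beta / (1 - alpha))     # accept-H0 threshold
    high = math.log((1 - beta) / alpha)    # accept-H1 threshold
    llr = 0.0
    for x in samples:     # x is 1 (e.g., a conflicting location claim) or 0
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr <= low:
            return "H0"
        if llr >= high:
            return "H1"
    return "undecided"

print(sprt([1, 1, 0, 1, 1, 1], p0=0.1, p1=0.6))
```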
Show Figures

Figure 1

Figure 1
<p>Comparison of detection probability (<math display="inline"> <semantics> <mrow> <mi>R</mi> <mo>=</mo> <mn>40</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>1000</mn> </mrow> </semantics> </math>).</p>
Full article ">Figure 2
<p>Comparison of detection probability (<math display="inline"> <semantics> <mrow> <mi>R</mi> <mo>=</mo> <mn>100</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>500</mn> </mrow> </semantics> </math>).</p>
Full article ">Figure 3
<p>Detection probability versus detection time.</p>
Full article ">Figure 4
<p>The configurations of <math display="inline"> <semantics> <mrow> <msub> <mi>α</mi> <mn>0</mn> </msub> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <msub> <mi>α</mi> <mn>1</mn> </msub> </mrow> </semantics> </math>.</p>
Full article ">Figure 5
<p>The average number of samples for each malicious node.</p>
Full article ">Figure 6
<p>The average number of samples for each benign node.</p>
Full article ">Figure 7
<p>The probability distribution of the amount of samples for malicious node.</p>
Full article ">Figure 8
<p>The probability distribution of the amount of samples for benign node.</p>
Full article ">Figure 9
<p>The detection probability under different <math display="inline"> <semantics> <mrow> <msub> <mi>n</mi> <mi>p</mi> </msub> </mrow> </semantics> </math>.</p>
Full article ">Figure 10
<p>The average number of samples versus D when <span class="html-italic">V</span><sub>max</sub> = 10 m/s.</p>
Full article ">Figure 11
<p>The average number of samples versus <span class="html-italic">D</span> when <span class="html-italic">V</span><sub>max</sub> = 40 m/s.</p>
Full article ">Figure 12
<p>Communication overhead.</p>
Full article ">
2729 KiB  
Article
Optical Aptamer Probes of Fluorescent Imaging to Rapid Monitoring of Circulating Tumor Cell
by Ji Yeon Hwang, Sang Tae Kim, Ho-Seong Han, Kyunggon Kim and Jin Soo Han
Sensors 2016, 16(11), 1909; https://doi.org/10.3390/s16111909 - 23 Nov 2016
Cited by 20 | Viewed by 6919
Abstract
Fluorescence detection of exogenous EpCAM (epithelial cell adhesion molecule) or muc1 (mucin1) expression correlated to cancer metastasis using nanoparticles provides pivotal information on CTC (circulating tumor cell) occurrence as a noninvasive tool. In this study, we present a new technique to detect extracellular [...] Read more.
Fluorescence detection of exogenous EpCAM (epithelial cell adhesion molecule) or muc1 (mucin1) expression correlated to cancer metastasis using nanoparticles provides pivotal information on CTC (circulating tumor cell) occurrence as a noninvasive tool. In this study, we present a new technique to detect extracellular EpCAM/muc1 using a quantum dot-based aptamer linker beacon (QD-EpCAM/muc1 ALB). The QD-EpCAM/muc1 ALB was designed using QDs (quantum dots) and a probe. The EpCAM/muc1-targeting aptamer contains an EpCAM/muc1 binding sequence and BHQ1 (black hole quencher 1) or BHQ2 (black hole quencher 2). In the absence of target EpCAM/muc1, the QD-EpCAM/muc1 ALB forms a partial duplex loop-like aptamer beacon and remains in a quenched state because BHQ1/2 quenches the fluorescence signal of the QD-EpCAM/muc1 ALB. The binding of EpCAM/muc1 of a CTC to the EpCAM/muc1 binding aptamer sequence of the EpCAM/muc1-targeting oligonucleotide triggers the dissociation of the BHQ1/2 quencher and the subsequent switch-on of a green/red fluorescence signal. Furthermore, acute inflammation was stimulated by a trigger such as caerulein in vivo, which resulted in an increased fluorescent signal of the cy5.5-EpCAM/muc1 ALB during cancer metastasis due to exogenous expression of EpCAM/muc1 in a Panc02-implanted mouse model. Full article
(This article belongs to the Special Issue Nanobiosensing for Sensors)
Show Figures

Figure 1

Figure 1
<p>Schematic illustration of: QD<sub>525</sub>-muc1 ALB (<b>A</b>); cy5.5-EpCAM ALB (<b>B</b>); and QD<sub>565</sub>-EpCAM ALB (<b>C</b>). EpCAM expression, as evidence of CTCs, was determined by the QD signal of the ALB. The predicted secondary structures of the full-length EpCAM ALB aptamer were modeled using Mfold.</p>
Full article ">Figure 2
<p>Specificity of the QD<sub>565</sub>-EpCAM ALB for sensing EpCAM expression of CTCs in vitro: (<b>A</b>) LAS 4000 images of Panc02 cells treated with the QD<sub>565</sub>-EpCAM ALB in PCR tubes. A fixed concentration of the QDs was incubated with various numbers of cells expressing the EpCAM target protein to determine the optimal EpCAM content needed to produce the best specific effect; (<b>B</b>) Fluorescence intensity of the QD<sub>565</sub>-EpCAM ALB after incubation with different numbers of Panc02 cells. ROI analysis of the fluorescence tube images showed that the fluorescence signal increased in a cell number-dependent manner. Data are displayed as mean ± standard deviations of triplicate samples (** <span class="html-italic">p</span> &lt; 0.005). The confocal images of QD525 and QD565 were recorded under excitation at 525 and 565 nm, respectively. The scale bar in the CLSM images is 20 μm. Fluorescence intensity of QD565-EpCAM/QD525-muc1 ALB on Panc02 cells after caerulein treatment (<b>C</b>).</p>
Full article ">Figure 3
<p>Selectivity study of various probe (RS; random sequence, EpCAM ALB, antibody, and mutant) against different cell lines including EpCAM-positive cell lines: (<b>A</b>) breast cancer cell line MDA-MB-231; (<b>B</b>) human gastric carcinoma cell line Kato III; and (<b>C</b>) negative cell line human kidney epithelial cell line HEK-293T.</p>
Full article ">Figure 4
<p>Confocal images of CTC existence from the Panc02 cells under dose-dependent caerulein treatment after incubation with: EpCAM ALB (QD<sub>565</sub>, 1 pmol, specifically recognizes EpCAM) and muc1-ALB (QD<sub>525</sub>-labeled, 2 pmol, specifically recognizes muc1) (<b>A</b>); and the EpCAM/muc1 mutant probe (QD<sub>525</sub>/QD<sub>565</sub>-labeled, 2.5 pmol, specifically recognizing EpCAM/muc1, respectively) for 2 h (<b>B</b>). The confocal images of QD<sub>525</sub> and QD<sub>565</sub> were recorded under excitation at 525 and 565 nm, respectively. The scale bar in the CLSM images is 15 μm. Fluorescence intensity of QD-EpCAM/muc1 ALB on CTC cells after dose-dependent caerulein treatment. Data are displayed as mean ± standard deviations of triplicate samples (** <span class="html-italic">p</span> &lt; 0.005). P/C: phase contrast.</p>
Full article ">Figure 5
<p>Confocal microscopy imaging of CTC cells mixed with blood and NIF-EpCAM/muc1 ALB probes. Either 5 pmol NIF-EpCAM ALB or 10 pmol NIF-muc1 ALB was mixed with blood isolated two weeks after caerulein injection in Panc02-implanted mice (<b>A</b>). Fluorescence signals two weeks after caerulein treatment were significantly enhanced compared to those of the caerulein-untreated Panc02-implanted group. Positive cell counts of QD-EpCAM/muc1 ALB on CTC cells in blood after caerulein treatment (<b>B</b>). Data are represented as mean ± standard deviations of triplicate samples (** <span class="html-italic">p</span> &lt; 0.005). Quantitative analysis showed more fluorescence-positive cells in blood from mice with caerulein-treated Panc02 cells than in mice without caerulein treatment (control) and in normal mice, ** <span class="html-italic">p</span> &lt; 0.005.</p>
Full article ">Figure 6
<p>In vivo monitoring of the CTC existence pattern in mice with the NIF-EpCAM ALB incorporated into Panc02 cells. (<b>A</b>) The NIF-EpCAM ALB was injected into mice via the tail vein. Panc02 cells were placed in pancreatic tissues by surgical implantation, and the mice were treated with caerulein to induce CTC existence. An enhanced fluorescence signal was detected in the CTC existence group (right) compared to the group injected with phosphate-buffered saline (PBS) (left). Fluorescence images indicated that CTCs from the implanted Panc02 cells were similarly observed in the liver; (<b>B</b>) Quantitative ROI analysis showed a higher fluorescence intensity in mice with the caerulein-treated Panc02 cells than in mice without caerulein treatment (middle), ** <span class="html-italic">p</span> &lt; 0.005. Dic: differential interference contrast; RFU: relative fluorescence unit.</p>
Full article ">
2815 KiB  
Article
Images from Bits: Non-Iterative Image Reconstruction for Quanta Image Sensors
by Stanley H. Chan, Omar A. Elgendy and Xiran Wang
Sensors 2016, 16(11), 1961; https://doi.org/10.3390/s16111961 - 22 Nov 2016
Cited by 50 | Viewed by 11830
Abstract
A quanta image sensor (QIS) is a class of single-photon imaging devices that measure light intensity using oversampled binary observations. Because of the stochastic nature of the photon arrivals, data acquired by QIS is a massive stream of random binary bits. The goal [...] Read more.
A quanta image sensor (QIS) is a class of single-photon imaging devices that measure light intensity using oversampled binary observations. Because of the stochastic nature of the photon arrivals, data acquired by QIS is a massive stream of random binary bits. The goal of image reconstruction is to recover the underlying image from these bits. In this paper, we present a non-iterative image reconstruction algorithm for QIS. Unlike existing reconstruction methods that formulate the problem from an optimization perspective, the new algorithm directly recovers the images through a pair of nonlinear transformations and an off-the-shelf image denoising algorithm. By skipping the usual optimization procedure, we achieve orders of magnitude improvement in speed and even better image reconstruction quality. We validate the new algorithm on synthetic datasets, as well as real videos collected by one-bit single-photon avalanche diode (SPAD) cameras. Full article
(This article belongs to the Special Issue Photon-Counting Image Sensors)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Block diagram of the QIS imaging model. An input signal <math display="inline"> <semantics> <mrow> <msub> <mi>c</mi> <mi>n</mi> </msub> <mo>∈</mo> <mrow> <mo>[</mo> <mn>0</mn> <mo>,</mo> <mn>1</mn> <mo>]</mo> </mrow> </mrow> </semantics> </math> is scaled by a constant <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>&gt;</mo> <mn>0</mn> </mrow> </semantics> </math>. The first part of the block diagram is the upsampling <math display="inline"> <semantics> <mrow> <mo>(</mo> <mo>↑</mo> <mi>K</mi> <mo>)</mo> </mrow> </semantics> </math> followed by a linear filter <math display="inline"> <semantics> <mrow> <mo>{</mo> <msub> <mi>g</mi> <mi>k</mi> </msub> <mo>}</mo> </mrow> </semantics> </math>. The overall process can be written as <math display="inline"> <semantics> <mrow> <mi mathvariant="bold-italic">s</mi> <mo>=</mo> <mi>α</mi> <mi mathvariant="bold-italic">G</mi> <mi mathvariant="bold-italic">c</mi> </mrow> </semantics> </math>. The second part of the block diagram is to generate a binary random variable <math display="inline"> <semantics> <msub> <mi>B</mi> <mi>m</mi> </msub> </semantics> </math> from Poisson random variable <math display="inline"> <semantics> <msub> <mi>Y</mi> <mi>m</mi> </msub> </semantics> </math>. The example at the bottom shows the case where <math display="inline"> <semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>3</mn> </mrow> </semantics> </math>.</p>
Full article ">Figure 2
<p>Pictorial interpretation of Proposition 1: Given an array of one-bit measurements (black = 0, white = 1), we compute the number of ones within a block of size <span class="html-italic">K</span>. Then, the solution of the MLE problem in Equation (<a href="#FD13-sensors-16-01961" class="html-disp-formula">13</a>) is found by applying an inverse incomplete Gamma function <math display="inline"> <semantics> <mrow> <msubsup> <mi mathvariant="sans-serif">Ψ</mi> <mi>q</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msubsup> <mrow> <mo>(</mo> <mo>·</mo> <mo>)</mo> </mrow> </mrow> </semantics> </math> and a scaling factor <math display="inline"> <semantics> <mrow> <mi>K</mi> <mo>/</mo> <mi>α</mi> </mrow> </semantics> </math>.</p>
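Editor's note: the caption's recipe maps directly onto SciPy's regularized incomplete gamma inverse; a minimal sketch, assuming the block statistic S is the count of ones among the K one-bit samples of a block and q is the one-bit threshold.

```python
import numpy as np
from scipy.special import gammaincinv

def qis_mle(S, K, alpha, q):
    """Per-block MLE of the intensity c in [0,1] from S ones out of K
    one-bit QIS samples with threshold q: each sample is 1 with probability
    gammainc(q, alpha*c/K), so invert the regularized lower incomplete
    gamma and rescale by K/alpha (cf. the caption above)."""
    frac = np.clip(np.asarray(S, dtype=float) / K, 1e-6, 1 - 1e-6)
    return (K / alpha) * gammaincinv(q, frac)

# With alpha=160, K=16, q=5 (the paper's synthetic setting), a block with
# 9 ones out of 16 maps back to an intensity of roughly 0.5.
print(qis_mle(9, K=16, alpha=160, q=5))
```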
Full article ">Figure 3
<p>Image reconstruction using synthetic data. In this experiment, we generate one-bit measurements using a ground truth image (<b>a</b>) with <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>160</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>16</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics> </math> (so <math display="inline"> <semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>16</mn> </mrow> </semantics> </math>). The result shown in (<b>b</b>) is obtained using the simple summation, whereas the result shown in (<b>c</b>) is obtained using the MLE solution. It can be seen that the simple summation has a mismatch in the tone compared to the ground truth.</p>
Full article ">Figure 4
<p>Two possible ways of improving image smoothness for QIS. (<b>a</b>) The conventional approach denoises the image after <math display="inline"> <semantics> <msub> <mover accent="true"> <mi>c</mi> <mo>^</mo> </mover> <mi>n</mi> </msub> </semantics> </math> is computed; (<b>b</b>) the proposed approach: apply the denoiser before the inverse incomplete Gamma function, together with a pair of Anscombe transforms <math display="inline"> <semantics> <mi mathvariant="script">T</mi> </semantics> </math>. The symbol <math display="inline"> <semantics> <mi mathvariant="script">D</mi> </semantics> </math> in this figure denotes a generic Gaussian noise image denoiser.</p>
Full article ">Figure 5
<p>Illustration of Anscombe transform. Both sub-figures contain <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>64</mn> </mrow> </semantics> </math> (<math display="inline"> <semantics> <mrow> <mn>8</mn> <mo>×</mo> <mn>8</mn> </mrow> </semantics> </math>) pixels <math display="inline"> <semantics> <mrow> <msub> <mi>c</mi> <mn>0</mn> </msub> <mo>,</mo> <mi>…</mi> <mo>,</mo> <msub> <mi>c</mi> <mrow> <mi>N</mi> <mo>−</mo> <mn>1</mn> </mrow> </msub> </mrow> </semantics> </math>. For each pixel, we generate 100 binary Poisson measurements and sum to obtain binomial random variables <math display="inline"> <semantics> <mrow> <msub> <mi>S</mi> <mn>0</mn> </msub> <mo>,</mo> <mi>…</mi> <mo>,</mo> <msub> <mi>S</mi> <mrow> <mi>N</mi> <mo>−</mo> <mn>1</mn> </mrow> </msub> </mrow> </semantics> </math>. We then calculate the variance of each <math display="inline"> <semantics> <msub> <mi>S</mi> <mi>n</mi> </msub> </semantics> </math>. Note the constant variance after the Anscombe transform.</p>
Full article ">Figure 6
<p>Comparison between image denoising after the MLE solution and using the proposed Anscombe transform. The denoiser we use in this experiment is 3D block matching (BM3D) [<a href="#B53-sensors-16-01961" class="html-bibr">53</a>]. The binary observations are generated using the configurations <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>160</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>5</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>16</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics> </math>. The values shown are the peak signal to noise ratio (PSNR).</p>
Full article ">Figure 7
<p>PSNR comparison of various image reconstruction algorithms on the Berkeley Segmentation database [<a href="#B55-sensors-16-01961" class="html-bibr">55</a>]. In this experiment, we fix <math display="inline"> <semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mn>16</mn> </mrow> </semantics> </math>, and <math display="inline"> <semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>16</mn> </mrow> </semantics> </math>. The proposed algorithm uses BM3D [<a href="#B53-sensors-16-01961" class="html-bibr">53</a>] as the image denoiser.</p>
Full article ">Figure 8
<p>Runtime comparison of the proposed algorithm and the alternating direction method of multipliers (ADMM) algorithm [<a href="#B31-sensors-16-01961" class="html-bibr">31</a>].</p>
Full article ">Figure 9
<p>Influence of the oversampling factor <span class="html-italic">K</span> on the image reconstruction quality. In this experiment, we set <math display="inline"> <semantics> <mrow> <mi>α</mi> <mo>=</mo> <mi>K</mi> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>q</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics> </math>. <math display="inline"> <semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics> </math>.</p>
Full article ">Figure 10
<p>Image reconstruction of two real video sequences captured using a <math display="inline"> <semantics> <mrow> <mn>320</mn> <mo>×</mo> <mn>240</mn> </mrow> </semantics> </math> single-photon avalanche diode (SPAD) camera running at 10k frames per second [<a href="#B14-sensors-16-01961" class="html-bibr">14</a>,<a href="#B15-sensors-16-01961" class="html-bibr">15</a>,<a href="#B16-sensors-16-01961" class="html-bibr">16</a>]. In this experiment, we use <math display="inline"> <semantics> <mrow> <mi>T</mi> <mo>=</mo> <mn>16</mn> </mrow> </semantics> </math> frames to construct one output frame. In both columns, the left are the raw one-bit measurements, and the right are the recovered images using the proposed algorithm.</p>
Full article ">Figure 11
<p>Image reconstruction of real video sequences captured using the <math display="inline"> <semantics> <mrow> <mn>512</mn> <mo>×</mo> <mn>128</mn> </mrow> </semantics> </math> SwissSPAD camera running at 156k frames per second [<a href="#B17-sensors-16-01961" class="html-bibr">17</a>,<a href="#B18-sensors-16-01961" class="html-bibr">18</a>]. (<b>a</b>) is a snapshot of the raw one-bit image. (<b>b</b>) shows the result of summing <span class="html-italic">T</span> = 4, 16, 64, 256 temporal frames with <math display="inline"> <semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics> </math>. (<b>c</b>) shows the corresponding results using the proposed algorithm.</p>
Full article ">
1095 KiB  
Article
IEEE 802.11ah: A Technology to Face the IoT Challenge
by Victor Baños-Gonzalez, M. Shahwaiz Afaqui, Elena Lopez-Aguilera and Eduard Garcia-Villegas
Sensors 2016, 16(11), 1960; https://doi.org/10.3390/s16111960 - 22 Nov 2016
Cited by 85 | Viewed by 14825
Abstract
Since the conception of the Internet of things (IoT), a large number of promising applications and technologies have been developed, which will change different aspects of our daily life. This paper explores the key characteristics of the forthcoming IEEE 802.11ah specification. This future [...] Read more.
Since the conception of the Internet of things (IoT), a large number of promising applications and technologies have been developed, which will change different aspects of our daily life. This paper explores the key characteristics of the forthcoming IEEE 802.11ah specification. This future IEEE 802.11 standard aims to amend the IEEE 802.11 legacy specification to support IoT requirements. We present a thorough evaluation of the foregoing amendment in comparison to the most notable IEEE 802.11 standards. In addition, we expose the capabilities of the future IEEE 802.11ah in supporting different IoT applications. Also, we provide a brief overview of the technology contenders that are competing to cover the IoT communications framework. Numerical results are presented showing how the future IEEE 802.11ah specification offers the features required by IoT communications, thus putting forward IEEE 802.11ah as a technology to cater to the needs of the Internet of Things paradigm. Full article
(This article belongs to the Section Sensor Networks)
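Editor's note: throughput-versus-range studies of this kind boil down to a link budget plus a rate table; the sketch below illustrates the style of calculation with a log-distance path-loss model and a placeholder MCS table (all numbers are assumptions for illustration, not 802.11ah specification values or the authors' simulation parameters).

```python
import math

# Illustrative MCS table: (name, data_rate_Mbps, min_rx_sensitivity_dBm).
# Values are placeholders for the style of evaluation, not spec values.
MCS = [("MCS0", 0.3, -98.0), ("MCS4", 1.8, -88.0), ("MCS8", 4.4, -81.0)]

def rx_power_dbm(tx_dbm, d_m, pl0_db=40.0, n=3.0):
    """Log-distance path-loss model: PL(d) = PL(1 m) + 10 n log10(d)."""
    return tx_dbm - (pl0_db + 10.0 * n * math.log10(max(d_m, 1.0)))

def best_rate(tx_dbm, d_m):
    """Highest-rate MCS whose (assumed) sensitivity is still satisfied."""
    p = rx_power_dbm(tx_dbm, d_m)
    ok = [rate for _, rate, sens in MCS if p >= sens]
    return max(ok) if ok else 0.0   # 0.0 = out of range

for d in (50, 150, 350, 1000):
    print(d, "m ->", best_rate(tx_dbm=20.0, d_m=d), "Mbps")
```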
Show Figures

Figure 1

Figure 1
<p>Internet of Everything concept.</p>
Full article ">Figure 2
<p>Macro deployment A-MPDU throughput vs. coverage range in IEEE 802.11: (<b>a</b>) shows the throughput using 1 SS for 802.11n and 802.11ac; (<b>b</b>) exposes the throughput for 802.11ah in 1, 2, 4, 8, 16 MHz CBW with 1 SS, highlighting the new MCS10 with 1 SS; (<b>c</b>) depicts the throughput using 4 and 8 SS for 802.11n and 802.11ac, respectively; (<b>d</b>) highlights the throughput for 802.11ah using 4 SS.</p>
Full article ">Figure 3
<p>Indoor A-MPDU Throughput vs. coverage range in IEEE 802.11: (<b>a</b>) highlights the throughput using 1 SS for 802.11n and 802.11ac; (<b>b</b>) shows the throughput for 802.11ah in 1, 2, 4, 8, 16 MHz CBW with 1 SS, and also exposes the throughput on 1 MHz CBW and MCS10 with 1 SS; (<b>c</b>) depicts the throughput using 4 and 8 SS for 802.11n and 802.11ac, respectively; (<b>d</b>) highlights the throughput for 802.11ah using 4 SS.</p>
Full article ">Figure 3 Cont.
<p>Indoor A-MPDU Throughput vs. coverage range in IEEE 802.11: (<b>a</b>) highlights the throughput using 1 SS for 802.11n and 802.11ac; (<b>b</b>) shows the throughput for 802.11ah in 1, 2, 4, 8, 16 MHz CBW with 1 SS, and also exposes the throughput on 1 MHz CBW and MCS10 with 1 SS; (<b>c</b>) depicts the throughput using 4 and 8 SS for 802.11n and 802.11ac, respectively; (<b>d</b>) highlights the throughput for 802.11ah using 4 SS.</p>
Full article ">
3495 KiB  
Article
Evaluation of Hyaluronic Acid Dilutions at Different Concentrations Using a Quartz Crystal Resonator (QCR) for the Potential Diagnosis of Arthritic Diseases
by Luis Armando Carvajal Ahumada, Marco Xavier Rivera González, Oscar Leonardo Herrera Sandoval and José Javier Serrano Olmedo
Sensors 2016, 16(11), 1959; https://doi.org/10.3390/s16111959 - 22 Nov 2016
Cited by 12 | Viewed by 5917
Abstract
The main objective of this article is to demonstrate through experimental means the capacity of the quartz crystal resonator (QCR) to characterize biological samples of aqueous dilutions of hyaluronic acid according to their viscosity and how this capacity may be useful in the [...] Read more.
The main objective of this article is to demonstrate through experimental means the capacity of the quartz crystal resonator (QCR) to characterize biological samples of aqueous dilutions of hyaluronic acid according to their viscosity and how this capacity may be useful in the potential diagnosis of arthritic diseases. Synovial fluid is viscous due to the presence of hyaluronic acid, which is synthesized by synovial lining cells (type B) and secreted into the fluid. Consequently, aqueous dilutions of hyaluronic acid may be used as samples to emulate the synovial fluid. Due to the viscoelastic and pseudo-plastic behavior of hyaluronic acid, it is necessary to use the Rouse model in order to obtain viscosity values comparable with viscometer measurements. A Fungilab viscometer (rheometer) was used to obtain reference measurements of the viscosity of each sample for comparison with the measurements of the QCR prototype. Full article
(This article belongs to the Special Issue Point-of-Care Biosensors)
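Editor's note: for a Newtonian liquid load, the QCR frequency shift follows the classical Kanazawa–Gordon relation, which underlies reading viscosity off Δf and the √(ρη) response curve in Figure 8a; a small sketch with standard AT-cut quartz constants (the sample liquid values are illustrative).

```python
import math

RHO_Q = 2648.0     # quartz density, kg/m^3 (standard AT-cut value)
MU_Q = 2.947e10    # quartz shear modulus, Pa (standard AT-cut value)

def kanazawa_shift_hz(f0_hz, rho_liq, eta_liq):
    """Kanazawa-Gordon frequency shift (Hz) of a QCR loaded by a Newtonian
    liquid of density rho_liq (kg/m^3) and viscosity eta_liq (Pa s):
    df = -f0^(3/2) * sqrt(rho*eta / (pi * rho_q * mu_q))."""
    return -(f0_hz ** 1.5) * math.sqrt((rho_liq * eta_liq) /
                                       (math.pi * RHO_Q * MU_Q))

# A 10 MHz crystal in water (rho ~ 1000 kg/m^3, eta ~ 1 mPa s) gives about
# -2 kHz, and the shift grows with sqrt(rho*eta), as in the response curve.
print(kanazawa_shift_hz(10e6, 1000.0, 1.0e-3))
```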
Show Figures

Figure 1

Figure 1
<p>Change in the morphology of the conductance curve for a bare crystal and for a crystal in contact with a liquid sample.</p>
Full article ">Figure 2
<p>Shear Stress and Shear Rate for Newtonian (red) and Pseudoplastic (black) fluids.</p>
Full article ">Figure 3
<p>Factor χ for polymeric fluids (pseudoplastic behavior).</p>
Full article ">Figure 4
<p>(<b>a</b>) Simplified Diagram Acquisition System of the Biosensor; (<b>b</b>) Blocks Diagram System of the Biosensor.</p>
Full article ">Figure 5
<p>Gamry Cell (<b>a</b>) and Prototype System (<b>b</b>).</p>
Full article ">Figure 6
<p>QCR biosensor performance vs. Fungilab viscometer performance.</p>
Full article ">Figure 7
<p>Response curve—Δf vs. Concentration of the glycerol samples (Newtonian behavior).</p>
Full article ">Figure 8
<p>Response curve of the QCR biosensor vs. the square root of the density-viscosity product (<b>a</b>) and Response curve of the QCR biosensor vs. concentration (<b>b</b>).</p>
Full article ">Figure 9
<p>Apparent viscosity obtained by the Rouse model vs. viscometer measurements.</p>
Full article ">
3562 KiB  
Article
Queuing Time Prediction Using WiFi Positioning Data in an Indoor Scenario
by Hua Shu, Ci Song, Tao Pei, Lianming Xu, Yang Ou, Libin Zhang and Tao Li
Sensors 2016, 16(11), 1958; https://doi.org/10.3390/s16111958 - 22 Nov 2016
Cited by 20 | Viewed by 5994
Abstract
Queuing is common in urban public places. Automatically monitoring and predicting queuing time can not only help individuals to reduce their wait time and alleviate anxiety but also help managers to allocate resources more efficiently and enhance their ability to address emergencies. This [...] Read more.
Queuing is common in urban public places. Automatically monitoring and predicting queuing time can help individuals reduce their wait time and alleviate anxiety, and can also help managers allocate resources more efficiently and respond better to emergencies. This paper proposes a novel method to estimate and predict queuing time in indoor environments based on WiFi positioning data. First, we use a series of parameters to identify the trajectories that can serve as representatives of queuing time. Next, we divide the day into equal time slices and estimate individuals’ average queuing time during specific time slices. Finally, we build a nonstandard autoregressive (NAR) model, trained on the previous day’s WiFi estimation results and actual queuing times, to predict the queuing time in the upcoming time slice. A case study comparing two other time series analysis models shows that the NAR model has better precision. Random topological errors caused by the drift phenomenon of WiFi positioning technology (locations determined by a WiFi positioning system may drift accidentally) and systematic topological errors caused by the positioning system are the main factors that affect estimation precision. We therefore optimize the deployment strategy during the positioning system deployment phase and propose a drift ratio parameter for the trajectory screening phase to alleviate the impact of topological errors and improve the estimates. WiFi positioning data from an eight-day case study conducted at the T3-C entrance of Beijing Capital International Airport show that the mean absolute estimation error is 147 s, approximately 26.92% of the actual queuing time; for predictions using the NAR model, the proportion is approximately 27.49%. The theoretical predictions and the empirical case study indicate that the NAR model is an effective method to estimate and predict queuing time in indoor public areas. Full article
(This article belongs to the Section Physical Sensors)
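The time-sliced prediction step can be pictured with an ordinary autoregressive fit: estimate per-slice queuing times from one day's trajectories, regress each slice on the preceding slices, and extrapolate one slice ahead. The sketch below is a minimal least-squares AR illustration, not the authors' NAR model; the sample values are hypothetical.

```python
import numpy as np

def fit_ar(series: np.ndarray, order: int) -> np.ndarray:
    """Least-squares fit of y[t] = c + a1*y[t-1] + ... + ap*y[t-p]."""
    rows = [series[t - order:t][::-1] for t in range(order, len(series))]
    X = np.hstack([np.ones((len(rows), 1)), np.array(rows)])
    coef, *_ = np.linalg.lstsq(X, series[order:], rcond=None)
    return coef

def predict_next(series: np.ndarray, coef: np.ndarray) -> float:
    """One-step-ahead prediction from the fitted coefficients."""
    order = len(coef) - 1
    lags = series[-order:][::-1]          # most recent slice first
    return float(coef[0] + coef[1:] @ lags)

# Hypothetical per-slice queuing times (seconds) estimated from WiFi
# trajectories on the previous day, one value per time slice
history = np.array([420., 450., 510., 600., 640., 610., 560., 530., 500.])
coef = fit_ar(history, order=3)
print(f"predicted next-slice queuing time: {predict_next(history, coef):.0f} s")
```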
Show Figures

Figure 1: Theoretical map of the trilateration positioning method.
Figure 2: Process flow of the queuing time determination model.
Figure 3: Queuing sequence of an individual in the time domain.
Figure 4: Time slicing for queuing time estimation and prediction.
Figure 5: General overview of the queue zone and the locations of the APs.
Figure 6: Validation results of the WiFi-based estimation model.
Figure 7: Passenger trajectory examples. (a–d) show the trajectories of four different passengers at the T3-C entrance.
Figure 8: Comparison of the estimated and actual queuing times on (a) 11 August; (b) 12 August; (c) 13 August; (d) 14 August.
Figure 9: Comparison of the predicted and actual queuing times on (a) 14 August; (b) 15 August; (c) 16 August; (d) 17 August.
10902 KiB  
Article
Bamboo Classification Using WorldView-2 Imagery of Giant Panda Habitat in a Large Shaded Area in Wolong, Sichuan Province, China
by Yunwei Tang, Linhai Jing, Hui Li, Qingjie Liu, Qi Yan and Xiuxia Li
Sensors 2016, 16(11), 1957; https://doi.org/10.3390/s16111957 - 22 Nov 2016
Cited by 13 | Viewed by 7635
Abstract
This study explores the ability of WorldView-2 (WV-2) imagery to map bamboo in a mountainous region of Sichuan Province, China. A large part of the study area is covered by shadow in the image, and only a few of the sampled points were usable. In order to identify bamboo from such sparse training data, the sample size was expanded according to the reflectance of multispectral bands selected using principal component analysis (PCA). Then, class separability based on the training data was calculated using a feature space optimization method to select the features for classification. Four standard object-based classification methods were applied to both sets of training data. The results show that the k-nearest neighbor (k-NN) method produced the greatest accuracy. A geostatistically-weighted k-NN classifier, accounting for the spatial correlation between classes, was then applied to further increase the accuracy; it achieved producer’s and user’s accuracies of 82.65% and 93.10%, respectively, for the bamboo class. Canopy densities were estimated to explain the result. This study demonstrates that WV-2 imagery can be used to identify small patches of understory bamboo given limited known samples, and the resulting bamboo distribution facilitates assessments of giant panda habitat. Full article
(This article belongs to the Section Remote Sensors)
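The classification core can be illustrated with the plain k-NN step: each unlabeled pixel is assigned the majority class of its k nearest training samples in band-reflectance space. The sketch below shows only this baseline on hypothetical five-band samples; the paper's gk-NN additionally weights the vote by geostatistical estimates of spatial class correlation.

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, pixel, k=5):
    """Plain k-NN majority vote in band-reflectance feature space."""
    d = np.linalg.norm(train_X - pixel, axis=1)   # Euclidean distances
    nearest = np.argsort(d)[:k]                   # indices of k closest samples
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 5-band reflectance samples for three land cover classes
rng = np.random.default_rng(0)
train_X = np.vstack([rng.normal(m, 0.02, (20, 5))
                     for m in (0.10, 0.18, 0.30)])   # bamboo, brush, woodland
train_y = np.array(["bamboo"] * 20 + ["brush"] * 20 + ["woodland"] * 20)

pixel = rng.normal(0.11, 0.02, 5)   # an unlabeled pixel near the bamboo mean
print(knn_classify(train_X, train_y, pixel))
```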
Show Figures

Figure 1: The study area in Wuyipeng (true color composite of WV-2 MS Bands 5, 3 and 2 as the red, green and blue channels, respectively).
Figure 2: Flowchart of the classification process.
Figure 3: Distributions of the reflectance of different land cover types across eight MS bands.
Figure 4: Distributions of the reflectance of different land cover types across five bands and expanded training data.
Figure 5: Spatial distribution of the samples: (a) training data and (b) testing data.
Figure 6: Feature space optimization using nine features based on two sets of training data.
Figure 7: Decision rules of the CART classification based on (a) original training data and (b) expanded training data.
Figure 8: Classified maps generated using (a) CART; (b) k-NN; (c) Bayesian and (d) SVM methods based on the original training data.
Figure 9: Classified maps generated using (a) CART; (b) k-NN; (c) Bayesian and (d) SVM methods based on the expanded training data.
Figure 10: Radar charts of the accuracies of the different classification methods based on the original and expanded training data (unit: %). (a) Overall accuracy; (b) producer's accuracy using original training data; (c) user's accuracy using original training data; (d) producer's accuracy using expanded training data; (e) user's accuracy using expanded training data.
Figure 11: Estimated class-conditional probability plots and fitted models for each class. The lag on the x-axis is in units of pixels.
Figure 12: Classified map using the gk-NN method.
Figure 13: Tree crown photos taken with a fisheye camera at the testing locations. (a) Bamboo surrounded by brush, correctly classified; (b) bamboo covered by mixed woodland, misclassified as brush.
Figure 14: Binary canopy maps of the photos in Figure 13. (a) Bamboo surrounded by brush, correctly classified; (b) bamboo covered by mixed woodland, misclassified as brush.
5694 KiB  
Article
Probabilistic Model Updating for Sizing of Hole-Edge Crack Using Fiber Bragg Grating Sensors and the High-Order Extended Finite Element Method
by Jingjing He, Jinsong Yang, Yongxiang Wang, Haim Waisman and Weifang Zhang
Sensors 2016, 16(11), 1956; https://doi.org/10.3390/s16111956 - 21 Nov 2016
Cited by 28 | Viewed by 7267
Abstract
This paper presents a novel framework for probabilistic crack size quantification using fiber Bragg grating (FBG) sensors. The key idea is to use a high-order extended finite element method (XFEM) together with a transfer (T)-matrix method to analyze the reflection intensity spectra of FBG sensors, for various crack sizes. Compared with the standard FEM, the XFEM offers two superior capabilities: (i) a more accurate representation of fields in the vicinity of the crack tip singularity and (ii) alleviation of the need for costly re-meshing as the crack size changes. Apart from the classical four-term asymptotic enrichment functions in XFEM, we also propose to incorporate higher-order functions, aiming to further improve the accuracy of strain fields upon which the reflection intensity spectra are based. The wavelength of the reflection intensity spectra is extracted as a damage sensitive quantity, and a baseline model with five parameters is established to quantify its correlation with the crack size. In order to test the feasibility of the predictive model, we design FBG sensor-based experiments to detect fatigue crack growth in structures. Furthermore, a Bayesian method is proposed to update the parameters of the baseline model using only a few available experimental data points (wavelength versus crack size) measured by one of the FBG sensors and an optical microscope, respectively. Given the remaining data points of wavelengths, even measured by FBG sensors at different positions, the updated model is shown to give crack size predictions that match well with the experimental observations. Full article
(This article belongs to the Special Issue Optical Fiber Sensors 2016)
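The Bayesian updating step can be illustrated generically: treat the parameters of the baseline wavelength-versus-crack-size model as random variables, score them against the few available measurement pairs with a Gaussian likelihood, and sample the posterior. The sketch below uses a random-walk Metropolis sampler and a hypothetical two-parameter power-law model (the paper's actual baseline model has five parameters); all measurement values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical measurements: crack size (mm) vs. FBG wavelength shift (nm)
a_mm = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
dlam = np.array([0.02, 0.05, 0.09, 0.15, 0.22])

def model(theta, a):
    # Hypothetical two-parameter baseline model: dlam = c * a**p
    c, p = theta
    return c * a ** p

def log_post(theta, sigma=0.01):
    """Flat positive prior plus Gaussian measurement likelihood."""
    if theta[0] <= 0 or theta[1] <= 0:
        return -np.inf
    r = dlam - model(theta, a_mm)
    return -0.5 * np.sum((r / sigma) ** 2)

# Random-walk Metropolis sampling of the posterior
theta = np.array([0.01, 1.5])
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.0005, 0.05])   # per-parameter step sizes
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:          # accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta)

post = np.array(samples)[5000:]                      # discard burn-in
print("posterior mean (c, p):", post.mean(axis=0))
```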
Show Figures

Figure 1: The flow chart of hole-edge crack quantitative detection.
Figure 2: A solid containing a crack, represented by the red solid line.
Figure 3: A square plate with an edge crack: (a) geometry; (b) a finite element mesh with 30 × 30 nodes. Tip-enriched nodes S_T are indicated by red squares; blue circles denote Heaviside-enriched nodes S_H.
Figure 4: Comparison of stress profiles ahead of the crack tip between regular and high-order XFEM. High-order XFEM leads to very accurate results.
Figure 5: A perforated plate with a crack emanating from the hole edge: (a) geometry, boundary conditions and the layout of FBG sensors; (b) FE mesh with FBG Sensor #1 shown as the red dashed line.
Figure 6: Contour plot of ε_zz around the crack. The corresponding crack size is 3 mm.
Figure 7: Comparison between the simulated and experimental reflection intensity spectra of FBG1 for the structure with the initial 3-mm pre-crack: (a) without the raised cosine apodization function; (b) with the raised cosine apodization function.
Figure 8: Comparison between the simulated reflection intensity spectra of FBG1 without the initial crack and with the 3-mm pre-crack.
Figure 9: Simulated result: the wavelength shift of FBG1 versus the crack size.
Figure 10: Illustration of the relative crack size r′.
Figure 11: The experimental setup for hole-edge crack detection.
Figure 12: Fatigue loading spectrum for the hole-edge crack specimen.
Figure 13: The crack propagation path observed in the experiment. Note the curved path of the crack, which introduces additional error relative to the numerical model.
Figure 14: Fatigue testing data.
Figure 15: Comparison between the XFEM-based crack size prediction and the actual crack size measurements under fatigue loading.
Figure 16: Bayesian updating results with sensor FBG1 measurements: (a) updating with five measurements; (b) updating with six measurements.
Figure 17: Median and 95% bound predictions using the updated model for FBG1.
Figure 18: Experimental result: the wavelength shift of FBG2 versus the crack size.
Figure 19: The experimental and model-predicted crack sizes of FBG2 versus the number of loading cycles.
Figure 20: Median and 95% bound predictions using the updated model for FBG2.
18233 KiB  
Article
Expanding the Detection of Traversable Area with RealSense for the Visually Impaired
by Kailun Yang, Kaiwei Wang, Weijian Hu and Jian Bai
Sensors 2016, 16(11), 1954; https://doi.org/10.3390/s16111954 - 21 Nov 2016
Cited by 78 | Viewed by 13439
Abstract
The introduction of RGB-Depth (RGB-D) sensors into the visually impaired people (VIP)-assisting area has stirred great interest among researchers. However, the detection range of RGB-D sensors is limited by a narrow depth field angle and a sparse depth map at long range, which hampers broader and longer-range traversability awareness. This paper proposes an effective approach to expand the detection of traversable area based on an RGB-D sensor, the Intel RealSense R200, which is compatible with both indoor and outdoor environments. The depth image of the RealSense is enhanced with large-scale IR image matching and RGB image-guided filtering. A preliminary traversable area is obtained with RANdom SAmple Consensus (RANSAC) segmentation and surface normal vector estimation. A seeded region growing algorithm, combining the depth image and RGB image, then greatly enlarges the preliminary traversable area. This is critical not only for avoiding close obstacles, but also for superior path planning during navigation. The proposed approach has been tested on a score of indoor and outdoor scenarios. Moreover, the approach has been integrated into an assistance system, which consists of a wearable prototype and an audio interface. Furthermore, the presented approach has been proven useful and reliable in a field test with eight visually impaired volunteers. Full article
(This article belongs to the Special Issue Imaging: Sensors and Technologies)
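The preliminary ground detection can be pictured as standard RANSAC plane fitting: repeatedly fit a plane through three random points and keep the plane with the most inliers within a distance threshold. The sketch below covers only this step on a synthetic cloud; the paper then prunes salient regions with surface normal estimation and expands the result with seeded region growing. All data here are hypothetical.

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, threshold=0.03, rng=None):
    """Fit a dominant plane n.x + d = 0 to an N x 3 point cloud with RANSAC;
    return (unit normal, d, inlier mask)."""
    rng = rng or np.random.default_rng(0)
    best = (None, None, np.zeros(len(points), dtype=bool))
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-9:
            continue                              # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ p1
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best

# Hypothetical cloud: a flat floor plus scattered obstacle points
rng = np.random.default_rng(2)
floor = np.column_stack([rng.uniform(-2, 2, 3000),
                         rng.uniform(0, 4, 3000),
                         rng.normal(0.0, 0.01, 3000)])
obstacles = rng.uniform([-1, 1, 0.1], [1, 3, 0.8], (300, 3))
n, d, mask = ransac_ground_plane(np.vstack([floor, obstacles]))
print(f"ground inliers: {mask.sum()} / {mask.size}")
```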
Show Figures

Figure 1: (a) The RealSense R200; (b,f) color images captured by the RealSense R200; (c,g) IR images captured by the right IR camera of the RealSense R200; (d,h) the original depth images from the RealSense R200; and (e,i) the guided-filtered depth images acquired in our work.
Figure 2: (a) Color image captured by the RealSense R200; (b) the original depth image captured by the RealSense R200; (c) traversable area detection with the original depth image of the RealSense R200, which is limited to short range.
Figure 3: Horizontal field angle of the IR cameras.
Figure 4: The flowchart of the approach.
Figure 5: Comparison of depth maps under both indoor and outdoor environments. (a,e,i,m) Color images captured by the RealSense sensor; (b,f,j,n) original depth images from the RealSense sensor; (c,g,k,o) large-scale matched depth images; and (d,h,l,p) guided-filter depth images.
Figure 6: Ground plane segmentation in indoor and outdoor environments. (a,c) Ground plane detection based on the RANSAC algorithm; (b,d) salient parts in the ground plane are dismissed with surface normal vector estimation.
Figure 7: Traversable area expansion in indoor and outdoor environments. (a,d) Ground plane detection based on the RANSAC algorithm; (b,e) salient parts in the ground plane are dismissed with surface normal vector estimation; and (c,f) the preliminary traversable area is expanded greatly with seeded region growing.
Figure 8: Results of traversable area expansion in indoor environments. (a,b) Traversable area detection in offices; (c–e) traversable area detection in corridors; (f) traversable area detection in an open area; (g) traversable area detection with color image blurring; and (h) traversable area detection with color image under-exposing.
Figure 9: Results of traversable area expansion in outdoor environments. (a–g) Traversable area detection on roads; (h) traversable area detection on a platform; and (i) traversable area detection on a playground.
Figure 10: Comparison of results of different traversable area detection approaches. (a–d) The image set of a typical indoor scenario, including the color image, depth map, and calibrated IR pair; (e–i) the results of different approaches on the indoor scenario; (j–m) the image set of a typical outdoor scenario; and (n–r) the results of different approaches on the outdoor scenario.
Figure 11: An example of expansion error: the ground has been unexpectedly expanded to a part of the car.
Figure 12: The assisting system consists of a frame holding the RealSense R200 and the attitude sensor, a processor, and a bone-conducting headphone.
Figure 13: Non-semantic stereophonic interface of the assisting system. Sounds for five directions of traversable area are presented by five musical instruments in 3D space: trumpet, piano, gong, violin, and xylophone.
Figure 14: Eight visually impaired volunteers took part (a–d): moments of the assisting study. Participants' faces are blurred to protect their privacy (we have obtained approval to use the assisting performance study for research work).
Figure 15: Obstacle arrangements. (a) An image of an obstacle arrangement; (b) four other obstacle arrangements.
7492 KiB  
Article
Smart Toys Designed for Detecting Developmental Delays
by Diego Rivera, Antonio García, Bernardo Alarcos, Juan R. Velasco, José Eugenio Ortega and Isaías Martínez-Yelmo
Sensors 2016, 16(11), 1953; https://doi.org/10.3390/s16111953 - 20 Nov 2016
Cited by 18 | Viewed by 12071
Abstract
In this paper, we describe the design considerations and implementation of a smart toy system, a technology that supports automatic recording and analysis of play in order to detect developmental delays while children play with the smart toy. To achieve this goal, we take advantage of the features of current commercial sensors (reliability, low consumption, easy integration, etc.) to develop a series of sensor-based, low-cost devices. Specifically, our prototype system consists of a tower of cubes augmented with wireless sensing capabilities and a mobile computing platform that collects the information sent from the cubes, allowing later analysis by childhood development professionals in order to verify normal behaviour or detect a potential disorder. This paper presents the requirements of the toy and discusses our choices in toy design, the technology used, the selected sensors, the process for gathering data from the sensors and generating information to support decision-making, and the communication of that information to the collector system. In addition, we describe the play activities the system supports. Full article
(This article belongs to the Special Issue Sensing Technology for Healthcare System)
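One building block of this kind of play analysis is inferring cube orientation from the accelerometer: when a cube rests on a table, gravity dominates a single axis, which identifies the face pointing up. The sketch below illustrates that idea only; the axis-to-face mapping and the threshold are hypothetical, not the prototype's actual firmware logic.

```python
import numpy as np

# Hypothetical mapping from (dominant axis, sign) to cube face
FACES = {(0, 1): "side +X", (0, -1): "side -X",
         (1, 1): "side +Y", (1, -1): "side -Y",
         (2, 1): "top",     (2, -1): "bottom"}

def face_up(accel_g, min_magnitude=0.8):
    """Infer which cube face points up from a static accelerometer
    reading (in units of g). If the total acceleration is far from 1 g,
    the cube is moving and no stable orientation is reported."""
    a = np.asarray(accel_g, dtype=float)
    if np.linalg.norm(a) < min_magnitude:
        return None                      # cube is moving or in free fall
    axis = int(np.argmax(np.abs(a)))     # axis dominated by gravity
    return FACES[(axis, int(np.sign(a[axis])))]

print(face_up([0.02, -0.05, 0.98]))      # -> "top"
print(face_up([-0.99, 0.03, 0.04]))      # -> "side -X"
```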
Show Figures

Figure 1: A child building a five-cube tower.
Figure 2: Physical architecture of the proposed smart toy.
Figure 3: Microcontroller schematic.
Figure 4: Schematic of the discharge protection circuit.
Figure 5: Communication technologies analysed.
Figure 6: Tilt schematic.
Figure 7: Logic for sleeping methods.
Figure 8: Programming interface.
Figure 9: LDR schematic.
Figure 10: Accelerometer schematic.
Figure 11: Picture of the cube. (a,b) Two half-cube pieces; (c) assembling the pieces; and (d) the cube with its plastic housing.
Figure 12: Battery discharge.
Figure 13: Time variability of energy consumption.
Figure 14: G vector in a stopped and in a spun cube.
Figure 15: The fixed magnetometer and the acceleration axes.
Figure 16: Tait–Bryan angles yaw (ψ), pitch (φ), and roll (θ), from Wikimedia (author: Juansempere).
Figure 17: Local maximum of level 1 (a); and local maximum of level 3 (b). Vertical units are fractions of g (9.8 m/s²).
Figure 18: Sides and LDR locations with names.
Figure 19: Collector software architecture.
Figure 20: Screenshot of the collector web interface.
Figure 21: Data collection process.
22651 KiB  
Article
Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis
by Affan Shaukat, Peter C. Blacker, Conrad Spiteri and Yang Gao
Sensors 2016, 16(11), 1952; https://doi.org/10.3390/s16111952 - 20 Nov 2016
Cited by 28 | Viewed by 12561
Abstract
In recent decades, terrain modelling and reconstruction techniques have attracted increasing research interest in precise short- and long-distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development as a viable technology for space robotics. This paper first reviews current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDAR in terrestrial robotics; we then propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either individual sensor for planetary exploration. A comprehensive analysis is presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation. Full article
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)
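The essence of camera-LIDAR fusion is reprojection: transform each LIDAR point into the camera frame with the extrinsic calibration, project it through the intrinsic model, and attach the colour of the pixel it lands on. The sketch below uses an idealized pinhole model with hypothetical calibration values; the paper projects onto a unit sphere via a full lens model and also handles multispectral channels.

```python
import numpy as np

def colourise_lidar(points_lidar, image, R, t, K):
    """Project LIDAR points into a pinhole camera and attach the RGB
    value of the pixel each point lands on. R, t map the LIDAR frame
    to the camera frame; K is the 3 x 3 intrinsic matrix."""
    pts_cam = points_lidar @ R.T + t              # extrinsic transform
    in_front = pts_cam[:, 2] > 0                  # keep points ahead of camera
    uvw = pts_cam[in_front] @ K.T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)   # perspective division
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colours = image[uv[ok, 1], uv[ok, 0]]         # sample pixel colours
    return pts_cam[in_front][ok], colours

# Hypothetical calibration and data
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R, t = np.eye(3), np.array([0., 0., 0.1])
cloud = np.random.default_rng(3).uniform([-2, -1, 1], [2, 1, 8], (5000, 3))
img = np.zeros((480, 640, 3), dtype=np.uint8)
pts, rgb = colourise_lidar(cloud, img, R, t, K)
print(f"{len(pts)} of {len(cloud)} points fused with colour")
```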
Show Figures

Figure 1: JPL stereo vision image processing on board the MER rovers. (a) is one image of the stereo pair, while (b) is the elevation map generated using stereopsis (courtesy NASA/JPL-Caltech).
Figure 2: Comparison of the stereo depth perception systems on board the MSL, MER and ExoMars rovers.
Figure 3: MSL engineering cameras' stereo range error as a function of distance from the camera.
Figure 4: Image of a representative planetary surface (a); DTM generated using the SfS method (b).
Figure 5: Example illustration of a scattering of LIDAR points (b) over a flat surface with rock boulders (a); point cloud (c) and depth map (d).
Figure 6: Proposed hardware setup for fusion of camera and LIDAR data for planetary scene reconstruction (a); and device connections (b).
Figure 7: Image of an indoor laboratory environment illustrated as a flat array (a); and projected onto a unit sphere using its intrinsic lens model (b).
Figure 8: Single line of LIDAR data points (a) of the indoor laboratory setup (b).
Figure 9: Depth map generated from LIDAR points (a); and the reconstructed 3-D point cloud (b).
Figure 10: Image produced from LIDAR intensity measurements (a) of the laboratory setup (b).
Figure 11: Extrinsic and intrinsic calibration of the sensor suite: (a,b) the camera and LIDAR views, respectively; (c,d) the target points before and after calibration, respectively.
Figure 12: Point cloud of camera-LIDAR fusion showing the RGB intensity values of each 3-D point (b); and a multispectral view of the point cloud showing the R, G and NIR channels (a).
Figure 13: Surface mesh rendered with vertex colour (a); and texture-mapped with the full-resolution RGB image (b).
Figure 14: Triangulation of a structured point cloud (a); false surface regions, shown in blue, generated by the mesh triangulation process (b); and the histogram-based frequency distribution of the incident angles for a typical scan with large occluded areas (c).
Figure 15: Data structure representing a line of raw LIDAR points (a); and the current edge of a mesh being created (b).
Figure 16: Calculating the mean Cartesian distance between lines (a); and between array elements (b).
Figure 17: Method used to reduce the resolution of a line of raw LIDAR points and add it to the mesh, creating a new edge array (a); and the process of triangulating a stripe of reduced-resolution LIDAR data into the mesh (b–e).
Figure 18: Full-resolution mesh with 116,000 triangles (a) and simplified mesh (δ = 0.1 m) with 13,700 triangles (b).
Figure 19: The test sites chosen for the experimental analysis: Site A (a), Site B (b) and Site C (c).
Figure 20: CPU time taken to create the mesh at different δ values, approximated by a third-order polynomial trendline to reduce data fluctuation.
Figure 21: A comparison of the execution time of the proposed algorithm versus the QEC technique.
Figure 22: A comparison of the proposed algorithm versus the QEC technique in terms of G_d(P, S) for Site A (a); Site B (b) and Site C (c).
Figure 23: Data reduction ratio with changing δ values.
Figure 24: Mean geometric deviation (mm) for different δ values.
Figure 25: Surface plot of G_d(P, S) for an outdoor environment (a) and the respective histogram (b).