Sensors, Volume 17, Issue 4 (April 2017) – 280 articles

Cover Story (view full-size image): Potential use of real-time GNSS-RF mobile geofences to define occupational safety hazard zones around ground workers and equipment on active logging operations. A field experiment and simulation were used to evaluate the factors affecting signal alert delays for mobile geofences. Results show that corrections are needed to account for error in the timing of alerts associated with path intersection angle and other factors.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • Papers are published in both HTML and PDF form; the PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
Article
Direction of Arrival Estimation for MIMO Radar via Unitary Nuclear Norm Minimization
by Xianpeng Wang, Mengxing Huang, Xiaoqin Wu and Guoan Bi
Sensors 2017, 17(4), 939; https://doi.org/10.3390/s17040939 - 24 Apr 2017
Cited by 14 | Viewed by 5946
Abstract
In this paper, we consider the direction of arrival (DOA) estimation problem for noncircular (NC) sources in multiple-input multiple-output (MIMO) radar and propose a novel unitary nuclear norm minimization (UNNM) algorithm. In the proposed method, the noncircular properties of the signals are used to double the virtual array aperture, and real-valued data are obtained by applying a unitary transformation. A real-valued block sparse model is then established based on a novel over-complete dictionary, and the UNNM algorithm is formulated to recover the block-sparse matrix. In addition, the real-valued NC-MUSIC spectrum is used to design a weight matrix that reweights the nuclear norm minimization for enhanced sparsity of the solutions. Finally, the DOA is estimated by searching the non-zero blocks of the recovered matrix. Because the noncircular properties of the signals extend the virtual array aperture and the additional real-valued structure suppresses the noise, the proposed method provides better performance than conventional sparse-recovery-based algorithms. Furthermore, the proposed method can handle the underdetermined DOA estimation case. Simulation results show the effectiveness and advantages of the proposed method.
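The grid-search flavor of DOA estimation that UNNM improves upon can be illustrated in a few lines of NumPy. This is a deliberately simple conventional-beamforming sketch over an angle grid, not the authors' UNNM algorithm; the array size, snapshot count, noise level, and 20° source angle are illustrative assumptions.

```python
import numpy as np

# Conventional beamforming DOA sketch for a uniform linear array (ULA).
# NOT the UNNM algorithm from the paper; parameters are illustrative.
rng = np.random.default_rng(0)
M, L, d = 8, 200, 0.5            # sensors, snapshots, spacing (wavelengths)
theta_true = 20.0                # source DOA in degrees

def steer(theta_deg):
    # ULA steering vector for a far-field narrowband source
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg)))

s = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L)))
X = np.outer(steer(theta_true), s) + noise     # received snapshots
R = X @ X.conj().T / L                         # sample covariance matrix

grid = np.arange(-90.0, 90.5, 0.5)
spectrum = np.array([np.real(steer(t).conj() @ R @ steer(t)) for t in grid])
theta_hat = grid[np.argmax(spectrum)]
print(theta_hat)                               # peak near 20 degrees
```

The sparse-recovery methods discussed in the paper replace this coarse spectrum search with an over-complete dictionary of steering vectors and a structured recovery problem, which is what enables the underdetermined case.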
Show Figures

Figure 1. The configuration of colocated MIMO radar.
Figure 2. The spatial spectrum of all methods (M = 4, N = 6, SNR = 0 dB, L = 100).
Figure 3. The spatial spectrum of the proposed method with different numbers of sources (M = N = 2, SNR = 10 dB, L = 100).
Figure 4. RMSE versus SNR for different methods (M = 4, N = 6, L = 100).
Figure 5. RMSE versus snapshots for different methods (M = 4, N = 6, SNR = 0 dB).
Figure 6. The probability of successful detection versus SNR (M = 4, N = 6, L = 100).
Figure 7. RMSE of the proposed method with different numbers of transmit/receive elements (L = 100).
Review
Progress in the Correlative Atomic Force Microscopy and Optical Microscopy
by Lulu Zhou, Mingjun Cai, Ti Tong and Hongda Wang
Sensors 2017, 17(4), 938; https://doi.org/10.3390/s17040938 - 24 Apr 2017
Cited by 41 | Viewed by 10372
Abstract
Atomic force microscopy (AFM) has evolved from an originally morphological imaging technique into a powerful, multifunctional technique for manipulating and detecting interactions between molecules at nanometer resolution. However, AFM cannot provide precise information about synchronized molecular groups, and limitations of the technology itself, such as non-specificity and low imaging speed, restrict its ability to determine interaction mechanisms and fine structure. To overcome these limitations, it is necessary to combine AFM with complementary techniques, such as fluorescence microscopy. Combining several complementary techniques in one instrument has increasingly become a vital approach for investigating the details of intermolecular interactions and molecular dynamics. In this review, we describe the principles of AFM and of optical microscopy techniques such as confocal microscopy and single-molecule localization microscopy, and focus on the development and use of correlative AFM and optical microscopy.
(This article belongs to the Special Issue Single-Molecule Sensing)
Show Figures

Figure 1. Schematic diagram of atomic force microscopy. In AFM, the tip-sample interactions are detected to characterize the topography and biophysical properties of the sample. Reproduced from [2] with permission.
Figure 2. Schematic diagram of confocal laser scanning microscopy. In confocal microscopy, the point illumination, the detector pinhole, and the focus in the specimen are all confocal with each other. Reproduced from [28] with permission.
Figure 3. Schematic diagram of total internal reflection fluorescence illumination. When the excitation beam travels across the coverslip-sample interface (n1 < n2) with an incident angle θ above the critical angle θc (indicated by the dashed line), the beam is totally internally reflected back into the coverslip and an evanescent field is generated in the sample. Only fluorophores located in the evanescent field are excited (indicated by the green color). Here, n1 and n2 are, respectively, the refractive indices of the sample and the glass coverslip. Reproduced and rearranged from [34] with permission.
Figure 4. The basic principle of stimulated emission depletion (STED) microscopy. In STED, the depletion beam is superimposed on the excitation beam to reduce the size of the excitation spot; the higher the depletion beam power, the smaller the excitation spot. Reproduced and rearranged from [36] with permission.
Figure 5. Imaging principle of single-molecule localization microscopy (SMLM). In SMLM, only a small subset of fluorophores is randomly switched on under appropriate illumination and localized at high resolution; after that subset is switched off, a new subset is switched on and localized. This cycle is repeated over many frames to record the localizations of individual fluorophores (a-c), and a super-resolution image is reconstructed from all successful localizations (d). Reproduced and rearranged from [36] with permission.
Figure 6. Imaging a virus binding to cells using correlative force-distance-curve-based AFM and confocal microscopy. The AFM tip was functionalized with a single EnvA-RABV (ΔG: eGFP) virus (EnvA: the glycoprotein of the avian sarcoma leukosis virus subgroup A; RABV: rabies virus). Mixed cultures of wild-type MDCK cells and TVA-mCherry (TVA: the avian tumor virus receptor A) expressing MDCK cells (red) were grown for three days. DIC image (a), fluorescence image (b), and overlay of both images (c) guiding the AFM tip to choose an area of interest (the dashed square) including both cell types. AFM topography (d) and corresponding adhesion map (e) in the dashed square, used to evaluate specific and nonspecific virus binding events. Distribution of adhesion forces of specific interactions (f,g) and nonspecific interactions (h). (i) Merged image of the topography and fluorescence images. The adhesion frequency was in line with the relative fluorescence intensity, indicating that the specific adhesion events correspond to specific binding events. Reproduced from [53] with permission.
Figure 7. Imaging mature fibronectin (FN) fibril structure using the correlative TIRFM/AFM technique. In this experiment, the F-actin (red) of rat embryonic fibroblasts (REF52) was stained to visualize cellular structure; the live REF52 cells were incubated on a homogeneous coating of Alexa 488-labelled FN (green) for 4 h and then fixed before data acquisition. (A) Superimposition of the phase contrast image and the fluorescence image, guiding the AFM tip to choose a region including FN fibrils. (B) Three correlative images (fluorescence image, AFM deflection image, and AFM topography) of the same region in the dashed square of (A). (C) Merged topography and fluorescence image showing the FN fibril structure and the cellular structure. (D) Correlating the height with the corresponding fluorescence intensity to distinguish the FN fibril structure from the cellular structure using a 30-nm height cut-off (dashed line). (E) Three-dimensional topography intuitively showing the FN fibril structure. Reproduced from [57] with permission.
Figure 8. Nanomanipulation of a 40 nm fluorescent bead using the correlative STED/AFM technique. (A) A comparison of confocal and STED images, showing that STED has higher resolution. (B) Merged STED images acquired before (red) and after (green) AFM dragging of the same area; the overlay of both colors shows stationary beads in yellow. Magnified STED images acquired before (C) and after (D) AFM dragging, and the corresponding merged image (E), clearly showing the movement made by the AFM over a subdiffraction distance. Reproduced from [64] with permission.
Figure 9. The setup of the correlative SMLM/AFM microscope. (A) Schematic of the optical path, which is aligned with the AFM cantilever. (B) Schematic of the AFM integrated with an inverted optical microscope. (C) Photograph of the correlative SMLM/AFM instrument. (D) Magnified photograph showing the AFM cantilever aligned with the optical axis. Reproduced from [15] with permission.
Article
Improving Passive Time Reversal Underwater Acoustic Communications Using Subarray Processing
by Chengbing He, Lianyou Jing, Rui Xi, Qinyuan Li and Qunfei Zhang
Sensors 2017, 17(4), 937; https://doi.org/10.3390/s17040937 - 24 Apr 2017
Cited by 12 | Viewed by 4910
Abstract
Multichannel receivers are usually employed in high-rate underwater acoustic communication to achieve spatial diversity. In this context, passive time reversal (TR) combined with a single-channel adaptive decision feedback equalizer (TR-DFE) is a low-complexity solution that achieves both spatial and temporal focusing. In this paper, we present a novel receiver structure that combines passive time reversal with a low-order multichannel adaptive decision feedback equalizer (TR-MC-DFE) to improve on the conventional TR-DFE. First, the proposed method divides the receiving array into several subarrays. Second, passive time reversal processing is performed in each subarray. Third, the multiple subarray outputs are equalized with a low-order multichannel DFE. We also investigated different channel estimation methods, including least squares (LS), orthogonal matching pursuit (OMP), and improved proportionate normalized least mean squares (IPNLMS). The bit error rate (BER) and output signal-to-noise ratio (SNR) performance of the receiver algorithms are evaluated using simulation and real data collected in a lake experiment. The source-receiver range is 7.4 km, and the data rate with quadrature phase shift keying (QPSK) signaling is 8 kbit/s. With eight hydrophones, the uncoded BER of the single-input multiple-output (SIMO) system varies between 1 × 10⁻¹ and 2 × 10⁻² for the conventional TR-DFE, and between 1 × 10⁻² and 1 × 10⁻³ for the proposed TR-MC-DFE. Compared to the conventional TR-DFE, the average output SNR of the experimental data is enhanced by 3 dB.
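The temporal-focusing idea behind passive time reversal can be sketched in a few lines: combining each channel's received signal with its time-reversed channel estimate amounts to summing the channels' autocorrelations, and the combined q-function peaks at zero lag while multipath sidelobes average down. The two toy channel impulse responses below are illustrative, not data from the experiment.

```python
# Passive time reversal focusing sketch: the combined q-function is the
# sum of per-channel autocorrelations, peaking sharply at zero lag.
def autocorr_full(h):
    # Full autocorrelation of h, length 2*len(h) - 1, zero lag at the center.
    n = len(h)
    pos = [sum(h[i] * h[i + k] for i in range(n - k)) for k in range(n)]
    return pos[:0:-1] + pos        # negative lags (mirrored) + non-negative lags

channels = [
    [1.0, 0.0, 0.6, 0.0, 0.3],     # toy CIR seen by hydrophone 1
    [0.9, 0.5, 0.0, 0.4, 0.0],     # toy CIR seen by hydrophone 2
]
q = [sum(vals) for vals in zip(*(autocorr_full(h) for h in channels))]
center = len(q) // 2               # zero-lag index
print(q.index(max(q)) == center)   # True: the focusing peak sits at zero lag
```

The subarray scheme in the paper exploits exactly this: each subarray's q-function is already partially focused, so the residual inter-symbol interference left for the multichannel DFE is short, allowing a low equalizer order.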
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1. Multichannel decision feedback equalization (MC-DFE) block diagram.
Figure 2. Conventional time reversal DFE (TR-DFE) block diagram.
Figure 3. Proposed TR-DFE block diagram.
Figure 4. Channel impulse response (CIR) snapshots in simulation.
Figure 5. Output signal-to-noise ratio (SNR) versus input SNR.
Figure 6. Transmitted signal structure.
Figure 7. Sound speed profile in the DJK15 lake experiment.
Figure 8. Examples of CIRs estimated using the least mean squares (LMS) algorithm from the DJK15 data: the 20 ms delay spread in the channel amounted to ISI spanning about L = 80 symbols.
Figure 9. The normalized q function based on CIRs estimated with the LMS algorithm: (a) q function using channels (1,2,3,4); (b) q function using channels (5,6,7,8). The top panel in each figure shows a snapshot of the q function.
Figure 10. Performance for 10 packets using different methods. Solid lines: conventional TR-DFE; dot-dashed lines: proposed method. (a) Output SNR performance; (b) mean squared error (MSE) performance.
Figure 11. Scatter plot of the equalized first packet signal: (a) conventional TR-DFE; (b) proposed TR-MC-DFE (P = 2); (c) MC-DFE.
Figure 12. Performance of the conventional TR-DFE with different subarrays: (a) subarray with hydrophones (1,2,3,4); (b) subarray with hydrophones (5,6,7,8).
Figure 13. Trade-off between subarray length and DFE order: (a) performance of TR-MC-DFE using different subarrays; (b) performance of TR-MC-DFE using different DFE orders.
Article
Probabilistic Fatigue Life Updating for Railway Bridges Based on Local Inspection and Repair
by Young-Joo Lee, Robin E. Kim, Wonho Suh and Kiwon Park
Sensors 2017, 17(4), 936; https://doi.org/10.3390/s17040936 - 24 Apr 2017
Cited by 18 | Viewed by 5108
Abstract
Railway bridges are exposed to repeated train loads, which may cause fatigue failure. As critical links in a transportation network, railway bridges are expected to survive for a target period of time, but sometimes they fail earlier than expected. To guarantee the target bridge life, maintenance activities such as local inspection and repair should be undertaken properly. However, this is a challenging task because of the many sources of uncertainty associated with aging bridges, train loads, environmental conditions, and maintenance work. Therefore, to perform optimal risk-based maintenance of railway bridges, it is essential to estimate the probabilistic fatigue life of a bridge and update this estimate based on the results of local inspection and repair. Recently, a system reliability approach was proposed to evaluate the fatigue failure risk of structural systems and update the prior risk information under various inspection scenarios. However, that approach can handle only a constant-amplitude load and has limitations in considering cyclic loads with varying amplitude levels, the major loading pattern generated by train traffic. In addition, it cannot update the prior risk information after bridges are repaired. In this research, the system reliability approach is further developed so that it can handle a varying-amplitude load and update the system-level risk of fatigue failure for railway bridges after inspection and repair. The proposed method is applied to a numerical example of an in-service railway bridge, and the effects of inspection and repair on the probabilistic fatigue life are discussed.
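For readers unfamiliar with varying-amplitude fatigue loading, the deterministic core that such probabilistic approaches build on is Palmgren-Miner damage accumulation over an S-N curve. The sketch below uses placeholder S-N constants and cycle counts, not values from the paper's bridge example, and omits the system reliability and Bayesian updating machinery entirely.

```python
# Deterministic Palmgren-Miner damage sketch for a varying-amplitude load
# spectrum. The S-N constants (A, m) and cycle counts are illustrative.
A, m = 1.0e12, 3.0                 # S-N curve: N(S) = A * S**(-m)

def miner_damage(spectrum):
    # spectrum: list of (stress_range_MPa, applied_cycles) pairs;
    # damage is the sum of applied cycles over allowable cycles per level.
    return sum(n / (A * S ** -m) for S, n in spectrum)

spectrum = [(100.0, 1.0e4), (50.0, 1.0e5)]   # two amplitude levels
D = miner_damage(spectrum)
print(D)                            # fatigue failure is predicted when D reaches 1
```

In the probabilistic setting of the paper, A, m, and the applied cycle counts become random variables, and inspection or repair results truncate or shift their distributions.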
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1. Generic stress response of a railway bridge under a passing train.
Figure 2. Test bridge (Calumet Bridge) marked as CN1 and CN2 [32,33].
Figure 3. Locations of the 10 members with the maximum stresses.
Figure 4. Stress history of Members 13 and 27.
Figure 5. Reliability indices of the 10 selected members obtained from the proposed method: (a) Members 13, 27, 8, 6 and 7; (b) Members 5, 10, 9, 17 and 4.
Figure 6. Reliability indices of the bridge system with varying correlation coefficients.
Figure 7. Reliability updating results by the proposed method for inequality cases (Scenarios 1-4 in Table 3): (a) Scenario 1; (b) Scenario 2; (c) Scenario 3; (d) Scenario 4.
Figure 8. Comparison of updated reliability indices for inequality cases (Scenarios 1-4 in Table 3): (a) Member 13; (b) Member 27; (c) System.
Figure 9. Reliability updating results by the proposed method for equality cases (Scenarios 5-7 in Table 3): (a) Scenario 5; (b) Scenario 6; (c) Scenario 7.
Figure 10. Comparison of updated reliability indices for equality cases (Scenarios 5-7 in Table 3): (a) Member 13; (b) Member 27; (c) System.
Figure 11. Reliability updating results by the proposed method for equality cases (Scenarios 8 and 9 in Table 3): (a) Scenario 8; (b) Scenario 9.
Figure 12. Comparison of updated reliability indices for equality cases (Scenarios 8 and 9 in Table 3): (a) Member 13; (b) Member 27; (c) System.
Article
Improved Line Tracing Methods for Removal of Bad Streaks Noise in CCD Line Array Image—A Case Study with GF-1 Images
by Bo Wang, Jianwei Bao, Shikui Wang, Houjun Wang and Qinghong Sheng
Sensors 2017, 17(4), 935; https://doi.org/10.3390/s17040935 - 24 Apr 2017
Cited by 5 | Viewed by 4243
Abstract
Remote sensing images provide tremendous quantities of large-scale information. Noise artifacts (stripes), however, make the images unsuitable for visualization and batch processing, so an effective restoration method is needed to make them ready for further analysis. In this paper, a new method is proposed to correct the stripes and abnormal (bad) pixels in charge-coupled device (CCD) linear array images. The method uses line tracing to limit the location of noise to a rectangular region and corrects abnormal pixels with a Lagrange polynomial algorithm. The proposed detection and restoration method was applied to Gaofen-1 satellite (GF-1) images, and its performance was evaluated by the omission ratio and false detection ratio, which reached 0.6% and 0%, respectively. The method saved 55.9% of the processing time compared with the traditional method.
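The Lagrange-polynomial repair step can be sketched as follows: once line tracing has localized a bad column, each pixel in it is re-estimated from a polynomial fitted through its valid neighbours in the same row. The 4-point stencil, toy ramp image, and column index below are illustrative assumptions; the paper's line-tracing detection stage is not reproduced.

```python
# Sketch of repairing a dead image column with Lagrange interpolation
# through the two valid neighbour columns on each side of the streak.
def lagrange_interp(xs, ys, x):
    # Evaluate the Lagrange polynomial through points (xs, ys) at x.
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def repair_column(img, bad):
    xs = [bad - 2, bad - 1, bad + 1, bad + 2]      # valid neighbour columns
    for row in img:
        row[bad] = lagrange_interp(xs, [row[c] for c in xs], bad)
    return img

img = [[float(r * c) for c in range(6)] for r in range(1, 5)]  # smooth toy image
img_bad = [row[:] for row in img]
for row in img_bad:
    row[3] = 0.0                                   # simulate a dead column
repair_column(img_bad, 3)
print(max(abs(a[3] - b[3]) for a, b in zip(img_bad, img)))  # ~0 for smooth data
```

On smooth data a cubic through four neighbours reconstructs the column almost exactly, which is why the paper prefers this over a plain neighborhood mean near edges.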
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1. Original GF-1 images: (a) farmlands; (b) mountains; (c) islands.
Figure 2. Steps of transverse stripe noise handling.
Figure 3. Image with bad streak noise.
Figure 4. Comparison of the neighborhood mean and Lagrange polynomial algorithms.
Figure 5. Results of the repair algorithm on the simulated image.
Figure 6. Results of the repair algorithm on real GF-1 images.
Article
Convergent Validity of a Wearable Sensor System for Measuring Sub-Task Performance during the Timed Up-and-Go Test
by James Beyea, Chris A. McGibbon, Andrew Sexton, Jeremy Noble and Colleen O’Connell
Sensors 2017, 17(4), 934; https://doi.org/10.3390/s17040934 - 23 Apr 2017
Cited by 53 | Viewed by 6894
Abstract
Background: The timed-up-and-go test (TUG) is one of the most commonly used tests of physical function in clinical practice and for research outcomes. Inertial sensors have been used to parse the TUG test into its composite phases (rising, walking, turning, etc.), but this approach has not been validated against an optoelectronic gold standard, and to our knowledge no studies have published the minimal detectable change of these measurements. Methods: Eleven adults performed the TUG three times each under normal and slow walking conditions and 3 m and 5 m walking distances, in a 12-camera motion analysis laboratory. An inertial measurement unit (IMU) with tri-axial accelerometers and gyroscopes was worn on the upper torso. Motion analysis marker data and IMU signals were analyzed separately to identify the six main TUG phases: sit-to-stand, 1st walk, 1st turn, 2nd walk, 2nd turn, and stand-to-sit; the absolute agreement between the two systems was analyzed using intra-class correlation (ICC, model 2) analysis. The minimal detectable change (MDC) within subjects was also calculated for each TUG phase. Results: The overall difference between TUG sub-task times determined from 3D motion capture data and from the IMU sensor data was <0.5 s. For all TUG distances and speeds, the absolute agreement was high for total TUG time and walk times (ICC > 0.90), lower for chair activity (ICC range 0.5-0.9), and typically poor for turn time (ICC < 0.4). MDC values for total TUG time ranged between 2-4 s, or 12-22% of the TUG time measurement. MDCs of the sub-task times were proportionally higher, at 20-60% of the sub-task duration. Conclusions: We conclude that a commercial IMU can be used to quantify the TUG phases with accuracy sufficient for clinical applications; however, the MDC when using inertial sensors is not necessarily improved over less sophisticated measurement tools.
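The fixed-threshold segmentation step applied to the IMU impulse profile can be sketched with plain Python: scan a normalized 0-1 profile and record rising and falling threshold crossings as sub-task transition events. The profile values and the 0.2 threshold below are illustrative assumptions; the filtering, rectification, and power-scaling pipeline from the paper is not reproduced.

```python
# Threshold-crossing event detector for a normalized sensor impulse profile.
def impulse_events(profile, threshold=0.2):
    events, on_idx = [], None
    for i, v in enumerate(profile):
        if on_idx is None and v >= threshold:
            on_idx = i                      # rising crossing: impulse starts
        elif on_idx is not None and v < threshold:
            events.append((on_idx, i))      # falling crossing: impulse ends
            on_idx = None
    if on_idx is not None:
        events.append((on_idx, len(profile)))   # impulse still on at end
    return events

profile = [0.0, 0.1, 0.8, 0.9, 0.1, 0.0, 0.3, 0.7, 0.15, 0.05]
print(impulse_events(profile))   # [(2, 4), (6, 8)]
```

Each (on, off) pair marks a sub-task transition event; the durations between successive events give the phase times that are then compared against the motion capture gold standard.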
(This article belongs to the Special Issue Wearable and Ambient Sensors for Healthcare and Wellness Applications)
Show Figures

Figure 1. Experimental set-up for the study. Within the viewing volume of the motion capture system (12 Vicon T-160 cameras), the participant performed either a 3 m or 5 m Timed Up and Go (TUG) test while signals were simultaneously captured with a Microstrain 3DM-GX1 inertial measurement unit (IMU). The IMU produced only a digital output, so a separate 3-axis accelerometer was mounted to the 3DM-GX1 to synchronize the two systems, as shown in the lower portion of the illustration.
Figure 2. TUG event time determination from motion capture markers on the right and left shoulders. Top: the X (anterior/posterior) coordinate is used to detect the start (event 1) and end of movement (event 7). Bottom: the Z (superior/inferior) coordinate is then used to detect the end of the sit-to-stand (event 2), when the shoulder marker first exceeds 95% of the standing shoulder height. Middle: the two turns are then detected using the Y (medio/lateral) coordinate, by first locating the cross-point of the left/right shoulder markers (short vertical dashed lines) and then locating the maxima and minima before and after (or vice versa, depending on turn direction) to define the start (events 3 and 5) and end (events 4 and 6) of the turns.
Figure 3. TUG event time determination from the IMU's accelerometer and gyroscope channel data. (A) When mounted on the torso, the IMU's X axis is in the superior/inferior direction, the Y axis in the medio/lateral direction, and the Z axis in the anterior/posterior direction. The gyroscope X channel therefore registered the turning motion, and the gyroscope Y and accelerometer Z and X channels registered the chair activity. (B) The raw signals were filtered with a zero-lag 4th-order Butterworth filter at 10 Hz, rectified, power-scaled, and normalized from 0-1, producing the sensor impulse profile shown in the lower portion of the figure. A fixed threshold value is then used to find the on-off times of each impulse, thus defining the sub-task transition events.
Figure 4. Relative error between the marker-based and IMU-based measurement systems for sub-task phases and total TUG time, for each of the four experimental conditions: (A) 3 m normal speed; (B) 5 m normal speed; (C) 3 m slow speed; (D) 5 m slow speed. Whiskers represent the 95% confidence intervals on the mean error; the asterisk indicates error significantly different from zero (p < 0.05).
Figure 5. Intraclass correlation between measures (ICCb) showing the level of absolute agreement and the confidence of that agreement, for each sub-task phase and total TUG time, for each of the four experimental conditions: (A) 3 m normal speed; (B) 5 m normal speed; (C) 3 m slow speed; (D) 5 m slow speed. Whiskers represent the 95% confidence intervals on the ICC; the asterisk indicates sub-tasks where the 95% confidence interval enclosed zero and was therefore non-significant (p > 0.05).
Figure 6. Intraclass correlation within measures (ICCw) showing the repeatability of sensor (light gray) and Vicon (dark gray) measurements of sub-task performance, for each of the four experimental conditions: (A) 3 m normal speed; (B) 5 m normal speed; (C) 3 m slow speed; (D) 5 m slow speed. Whiskers represent the 95% confidence intervals on the ICC; the asterisk indicates sub-tasks where the 95% confidence interval enclosed zero and was therefore non-significant (p > 0.05).
Figure 7. Minimal detectable change at 95% confidence (MDC95) for sub-task and total TUG performance measures for the sensor (light gray) and Vicon (dark gray), for each of the four experimental conditions: (A) 3 m normal speed; (B) 5 m normal speed; (C) 3 m slow speed; (D) 5 m slow speed.
Figure 8. Ratio of MDC95 to the mean duration of the sub-task and total TUG performance measures for the sensor (light gray) and Vicon (dark gray), for each of the four experimental conditions: (A) 3 m normal speed; (B) 5 m normal speed; (C) 3 m slow speed; (D) 5 m slow speed.
Article
Using Wavelet Packet Transform for Surface Roughness Evaluation and Texture Extraction
by Xiao Wang, Tielin Shi, Guanglan Liao, Yichun Zhang, Yuan Hong and Kepeng Chen
Sensors 2017, 17(4), 933; https://doi.org/10.3390/s17040933 - 23 Apr 2017
Cited by 46 | Viewed by 6247
Abstract
Surface characterization plays a significant role in evaluating surface functional performance. In this paper, we introduce the wavelet packet transform for surface roughness characterization and surface texture extraction. Surface topography is acquired by a confocal laser scanning microscope. Smooth border padding and a de-noising process are implemented to generate the roughness surface precisely. By analyzing the high-frequency components of a simulated profile, surface textures are separated using the wavelet packet transform, and the reconstructed roughness and waviness coincide well with the original ones. The wavelet packet transform is then used as a smoothing filter for texture extraction. A roughness specimen and three real engineering surfaces are also analyzed in detail. Profile and areal roughness parameters are calculated to quantify the characterization results and compared with those measured by a profile meter. Most of the obtained roughness parameters agree well with the measurement results, and the largest deviation occurs in the skewness. The relations between the roughness parameters and noise are analyzed by simulation to explain the relatively large deviations. The extracted textures reflect the surface structure and indicate the manufacturing conditions well, which is helpful for further feature recognition and matching. By using the wavelet packet transform, engineering surfaces are comprehensively characterized, including evaluating surface roughness and extracting surface texture. Full article
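The separation described here (waviness as the low-frequency wavelet content, roughness as the remainder) can be sketched with a minimal, self-contained Haar-wavelet decomposition in Python. This is a stand-in for the paper's full wavelet packet transform; the synthetic profile, level count, and units are hypothetical.

```python
import numpy as np

def haar_dwt(x):
    # Single-level Haar analysis: approximation and detail coefficients.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    # Single-level Haar synthesis (perfect reconstruction).
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def separate(profile, levels=5):
    """Split a measured profile into waviness (low-pass) and roughness."""
    a, details = profile, []
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    # Waviness: reconstruct keeping only the deepest approximation band.
    waviness = a
    for d in reversed(details):
        waviness = haar_idwt(waviness, np.zeros_like(d))
    return waviness, profile - waviness

# Synthetic profile: long-wave form plus short-wave roughness (arbitrary units).
n = 1024
xpos = np.arange(n)
profile = 5.0 * np.sin(2 * np.pi * xpos / 512) + 0.2 * np.sin(2 * np.pi * xpos / 8)
wav, rough = separate(profile)
Ra = np.mean(np.abs(rough - rough.mean()))   # arithmetic-mean roughness of the residue
```

Because the Haar filter bank is perfectly reconstructing, `wav + rough` recovers the original profile exactly (up to floating-point error), so no content is lost by the split.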
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>(<b>a</b>) Procedure of 1D wavelet packet transform (WPT); (<b>b</b>) procedure of 2D WPT.</p>
Full article ">Figure 2
<p>The simulated results for the decomposition of the surface profile: (<b>a</b>) the simulated profile; (<b>b</b>) the waviness line separated by WPT; (<b>c</b>) the reconstructed roughness line by WPT; (<b>d</b>) the errors between the primary waviness (roughness) and the reconstructed waviness (roughness) by using WPT; (<b>e</b>) the primary waviness, reconstructed waviness, reconstructed waviness by border processing (BP); (<b>f</b>) the errors of the reconstructed waviness with or without BP relative to the primary waviness.</p>
Full article ">Figure 3
<p>The characterization results of the roughness specimen: (<b>a</b>) the primary surface topography measured by the confocal laser scanning microscope (CLSM); (<b>b</b>) the roughness topography obtained by WPT; (<b>c</b>) the profile roughness parameters and their relative errors calculated by WPT and the profile meter (PM); (<b>d</b>) the areal roughness parameters and their relative errors obtained by WPT and PM; (<b>e</b>) the smoothed surface topography by WPT; (<b>f</b>) the extracted 490 feature points of surface texture.</p>
Full article ">Figure 4
<p>The analyzed results of the milled surface: (<b>a</b>) the primary surface topography by CLSM, (<b>b</b>) the roughness topography obtained by WPT; (<b>c</b>) the profile roughness parameters and their relative errors calculated by PM and WPT; (<b>d</b>) the areal roughness parameters together with relative errors obtained by WPT and PM; (<b>e</b>) the smoothed surface topography by WPT; (<b>f</b>) the extracted 1371 feature points of surface texture.</p>
Full article ">Figure 5
<p>The evaluation results of the turned surface: (<b>a</b>) the primary surface topography by CLSM; (<b>b</b>) the roughness topography obtained by WPT; (<b>c</b>) the profile roughness parameters and their relative errors calculated by PM and WPT; (<b>d</b>) the areal roughness parameters and their relative errors obtained by WPT and PM; (<b>e</b>) the smoothed surface topography by WPT; (<b>f</b>) the extracted 1126 feature points of surface texture.</p>
Full article ">Figure 6
<p>The characterized results of the grinding surface: (<b>a</b>) the primary surface topography by CLSM; (<b>b</b>) the roughness topography obtained by WPT; (<b>c</b>) the profile roughness parameters and their relative errors calculated by PM and WPT; (<b>d</b>) the areal roughness parameters and their relative errors obtained by WPT and PM; (<b>e</b>) the smoothed surface topography by WPT; (<b>f</b>) the extracted 532 feature points of surface texture.</p>
Full article ">Figure 7
<p>The analyzed results of the relations between parameters and noise: the surface topography with level-0 noise (<b>a</b>), level-1 noise (<b>b</b>), level-2 noise (<b>c</b>), and level-3 noise (<b>d</b>); (<b>e</b>) the profile roughness parameters and their relative errors; (<b>f</b>) the areal roughness parameters and their relative errors.</p>
Full article ">
1527 KiB  
Article
Implementation Issues of Adaptive Energy Detection in Heterogeneous Wireless Networks
by Iker Sobron, Iñaki Eizmendi, Wallace A. Martins, Paulo S. R. Diniz, Juan Luis Ordiales and Manuel Velez
Sensors 2017, 17(4), 932; https://doi.org/10.3390/s17040932 - 23 Apr 2017
Cited by 1 | Viewed by 4590
Abstract
Spectrum sensing (SS) enables the coexistence of non-coordinated heterogeneous wireless systems operating in the same band. Due to its computational simplicity, the energy detection (ED) technique has been widely employed in SS applications; nonetheless, conventional ED may be unreliable under environmental impairments, justifying the use of ED-based variants. Assessing ED algorithms from theoretical and simulation viewpoints relies on several assumptions and simplifications which, eventually, lead to conclusions that do not necessarily meet the requirements imposed by real propagation environments. This work addresses those problems by dealing with practical implementation issues of adaptive least mean square (LMS)-based ED algorithms. The paper proposes a new adaptive ED algorithm that uses a variable step size guaranteeing LMS convergence in time-varying environments. Several implementation guidelines are provided and, additionally, an empirical assessment and validation with software-defined-radio-based hardware are carried out. Experimental results show good performance in terms of probabilities of detection (Pd > 0.9) and false alarm (Pf ≤ 0.05) in a range of low signal-to-noise ratios around [−4, 1] dB, in both single-node and cooperative modes. The proposed sensing methodology enables seamless monitoring of the radio electromagnetic spectrum in order to provide band occupancy information for efficient usage among several wireless communications systems. Full article
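As a toy illustration of energy detection with a variable step size (not the authors' algorithm: the update rule, window length, and threshold factor below are invented for the sketch), one can track the noise floor with an LMS-style recursion whose step shrinks when the instantaneous error is large, so that signal bursts do not corrupt the noise estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_detector(samples, win=64, alpha0=0.05, thresh_factor=2.0):
    """Toy VSS-LMS energy detector: track the noise floor adaptively and flag
    windows whose energy exceeds thresh_factor * noise estimate.
    Illustrative only; all parameter names and values are hypothetical."""
    noise_est = np.mean(np.abs(samples[:win]) ** 2)  # bootstrap from first window
    decisions = []
    for k in range(0, len(samples) - win + 1, win):
        e_win = np.mean(np.abs(samples[k:k + win]) ** 2)
        err = e_win - noise_est
        # Variable step size: shrink the step when the error is large
        # (likely a signal burst rather than a noise-floor drift).
        mu = alpha0 / (1.0 + abs(err) / max(noise_est, 1e-12))
        busy = e_win > thresh_factor * noise_est
        if not busy:                      # adapt only on windows deemed idle
            noise_est += mu * err
        decisions.append(busy)
    return np.array(decisions), noise_est

# Noise-only segment, then a noisy tone burst, then noise again.
noise = rng.normal(0, 1, 4096)
burst = rng.normal(0, 1, 2048) + 2.0 * np.sin(0.3 * np.arange(2048))
x = np.concatenate([noise, burst, rng.normal(0, 1, 2048)])
dec, _ = energy_detector(x)
```

On this synthetic trace the detector stays quiet on the noise-only windows and flags the burst, because the gated, variable-step update keeps the noise-floor estimate near the true noise power.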
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2016)
Show Figures

Figure 1
<p>Step-size upper bound analysis.</p>
Full article ">Figure 2
<p>Number of iterations to decrease to <math display="inline"> <semantics> <mfrac> <mn>1</mn> <mi mathvariant="normal">e</mi> </mfrac> </semantics> </math> of the initial value as a function of signal-to-noise ratio (SNR) and <math display="inline"> <semantics> <msub> <mi>P</mi> <mi mathvariant="normal">f</mi> </msub> </semantics> </math> for hypotheses <math display="inline"> <semantics> <msub> <mi>H</mi> <mn>0</mn> </msub> </semantics> </math> and <math display="inline"> <semantics> <msub> <mi>H</mi> <mn>1</mn> </msub> </semantics> </math>.</p>
Full article ">Figure 3
<p>Flowchart of the adaptive variable-step-size-least-mean-squares (VSSLMS)-based energy detection (ED) algorithm.</p>
Full article ">Figure 4
<p>SDR framework formed by 1 primary user (PU) and 3 secondary users (SUs) which can work in stand-alone or cooperative modes. On the left-hand side, the PU antenna is marked by a yellow solid-line circle and SU antennas are marked by brown dashed-line circles.</p>
Full article ">Figure 5
<p>The depicted control panel allows for configuring <math display="inline"> <semantics> <msub> <mi>P</mi> <mi mathvariant="normal">f</mi> </msub> </semantics> </math> and the parameters of the data acquisition process. In addition, the panel shows on the left the computed detection thresholds for the selected channels.</p>
Full article ">Figure 6
<p>This control panel allows for configuring the variable step-size in (<a href="#FD12-sensors-17-00932" class="html-disp-formula">12</a>), PU/SU roles, weighting strategy between uniform and SNR weighted, ED modes (single-node/cooperation), channel frequencies, and data saving. The power spectrum and the ED decisions for the selected channels are also depicted. In the snapshot, one can observe that a PU transmission is detected at 670 MHz; cooperative mode (mode 2) with the uniform weighting strategy is active; the Universal Software Radio Peripheral (USRP) device is working as the PU transmitter on the transmission (TX) antenna (TX indicator on) and as the SU receiver on the reception (RX) antenna (Detector (DX) indicator on); VSS is active (“ceil” and “min” flags on); and, finally, data is not saved.</p>
Full article ">Figure 7
<p>Behavior of the VSSLMS-based adaptive ED algorithm according to the PU states for single-node detection at SNRs in the range of <math display="inline"> <semantics> <mrow> <mo>[</mo> <mo>-</mo> <mn>4</mn> <mo>,</mo> <mspace width="4pt"/> <mn>0</mn> <mo>.</mo> <mn>5</mn> <mo>]</mo> </mrow> </semantics> </math> dB.</p>
Full article ">Figure 8
<p>Behavior of the VSSLMS-based adaptive ED algorithm according to the PU states for single-node detection at SNRs in the range of <math display="inline"> <semantics> <mrow> <mo>[</mo> <mn>3</mn> <mo>,</mo> <mspace width="4pt"/> <mn>5</mn> <mo>]</mo> </mrow> </semantics> </math> dB.</p>
Full article ">Figure 9
<p>Performance comparison for different values of SNR for single-node detection.</p>
Full article ">
1153 KiB  
Article
Smartphone Location-Independent Physical Activity Recognition Based on Transportation Natural Vibration Analysis
by Taeho Hur, Jaehun Bang, Dohyeong Kim, Oresti Banos and Sungyoung Lee
Sensors 2017, 17(4), 931; https://doi.org/10.3390/s17040931 - 23 Apr 2017
Cited by 17 | Viewed by 6309
Abstract
Activity recognition through smartphones has been proposed for a variety of applications. The orientation of the smartphone has a significant effect on the recognition accuracy; thus, researchers generally propose using features invariant to orientation or displacement to achieve this goal. However, those features reduce the capability of the recognition system to differentiate among some specific commuting activities (e.g., bus and subway) that normally involve similar postures. In this work, we recognize those activities by analyzing the vibrations of the vehicle in which the user is traveling. We extract natural vibration features of buses and subways to distinguish between them and address the confusion that can arise because both activities are static in terms of user movement. We use the gyroscope to align the accelerometer with the direction of gravity, achieving orientation-free use of the sensor. We also propose a correction algorithm to increase the accuracy in free-living conditions and a battery-saving algorithm to consume less power without reducing performance. Our experimental results show that the proposed system can adequately recognize each activity, yielding better accuracy in the detection of bus and subway activities than existing methods. Full article
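The idea of telling vehicles apart by their natural vibration can be illustrated with a small FFT band-energy feature extractor: after removing the gravity component, vehicle vibration shows up in low-frequency bands even when the user's posture is static. The sampling rate, band edges, and synthetic "bus" signal below are assumptions for the sketch, not the paper's values.

```python
import numpy as np

FS = 50.0  # assumed accelerometer sampling rate (Hz)

def vibration_features(acc_mag, fs=FS):
    """Band energies of the zero-mean acceleration magnitude."""
    x = acc_mag - acc_mag.mean()                      # remove the gravity offset
    spec = np.abs(np.fft.rfft(x)) ** 2 / len(x)       # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    bands = [(0.5, 3.0), (3.0, 8.0), (8.0, 15.0)]     # hypothetical band edges (Hz)
    return np.array([spec[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

rng = np.random.default_rng(1)
t = np.arange(0, 20, 1 / FS)
# Standing still: gravity plus tiny sensor noise.
staying = 9.81 + 0.01 * rng.normal(size=t.size)
# A bus-like ride: same posture, but low-frequency chassis vibration added.
bus = staying + 0.3 * np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.normal(size=t.size)
f_stay, f_bus = vibration_features(staying), vibration_features(bus)
```

The low band (0.5–3 Hz here) separates the two cases sharply even though both are "static" in terms of user movement, which is the confusion the paper's vibration analysis resolves.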
(This article belongs to the Special Issue Smartphone-based Pedestrian Localization and Navigation)
Show Figures

Figure 1
<p>Overall architecture of the proposed methodology. AR: activity recognition.</p>
Full article ">Figure 2
<p>Comparison of staying, walking, and jogging using accelerometer magnitude signals.</p>
Full article ">Figure 3
<p>Comparison of staying, riding a bus, and riding a subway using accelerometer magnitude signals.</p>
Full article ">Figure 4
<p>Z-axis signal in the frequency domain for staying, riding a bus, and riding a subway.</p>
Full article ">Figure 5
<p>Distinguishing bus and subway on ground level using GPS location data.</p>
Full article ">Figure 6
<p>Flowchart of the proposed AR algorithm.</p>
Full article ">Figure 7
<p>Example of the correction algorithm.</p>
Full article ">Figure 8
<p>Example of the correction algorithm.</p>
Full article ">Figure 9
<p>Transition state diagram between activities.</p>
Full article ">Figure 10
<p>Flowchart of the proposed battery-saving algorithm. ACC: accelerometer.</p>
Full article ">Figure 11
<p>Smartphone carried in four different positions.</p>
Full article ">Figure 12
<p>Comparison of the performance of the battery-saving algorithm during different activities. (<b>a</b>) Staying; (<b>b</b>) Walking; (<b>c</b>) Riding in a bus; and (<b>d</b>) Riding in a subway.</p>
Full article ">
1230 KiB  
Article
Simulation Study of the Localization of a Near-Surface Crack Using an Air-Coupled Ultrasonic Sensor Array
by Steven Delrue, Vladislav Aleshin, Mikael Sørensen and Lieven De Lathauwer
Sensors 2017, 17(4), 930; https://doi.org/10.3390/s17040930 - 22 Apr 2017
Cited by 8 | Viewed by 5351
Abstract
The importance of Non-Destructive Testing (NDT) to check the integrity of materials in different fields of industry has increased significantly in recent years. Industry demands NDT methods that allow fast (preferably non-contact) detection and localization of early-stage defects with easy-to-interpret results, so that even a non-expert field worker can carry out the testing. The main challenge is to combine as many of these requirements as possible into one single technique. The concept of acoustic cameras, developed for low-frequency NDT, meets most of the above-mentioned requirements. These cameras make use of an array of microphones to visualize noise sources by estimating the Direction Of Arrival (DOA) of the impinging sound waves. Until now, however, because of limitations in the frequency range and the lack of integrated nonlinear post-processing, acoustic camera systems have never been used for the localization of incipient damage. The goal of the current paper is to numerically investigate the capabilities of locating incipient damage by measuring the nonlinear airborne emission of the defect using a non-contact ultrasonic sensor array. We consider a simple case of a sample with a single near-surface crack and show that, after efficient excitation of the defective sample, the nonlinear defect responses can be detected by a uniform linear sensor array. These responses are then used to determine the location of the defect by means of three different DOA algorithms. The results obtained in this study can be considered a first step towards the development of a nonlinear ultrasonic camera system, comprising the ultrasonic sensor array as the hardware and nonlinear post-processing and source localization software. Full article
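The sum-and-delay (delay-and-sum) localization evaluated in the paper can be sketched for the near-field geometry: sensors on a line at height D above the plate, candidate source points on the surface. The array pitch loosely follows the paper's 161-element, 0.5 mm-pitch setup, but the tone frequency, sampling rate, and toy propagation model below are illustrative assumptions.

```python
import numpy as np

C_AIR = 343.0      # speed of sound in air (m/s)
FS = 1.0e6         # sampling rate (Hz, assumed)
F2 = 80e3          # second-harmonic tone used for localization (assumed)

sensors_x = np.linspace(-0.04, 0.04, 161)   # 161 sensors, 0.5 mm pitch (m)
D = 0.03                                    # array height above the plate (m)

def simulate(src_x, n=2048):
    """Narrow-band second-harmonic tone as received by each sensor
    (toy model; amplitude attenuation neglected)."""
    t = np.arange(n) / FS
    r = np.hypot(sensors_x - src_x, D)                 # near-field path lengths
    return np.sin(2 * np.pi * F2 * (t[None, :] - (r / C_AIR)[:, None]))

def sum_and_delay(sig, candidates):
    """Steer the array to each candidate surface point and return the output
    power; the true source location maximizes it."""
    t = np.arange(sig.shape[1]) / FS
    power = np.empty(candidates.size)
    for i, xc in enumerate(candidates):
        tau = np.hypot(sensors_x - xc, D) / C_AIR      # candidate travel times
        # Advance each channel by its delay (fractional delay via interpolation).
        aligned = [np.interp(t, t - tk, s) for tk, s in zip(tau, sig)]
        y = np.mean(aligned, axis=0)                   # coherent only at the source
        power[i] = np.mean(y ** 2)
    return power

cands = np.linspace(-0.02, 0.02, 81)                   # candidate defect positions
P = sum_and_delay(simulate(src_x=0.005), cands)        # true defect at x = 5 mm
x_hat = cands[np.argmax(P)]
```

Because the delays are computed from the exact near-field geometry rather than a plane-wave angle, the power peak lands at the crack position itself, which is what makes this approach usable at a 3 cm standoff.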
(This article belongs to the Special Issue Sensor Technologies for Health Monitoring of Composite Structures)
Show Figures

Figure 1
<p>Illustration of the model geometry consisting of a 5-mm aluminum plate with a horizontally-oriented near-surface crack of 1 mm in length and positioned at a depth of 0.2 mm. An <math display="inline"> <semantics> <msub> <mi>A</mi> <mn>0</mn> </msub> </semantics> </math> Lamb mode is excited at the leftmost boundary and propagates through the sample. While interacting with the crack, nonlinearities are being generated, causing high-frequency ultrasonic radiation in the ambient air (i.e., nonlinear air-coupled emission). The nonlinear radiation is captured by an air-coupled ultrasonic sensor array to be used for defect localization. The sensor array is positioned 3 cm above the sample.</p>
Full article ">Figure 2
<p>Snapshot of the calculated <span class="html-italic">y</span>-component of the displacement field in the aluminum sample, clearly illustrating the presence of an <math display="inline"> <semantics> <msub> <mi>A</mi> <mn>0</mn> </msub> </semantics> </math> guided Lamb wave. The black dot indicates the location of the near-surface crack.</p>
Full article ">Figure 3
<p>(Top) Frequency spectra of the calculated normal displacement signals for a number of points on the top surface of the plate, with <span class="html-italic">x</span>-coordinates ranging from −50 mm to 50 mm. (Bottom) Normalized maximum FFT amplitude response measured along the top surface of the plate.</p>
Full article ">Figure 4
<p>(Top) Frequency spectra obtained after applying the pulse inversion technique on the calculated normal displacement signals for a number of points on the top surface of the plate, with <span class="html-italic">x</span>-coordinates ranging from −50 mm to 50 mm. (Bottom) Normalized maximum FFT amplitude response measured along the top surface of the plate, after applying the pulse inversion technique. The figures clearly illustrate the generation of a second harmonic at the position of the crack (<math display="inline"> <semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics> </math>).</p>
Full article ">Figure 5
<p>Radiation patterns in air above the aluminum plate with a near-surface crack. (Top) Fundamental frequency field showing no evidence of the presence of a crack. (Middle) Second harmonic field showing slight radiation of the harmonic into the air, starting from the crack position (<math display="inline"> <semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics> </math>). (Bottom) Second harmonic field obtained after applying the pulse inversion technique. The crack clearly behaves as a source of nonlinear emission.</p>
Full article ">Figure 6
<p>Representation of the near-field situation used in the beamforming algorithms.</p>
Full article ">Figure 7
<p>Normalized power <span class="html-italic">P</span> versus all possible defect locations. The power is calculated by applying the sum-and-delay approach on the second harmonic signals emitted by the defect. The sensor array used here contains 161 elements, ranging from <math display="inline"> <semantics> <mrow> <mi>x</mi> <mo>=</mo> <mo>−</mo> <mn>40</mn> </mrow> </semantics> </math> mm to <math display="inline"> <semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>40</mn> </mrow> </semantics> </math> mm and separated by a distance <math display="inline"> <semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>5</mn> </mrow> </semantics> </math> mm. The red, dashed line shows the result obtained without the use of pulse inversion (i.e., using the second harmonic signals from <a href="#sensors-17-00930-f005" class="html-fig">Figure 5</a>, middle figure). The black, solid line shows the result obtained when using pulse inversion (i.e., using the second harmonic signals from <a href="#sensors-17-00930-f005" class="html-fig">Figure 5</a>, bottom figure). In both cases, the exact location of the defect occurs at the maximum of the power function.</p>
Full article ">Figure 8
<p>Color-coded plot of the difference between the exact defect location and the location obtained when applying the sum-and-delay approach using sensor arrays with varying numbers of elements and centered at different coordinates. The results were obtained using the second harmonic signals emitted by the defect (without using pulse inversion). Saturated yellow regions mean that the determined location is equal to or more than 5 cm away from the exact defect location.</p>
Full article ">Figure 9
<p>Color-coded plot of the difference between the exact defect location and the location obtained when applying the sum-and-delay approach using sensor arrays with varying numbers of elements and centered at different coordinates. The results were obtained using the second harmonic signals emitted by the defect, with the use of pulse inversion.</p>
Full article ">Figure 10
<p>Angle <math display="inline"> <semantics> <msub> <mi>θ</mi> <mi>k</mi> </msub> </semantics> </math> versus sensor element number of the first of two successive sensors at which the wave impinges. The solid black line corresponds to the angles calculated using the direct linear approach. The dashed red line corresponds to the angles that are theoretically expected.</p>
Full article ">Figure 11
<p>Color-coded plot of the difference between the exact defect location and the location obtained when applying the direct linear approach using sensor arrays with varying numbers of elements and centered at different coordinates. The results were obtained using the second harmonic signals emitted by the defect, with the use of pulse inversion.</p>
Full article ">Figure 12
<p>Angle <math display="inline"> <semantics> <msub> <mi>θ</mi> <mi>k</mi> </msub> </semantics> </math> versus sensor element number of the first of four successive sensors at which the wave impinges. The solid black line corresponds to the angles calculated using the direct quadratic approach. The dashed red line corresponds to the angles that are theoretically expected.</p>
Full article ">Figure 13
<p>Color-coded plot of the difference between the exact defect location and the location obtained when applying the direct quadratic approach using sensor arrays with varying numbers of elements and centered at different coordinates. The results were obtained using the second harmonic signals emitted by the defect, with the use of pulse inversion.</p>
Full article ">Figure 14
<p>Calculated defect location <math display="inline"> <semantics> <msub> <mi>x</mi> <mrow> <mi>C</mi> <mi>r</mi> <mi>a</mi> <mi>c</mi> <mi>k</mi> </mrow> </msub> </semantics> </math> versus the distance <span class="html-italic">D</span> from the sensor array to the test surface. The sensor array used contains 161 elements, ranging from <math display="inline"> <semantics> <mrow> <mi>x</mi> <mo>=</mo> <mo>−</mo> <mn>40</mn> </mrow> </semantics> </math> mm to <math display="inline"> <semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>40</mn> </mrow> </semantics> </math> mm and separated by a distance <math display="inline"> <semantics> <mrow> <mi>d</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>5</mn> </mrow> </semantics> </math> mm. Three different approaches were used to determine the crack location: the sum-and-delay approach (crosses), the direct linear approach (circles) and the direct quadratic approach (squares). Only those crack locations that are closer than 1 cm to the exact location of the defect (i.e., <math display="inline"> <semantics> <mrow> <mi>x</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics> </math>) are shown in the graph.</p>
Full article ">
4162 KiB  
Article
A Miniature Aerosol Sensor for Detecting Polydisperse Airborne Ultrafine Particles
by Chao Zhang, Dingqu Wang, Rong Zhu, Wenming Yang and Peng Jiang
Sensors 2017, 17(4), 929; https://doi.org/10.3390/s17040929 - 22 Apr 2017
Cited by 14 | Viewed by 6076
Abstract
Counting and sizing of polydisperse airborne nanoparticles have attracted much attention owing to the increasingly widespread presence of airborne engineered nanoparticles and ultrafine particles. Here we report a miniature aerosol sensor that detects the particle size distribution of polydisperse ultrafine particles based on ion diffusion charging and electrical detection. The aerosol sensor comprises a pair of planar electrodes printed on two circuit boards assembled in parallel, where charging, precipitation, and measurement sections are integrated into one chip, which can detect aerosol particle sizes in the range of 30–500 nm and number concentrations in the range of 5 × 10² – 5 × 10⁷ /cm³. The average relative errors of the measured aerosol number concentration and particle size are estimated to be 12.2% and 13.5%, respectively. A novel measurement scheme is proposed to realize real-time detection of polydisperse particles by successively modulating the measurement voltage and deducing the particle size distribution through a smart data fusion algorithm. The effectiveness of the aerosol sensor is experimentally demonstrated via measurements of polystyrene latex (PSL) aerosol and nucleic acid aerosol, as well as sodium chloride aerosol particles. Full article
(This article belongs to the Section Chemical Sensors)
Show Figures

Figure 1
<p>Working principle of aerosol sensor. (<b>a</b>) Schematic overview of the aerosol sensing chip with ion diffusion charging; (<b>b</b>) The motion behavior of the charged aerosol particles and gas ions in precipitation section; (<b>c</b>) The motion behavior of the charged aerosol particles in measurement section; (<b>d</b>) Histogram of particle number concentration versus particle size; (<b>e</b>) The structural diagram of aerosol sensing chip.</p>
Full article ">Figure 2
<p>System for aerosol sensor. (<b>a</b>) A prototype of a packaged aerosol sensor system, including an aerosol sensing chip, a homemade signal conditioning circuit, and a micro pump assembled in a case; (<b>b</b>) The signal flow of the aerosol sensor system.</p>
Full article ">Figure 3
<p>Structure of the neural network modeling the relationship between the aerosol sensor outputs and the polydisperse aerosol particle size distribution.</p>
Full article ">Figure 4
<p>Experimental set up for testing the aerosol sensor.</p>
Full article ">Figure 5
<p>Comparison of measured particle number concentration (<b>left</b>) and particle size (<b>right</b>) with reference data for monodisperse aerosols.</p>
Full article ">Figure 6
<p>Comparison of NaCl particle size distribution measured by the aerosol sensor and the reference data for polydisperse NaCl aerosol. Error bars from three repeated measurements are included. Maximum sizes of polydisperse NaCl particles are (<b>a</b>) 37 nm, (<b>b</b>) 58 nm, (<b>c</b>) 100 nm, (<b>d</b>) 215 nm, and (<b>e</b>) 368 nm, respectively.</p>
Full article ">Figure 7
<p>Relative errors of total number concentration in different aerosol particle size interval of (<b>a</b>) 0 to 50 nm, (<b>b</b>) 50 to 100 nm, (<b>c</b>) 100 to 150 nm, (<b>d</b>) 150 to 200 nm, (<b>e</b>) 200 to 250 nm, (<b>f</b>) 250 to 300 nm. Error bars from three repeated measurements are included.</p>
Full article ">
337 KiB  
Article
A Weighted Belief Entropy-Based Uncertainty Measure for Multi-Sensor Data Fusion
by Yongchuan Tang, Deyun Zhou, Shuai Xu and Zichang He
Sensors 2017, 17(4), 928; https://doi.org/10.3390/s17040928 - 22 Apr 2017
Cited by 84 | Viewed by 6671
Abstract
In real applications, how to measure the degree of uncertainty of sensor reports before applying sensor data fusion is a big challenge. In this paper, within the framework of Dempster–Shafer evidence theory, a weighted belief entropy based on Deng entropy is proposed to quantify the uncertainty of uncertain information. The weight of the proposed belief entropy is based on the relative scale of a proposition with regard to the frame of discernment (FOD). Compared with some other uncertainty measures in the Dempster–Shafer framework, the new measure focuses on the uncertain information represented not only by the mass function, but also by the scale of the FOD, which means less information loss in information processing. After that, a new multi-sensor data fusion approach based on the weighted belief entropy is proposed. The rationality and superiority of the new multi-sensor data fusion method are verified through an experiment on artificial data and an application to fault diagnosis of a motor rotor. Full article
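The Deng entropy underlying the proposed measure is straightforward to compute over a basic probability assignment (BPA). The sketch below implements it together with one plausible reading of the "relative scale" weight (|A|/|X|); the paper's exact weighting formula may differ, so treat the weighted variant as an assumption.

```python
from math import log2

def deng_entropy(m):
    """Deng entropy of a mass function m, given as {frozenset: mass}.
    Each focal element A contributes -m(A) * log2(m(A) / (2**|A| - 1))."""
    return -sum(v * log2(v / (2 ** len(A) - 1)) for A, v in m.items() if v > 0)

def weighted_belief_entropy(m, frame):
    """Weighted variant: each term is scaled by |A|/|X|, one plausible reading
    of the abstract's 'relative scale' weight (the paper's form may differ)."""
    n = len(frame)
    return -sum(v * (len(A) / n) * log2(v / (2 ** len(A) - 1))
                for A, v in m.items() if v > 0)

frame = frozenset({'a', 'b', 'c'})                        # frame of discernment X
m = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}   # example BPA
E_d = deng_entropy(m)
E_w = weighted_belief_entropy(m, frame)
```

For a Bayesian BPA (all singletons) Deng entropy reduces to Shannon entropy, which is a quick sanity check on the implementation; the |A|/|X| scaling always shrinks each term, so the weighted value is bounded above by the Deng value.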
(This article belongs to the Special Issue Soft Sensors and Intelligent Algorithms for Data Fusion)
Show Figures

Figure 1
<p>Comparison between the weighted belief entropy and other uncertainty measures.</p>
Full article ">Figure 2
<p>The flow chart of sensor data fusion based on the weighted belief entropy.</p>
Full article ">
3928 KiB  
Article
The Use of IMMUs in a Water Environment: Instrument Validation and Application of 3D Multi-Body Kinematic Analysis in Medicine and Sport
by Anna Lisa Mangia, Matteo Cortesi, Silvia Fantozzi, Andrea Giovanardi, Davide Borra and Giorgio Gatta
Sensors 2017, 17(4), 927; https://doi.org/10.3390/s17040927 - 22 Apr 2017
Cited by 21 | Viewed by 6439
Abstract
The aims of the present study were the instrumental validation of inertial-magnetic measurement units (IMMUs) in water, and the description of their use in clinical and sports aquatic applications using customized 3D multi-body models. First, several tests were performed to map the magnetic field in the swimming pool and to identify the best volume for experimental test acquisition with a mean dynamic orientation error lower than 5°. Subsequently, gait and swimming analyses were explored in terms of spatiotemporal and joint kinematics variables. The extraction of only spatiotemporal parameters highlighted several critical issues, and the joint kinematic information was shown to be an added value for both rehabilitative and sport training purposes. Furthermore, 3D joint kinematics obtained with the IMMUs provided quantitative information similar to that of more expensive and bulky systems, but with a simpler and faster setup preparation, a less time-consuming processing phase, and the possibility to record and analyze a higher number of strides/strokes without the limitations imposed by cameras. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technologies in Italy 2016)
Show Figures

Figure 1
<p>Representation of the wooden bar with the 7 IMMUs (IMMU1–IMMU7) positioned as shown in the figure.</p>
Full article ">Figure 2
<p>Experimental setup. (<b>a</b>) The structure (in gray) is rigidly connected and fixed and has the following dimensions: 60 cm × 30 cm × 138 cm. The box is rigidly connected to the central bar (in blue) but free to rotate around the vertical axis. The IMMUs are shown in one of the tested configurations. The whole structure is made of plastic; (<b>b</b>) IMMUs positioned to test all three axis rotations.</p>
Full article ">Figure 3
<p>Relative static orientation error (deg) due to the swimming pool side and bottom measured by the IMMU at 250 cm from the beginning of the lane and at different distances (test B) from the pool side with respect to IMMU 1, the furthest IMMU from the bottom of the pool.</p>
Full article ">Figure 4
<p>Relative static orientation errors (deg) averaged over all the IMMUs (IMMU1–IMMU6) and over the three repetitions of tests C and D. The orientation error is shown as a function of the distance from the metal objects in the DL (red lines, test C) and UW (blue lines, test D) settings.</p>
Full article ">Figure 5
<p>Orientation error (range of the Euler angles) for the three rotational directions, for the two different velocities and for the two settings (DL and UW). Mean values and SD are represented.</p>
Full article ">Figure 6
<p>Angular kinematic patterns of the lower limb joints (hip, knee, and ankle) in the sagittal plane. Median, 25th, and 75th percentiles for all the participants for the DL (red solid line and shaded area) and UW (blue solid line and shaded area) settings. The gait cycles are normalized in time.</p>
Full article ">Figure 7
<p>Angular kinematic patterns of the lower limb joints (hip, knee, and ankle) in the sagittal plane. Median values, 25th, and 75th percentiles for all the participants for the young healthy participants (blue solid line and shaded area) and elderly healthy participants (green solid line and shaded area) in the UW settings. The gait cycles are normalized in time.</p>
Full article ">Figure 8
<p>Angular kinematic patterns of the lower limb joints in the sagittal plane. Median, 25th, and 75th percentiles in UW setting, of the injured (red solid line and red stripes area) and the contralateral (blue solid line and light blue shaded area) limbs of the participants. The healthy young adult patterns are indicated with the black line and gray shaded area. The gait cycles are normalized in time.</p>
Full article ">Figure 9
<p>Examples of the gyroscope data output recorded from the right forearm for the six athletes (A1–A6). The three time events, <math display="inline"> <semantics> <mrow> <msub> <mi>T</mi> <mrow> <mi>P</mi> <mi>U</mi> <mi>L</mi> </mrow> </msub> </mrow> </semantics> </math> as the red circle, <math display="inline"> <semantics> <mrow> <msub> <mi>T</mi> <mrow> <mi>P</mi> <mi>U</mi> <mi>S</mi> </mrow> </msub> </mrow> </semantics> </math> as the yellow square, and <math display="inline"> <semantics> <mrow> <msub> <mi>T</mi> <mrow> <mi>R</mi> <mi>E</mi> <mi>C</mi> </mrow> </msub> </mrow> </semantics> </math> as the purple triangle, are shown in the figure.</p>
Full article ">Figure 10
<p>Shoulder and Elbow joint angle median values on all the stroke cycles, computed using IMMUs relative to the sagittal, frontal, and transversal plane of front-crawl swimming for the six participants. The stroke cycles are normalized in time. Simulated front-crawl bands are shown for comparison.</p>
Full article ">
2615 KiB  
Article
Experimental Demonstration and Circuitry for a Very Compact Coil-Only Pulse Echo EMAT
by Dirk Rueter
Sensors 2017, 17(4), 926; https://doi.org/10.3390/s17040926 - 22 Apr 2017
Cited by 9 | Viewed by 7603
Abstract
This experimental study demonstrates for the first time a solid-state circuitry and design for a simple compact copper coil (without an additional bulky permanent magnet or bulky electromagnet) as a contactless electromagnetic acoustic transducer (EMAT) for pulse echo operation at MHz frequencies. A [...] Read more.
This experimental study demonstrates for the first time a solid-state circuitry and design for a simple compact copper coil (without an additional bulky permanent magnet or bulky electromagnet) as a contactless electromagnetic acoustic transducer (EMAT) for pulse echo operation at MHz frequencies. A pulsed ultrasound emission into a metallic test object is electromagnetically excited by an intense MHz burst at up to 500 A through the 0.15 mm filaments of the transducer. Immediately thereafter, a smoother and quasi “DC-like” current of 100 A is applied for about 1 ms and allows an echo detection. The ultrasonic pulse echo operation for a simple, compact, non-contacting copper coil is new. Application scenarios for compact transducer techniques include very narrow and hostile environments, in which, e.g., quickly moving metal parts must be tested with only one, non-contacting ultrasound shot. The small transducer coil can be operated remotely with a cable connection, separate from the much bulkier supply circuitry. Several options for more technical and fundamental progress are discussed. Full article
(This article belongs to the Special Issue Acoustic Sensing and Ultrasonic Drug Delivery)
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) A conventional EMAT includes a flat induction coil and a permanent magnet. The RF induction coil induces eddy currents (small circles) within the surface sheet (δ) of the nearby (distance g) metallic test object. The static field (red arrows) from the permanent magnet (in distance g + d) and the eddy currents affect the RF Lorentz forces (black arrows) within the skin sheet and thus excite ultrasonic vibrations in the test metal. Notably, the induction coil also induces eddy currents in the metallic parts of the magnet, which in return reduce the intended eddy currents in the target. (<b>b</b>) Without a permanent magnet, this decrease in the eddy currents in the target is absent. Here, the RF magnetic field of the induction coil itself—together with the eddy currents in the surface sheet—excite the ultrasonic vibrations in the test metal. Furthermore, the projected field from the induction coil and the spatial distribution of the eddy currents in the target are geometrically matched.</p>
Full article ">Figure 2
<p>(<b>a</b>) The circuitry is based on seven blocks, each with a distinct function and influence. Pulsed and high currents/voltages occur in a serial connection from blocks 1 to 6. Block 4 is a small ultrasound transducer coil with (undesired but unavoidable) resistance of 0.25 Ω. The transducer can be connected to the circuitry with a cable. (<b>b</b>) The detailed circuitry contains only two relevant transistors: the insulated gate bipolar transistor (IGBT) T1 works as a fast high-power switch (block 1) and T2 as a simple amplifier (block 7) for the ultrasound echoes.</p>
Full article ">Figure 3
<p>Practical appearance of the setup. Parts are labeled according to <a href="#sensors-17-00926-f002" class="html-fig">Figure 2</a>. The small spiral coil was made from a 0.15 mm copper wire (magnified inset in the left bottom corner) and was repetitively pulsed up to 500 A from the circuitry. The coil magnetically excited and received megahertz ultrasound pulses from the test specimen, separated by a small air gap (typ. &lt;1 mm).</p>
Full article ">Figure 4
<p>The measured currents on the microsecond-scale approach several hundred amperes immediately after switching. Function block 5 (L2 and C3) imprints a distinct 1.2 MHz modulation in the current pulse, directly effective for ultrasound emission. After 20 µs, the current decays to an almost constant “DC” value of about 100 A, fed from the low-voltage bank block 3. A low-voltage slow discharge alone, without contribution from the high-voltage bank 2, is shown for comparison. With an additional cable connection, the modulation frequency and the peak amplitudes are reduced to 1.05 MHz and 400 A.</p>
Full article ">Figure 5
<p>Characteristic current pulse and recorded ultrasound echoes from a single pulse (with numerical offsets for better representation) from the 30 cm aluminum rod and over a prolonged time period (600 µs). The frequencies of the echo signals cannot be recognized in this time scale; however, the actual periodicity of the echoes is closely related to the imprinted modulations in <a href="#sensors-17-00926-f004" class="html-fig">Figure 4</a>. After quick relaxation of the intense excitation pulse, the 100 A “DC” bias current persists much longer, over hundreds of microseconds. The amplifier for the echo signal requires recovery time and relaxation until about 100 µs after the power pulse (artifacts at 120–160 µs). After 220 µs and 440 µs, a distinct first and second 1.2 MHz echo from the aluminum rod (dark blue: distance g = 0.2 mm and no connection cable used) is obtained, with amplitudes of up to 300 mV. A weaker (green) and (due to the dispersion of the group velocity in the aluminum rod) somewhat delayed echo pattern with about 1 MHz is obtained when a 1 m connection cable is used. The coupling efficiency of EMATs is strongly affected by distance: when the gap g between the aluminum rod and the transducer coil is increased to g = 0.5 mm or g = 3 mm (light blue and black), the echo signal is significantly reduced or even fades away.</p>
Full article ">Figure 6
<p>Pulse echo signals (non-averaged) on a shorter time scale. Echoes at 1 MHz are obtained from a mono-modal aluminum tube at different air gaps g between the coil and the target. The transducer coil is operated through a 50 cm thin coaxial cable. The first echo from the 6 cm tube is observed after 20–25 µs: the emission pulse starts at 10 µs, and the maximum of a clear echo is obtained at 35 µs. The early echoes reveal a characteristic width of about 5 µs. A broadening (dispersion of the tube) is observable for the later echoes toward 130 µs. Again, the coupling efficiency and the signal intensity are strongly affected by the air gap, and the echo has become small at g = 1.2 mm. The qualitative signal properties (frequency, time, and width) are virtually not affected by the air gap.</p>
Full article ">
2584 KiB  
Article
Imaging of the Finger Vein and Blood Flow for Anti-Spoofing Authentication Using a Laser and a MEMS Scanner
by Jaekwon Lee, Seunghwan Moon, Juhun Lim, Min-Joo Gwak, Jae Gwan Kim, Euiheon Chung and Jong-Hyun Lee
Sensors 2017, 17(4), 925; https://doi.org/10.3390/s17040925 - 22 Apr 2017
Cited by 18 | Viewed by 8058
Abstract
A new authentication method employing a laser and a scanner is proposed to improve image contrast of the finger vein and to extract blood flow pattern for liveness detection. A micromirror reflects a laser beam and performs a uniform raster scan. Transmissive vein [...] Read more.
A new authentication method employing a laser and a scanner is proposed to improve image contrast of the finger vein and to extract blood flow pattern for liveness detection. A micromirror reflects a laser beam and performs a uniform raster scan. Transmissive vein images were obtained, and compared with those of an LED. Blood flow patterns were also obtained based on speckle images in perfusion and occlusion. Curvature ratios of the finger vein and blood flow intensities were found to be nearly constant, regardless of the vein size, which validated the high repeatability of this scheme for identity authentication with anti-spoofing. Full article
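The figure captions below refer to detecting maxima of curvature along the vein cross-section. On a 1-D intensity profile, where a vein appears as a dark dip, that detection step can be sketched as follows (the synthetic profile and its parameters are illustrative only, not the authors' pipeline):

```python
import math

def curvature(profile):
    # kappa(x) = f''(x) / (1 + f'(x)^2)^(3/2), via central differences.
    k = [0.0] * len(profile)
    for i in range(1, len(profile) - 1):
        d1 = (profile[i + 1] - profile[i - 1]) / 2.0
        d2 = profile[i + 1] - 2.0 * profile[i] + profile[i - 1]
        k[i] = d2 / (1.0 + d1 * d1) ** 1.5
    return k

# Synthetic cross-section: a dark (low-intensity) vein dip centred at x = 40.
center, sigma = 40, 5.0
profile = [100.0 - 60.0 * math.exp(-((x - center) ** 2) / (2 * sigma ** 2))
           for x in range(81)]

kappa = curvature(profile)
# The vein centre shows up as the maximum of positive curvature.
vein_pos = max(range(len(kappa)), key=lambda i: kappa[i])
```

Repeating this over every cross-section of the image yields the dotted maxima shown on the vein images in Figure 2.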
(This article belongs to the Section Biosensors)
Show Figures

Figure 1

Figure 1
<p>Finger vein imaging system using a near-infrared (NIR) laser and a Micro Electro Mechanical Systems (MEMS) scanner.</p>
Full article ">Figure 2
<p>IR sensor images (12.5 mm × 12.5 mm) of an index finger vein using (<b>a</b>) the LED array, and (<b>b</b>) the laser with the MEMS scanner; the circular dots represent the detected maximum values of the curvature on the vein image.</p>
Full article ">Figure 3
<p>Intensity profile measured along segments A-A’.</p>
Full article ">Figure 4
<p>Curvature and score values in the cross-sectional profile for (<b>a</b>) the LED array, and (<b>b</b>) the VILS.</p>
Full article ">Figure 5
<p>Images of the finger vein in (<b>a</b>) perfusion and (<b>b</b>) occlusion, and images of the blood flow in (<b>c</b>) perfusion and (<b>d</b>) occlusion.</p>
Full article ">Figure 6
<p>Curvatures of (<b>a</b>) finger vein and (<b>b</b>) blood flow pattern in the cross-sectional profile (B-B’).</p>
Full article ">
3015 KiB  
Article
State Estimation Using Dependent Evidence Fusion: Application to Acoustic Resonance-Based Liquid Level Measurement
by Xiaobin Xu, Zhenghui Li, Guo Li and Zhe Zhou
Sensors 2017, 17(4), 924; https://doi.org/10.3390/s17040924 - 21 Apr 2017
Cited by 5 | Viewed by 4046
Abstract
Estimating the state of a dynamic system via noisy sensor measurement is a common problem in sensor methods and applications. Most state estimation methods assume that measurement noise and state perturbations can be modeled as random variables with known statistical properties. However, in [...] Read more.
Estimating the state of a dynamic system via noisy sensor measurement is a common problem in sensor methods and applications. Most state estimation methods assume that measurement noise and state perturbations can be modeled as random variables with known statistical properties. However, in some practical applications, engineers can only get the range of the noises, instead of their precise statistical distributions. Hence, in the framework of Dempster-Shafer (DS) evidence theory, a novel state estimation method is presented that fuses dependent evidence generated from the state equation, the observation equation and the actual observations of the system states under bounded noises. It can be iteratively implemented to provide state estimation values calculated from the fusion results at every time step. Finally, the proposed method is applied to a low-frequency acoustic resonance level gauge to obtain high-accuracy measurement results. Full article
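For background, the basic Dempster combination of two independent mass functions, the building block that this paper extends to dependent evidence, can be sketched as follows (the mass assignments are invented for illustration):

```python
def dempster_combine(m1, m2):
    # Dempster's rule for two mass functions whose focal elements are frozensets.
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    norm = 1.0 - conflict                    # renormalise over non-empty sets
    return {s: v / norm for s, v in combined.items()}

A = frozenset({"A"})
B = frozenset({"B"})
AB = frozenset({"A", "B"})
m1 = {A: 0.5, AB: 0.5}
m2 = {A: 0.4, B: 0.3, AB: 0.3}
fused = dempster_combine(m1, m2)
```

Fusing the two bodies of evidence concentrates mass on the hypothesis they jointly support; the paper's contribution lies in handling evidence that is dependent, which plain Dempster combination assumes away.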
Show Figures

Figure 1

Figure 1
<p>The relationships of two pieces of dependent evidence.</p>
Full article ">Figure 2
<p>(<b>a</b>) Triangular possibility distributions of state noises, (<b>b</b>) Triangular possibility distributions of observation noises.</p>
Full article ">Figure 3
<p>Flowchart of state estimation iterative algorithm.</p>
Full article ">Figure 4
<p>The possibility distribution of state noise and its evidence construction.</p>
Full article ">Figure 5
<p>Structure of a level gauge.</p>
Full article ">Figure 6
<p>Waveform graph (<span class="html-italic">L</span> = 4.6 m).</p>
Full article ">Figure 7
<p>Resonance frequencies and amplitudes (<span class="html-italic">L</span> = 4.6 m).</p>
Full article ">Figure 8
<p>Possibility distribution <span class="html-italic">π<sub>v</sub></span> of state noise <span class="html-italic">v</span>.</p>
Full article ">Figure 9
<p>Possibility distribution <span class="html-italic">π<sub>w</sub></span> of observation noise <span class="html-italic">w</span>.</p>
Full article ">Figure 10
<p>(<b>a</b>) Estimation results of resonance frequencies, (<b>b</b>) Absolute values of frequency estimation errors.</p>
Full article ">Figure 11
<p>(<b>a</b>) Estimation results of level <span class="html-italic">L</span>, (<b>b</b>) Absolute values of length estimation errors.</p>
Full article ">
2516 KiB  
Article
A Novel Method of Localization for Moving Objects with an Alternating Magnetic Field
by Xiang Gao, Shenggang Yan and Bin Li
Sensors 2017, 17(4), 923; https://doi.org/10.3390/s17040923 - 21 Apr 2017
Cited by 21 | Viewed by 4264
Abstract
Magnetic detection technology has wide applications in the fields of geological exploration, biomedical treatment, wreck removal and localization of unexploded ordnance. A large number of methods have been developed to locate targets with static magnetic fields; however, the relation between the problem of [...] Read more.
Magnetic detection technology has wide applications in the fields of geological exploration, biomedical treatment, wreck removal and localization of unexploded ordnance. A large number of methods have been developed to locate targets with static magnetic fields; however, the relation between the problem of localizing moving objects with alternating magnetic fields and localization with a static magnetic field is rarely studied. A novel method of target localization based on coherent demodulation was proposed in this paper. The problem of localizing moving objects with an alternating magnetic field was transformed into localization with a static magnetic field. The Levenberg-Marquardt (L-M) algorithm was applied to calculate the position of the target from magnetic field data measured by a single three-component magnetic sensor. Theoretical simulation and experimental results demonstrate the effectiveness of the proposed method. Full article
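The Levenberg-Marquardt step can be sketched for a simplified setting: fitting a 2-D source position to field-magnitude samples under an inverse-cube falloff. The field model, sensor layout, source strength and starting guess below are illustrative assumptions, not the paper's dipole formulation:

```python
import math

def residuals(p, sensors, meas, strength):
    # Residual between modelled field magnitude (inverse-cube) and measurement.
    out = []
    for (sx, sy), b in zip(sensors, meas):
        d = max(math.hypot(p[0] - sx, p[1] - sy), 1e-9)
        out.append(strength / d ** 3 - b)
    return out

def lm_fit(p0, sensors, meas, strength, iters=200, lam=1e-2):
    # Minimal Levenberg-Marquardt for a 2-parameter nonlinear least-squares fit.
    p = list(p0)
    cost = sum(r * r for r in residuals(p, sensors, meas, strength))
    for _ in range(iters):
        eps = 1e-6
        r0 = residuals(p, sensors, meas, strength)
        J = []  # Jacobian columns by central differences (n x 2)
        for j in range(2):
            pp = list(p); pp[j] += eps
            pm = list(p); pm[j] -= eps
            rp = residuals(pp, sensors, meas, strength)
            rm = residuals(pm, sensors, meas, strength)
            J.append([(a - b) / (2 * eps) for a, b in zip(rp, rm)])
        # Damped normal equations (J^T J + lam*I) delta = -J^T r, 2x2 Cramer solve.
        a11 = sum(x * x for x in J[0]) + lam
        a22 = sum(x * x for x in J[1]) + lam
        a12 = sum(x * y for x, y in zip(J[0], J[1]))
        g1 = sum(x * r for x, r in zip(J[0], r0))
        g2 = sum(x * r for x, r in zip(J[1], r0))
        det = a11 * a22 - a12 * a12
        if abs(det) < 1e-15:
            break
        d1 = (-g1 * a22 + g2 * a12) / det
        d2 = (-g2 * a11 + g1 * a12) / det
        trial = [p[0] + d1, p[1] + d2]
        new_cost = sum(r * r for r in residuals(trial, sensors, meas, strength))
        if new_cost < cost:
            p, cost, lam = trial, new_cost, lam / 3.0  # accept, trust model more
        else:
            lam *= 3.0                                  # reject, damp harder
    return p

# Synthetic example: true source at (3, 4), strength 1000 (arbitrary units).
sensors = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 0)]
true_p = (3.0, 4.0)
meas = [1000.0 / math.hypot(true_p[0] - sx, true_p[1] - sy) ** 3
        for sx, sy in sensors]
est = lm_fit([2.0, 3.0], sensors, meas, 1000.0)
```

Starting from a coarse initial guess, the damped Gauss-Newton iteration converges to the synthetic source position.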
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Specific solution for calculating the initial phase <math display="inline"> <semantics> <mi mathvariant="sans-serif">ρ</mi> </semantics> </math>.</p>
Full article ">Figure 2
<p>Coordinate of relative position between target and sensor.</p>
Full article ">Figure 3
<p>Overhead view of the position between the radiation source and magnetic sensor.</p>
Full article ">Figure 4
<p>Alternating magnetic field data of the radiation source.</p>
Full article ">Figure 5
<p>Amplitude curve of the alternating magnetic field data obtained by coherent demodulation.</p>
Full article ">Figure 6
<p>Simulation location results.</p>
Full article ">Figure 7
<p>Difference between localization results and actual values in the three directions.</p>
Full article ">Figure 8
<p>(<b>a</b>) Experimental tests; (<b>b</b>) Overhead view of experimental tests.</p>
Full article ">Figure 9
<p>(<b>a</b>) Three-component fluxgate sensor of HS-MS-FG-3-LN; (<b>b</b>) Data acquisition card of NI 9239.</p>
Full article ">Figure 10
<p>The structure of the experimental system.</p>
Full article ">Figure 11
<p>Magnetic field data acquired by three-component fluxgate sensor.</p>
Full article ">Figure 12
<p>Alternating magnetic field data processed through a high-pass filter.</p>
Full article ">Figure 13
<p>Amplitude curve of three component magnetic field data.</p>
Full article ">Figure 14
<p>Comparison of localization results and actual values in the three directions for the experimental tests.</p>
Full article ">
1389 KiB  
Article
Zero-Sum Matrix Game with Payoffs of Dempster-Shafer Belief Structures and Its Applications on Sensors
by Xinyang Deng, Wen Jiang and Jiandong Zhang
Sensors 2017, 17(4), 922; https://doi.org/10.3390/s17040922 - 21 Apr 2017
Cited by 42 | Viewed by 6341
Abstract
The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, sometimes the payoffs received by players may be [...] Read more.
The zero-sum matrix game is one of the most classic game models, and it is widely used in many scientific and engineering fields. In the real world, due to the complexity of the decision-making environment, the payoffs received by players may sometimes be inexact or uncertain, which requires that the matrix game model be able to represent and deal with imprecise payoffs. To meet such a requirement, this paper develops a zero-sum matrix game model with Dempster–Shafer belief structure payoffs, which effectively represents the ambiguity involved in the payoffs of a game. Then, a decomposition method is proposed to calculate the value of such a game, which is also expressed with belief structures. Moreover, to address the potentially computation-intensive nature of the proposed decomposition method, a Monte Carlo simulation approach is presented as an alternative. Finally, the proposed zero-sum matrix game with payoffs of Dempster–Shafer belief structures is applied illustratively to sensor selection and intrusion detection in sensor networks, which shows its effectiveness and application process. Full article
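The idea can be illustrated at toy scale: a 2 × 2 zero-sum game has a closed-form value, and imprecise payoffs can be handled by Monte Carlo sampling in the spirit of the paper's simulation approach. The sketch below uses made-up payoff intervals, not the authors' belief-structure construction:

```python
import random

def solve_2x2(a, b, c, d):
    # Value of the zero-sum game [[a, b], [c, d]] for the row (maximising) player.
    maximin = max(min(a, b), min(c, d))      # best guaranteed pure row payoff
    minimax = min(max(a, c), max(b, d))      # best pure column response
    if maximin == minimax:                   # saddle point: pure strategies optimal
        return maximin
    return (a * d - b * c) / (a - b - c + d) # mixed-strategy value

# Imprecise payoffs given as intervals; sample games to estimate the value's mean.
intervals = [[(2, 4), (0, 2)], [(-1, 1), (1, 3)]]
random.seed(0)
values = []
for _ in range(5000):
    a, b, c, d = (random.uniform(lo, hi) for row in intervals for (lo, hi) in row)
    values.append(solve_2x2(a, b, c, d))
mean_value = sum(values) / len(values)
```

Collecting the sampled values into an empirical CDF gives the kind of value distribution the figures above plot for belief-structure payoffs.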
(This article belongs to the Special Issue Soft Sensors and Intelligent Algorithms for Data Fusion)
Show Figures

Figure 1

Figure 1
<p>Example of a D-S belief structure’s CDF with respect to its associated variable <span class="html-italic">x</span>.</p>
Full article ">Figure 2
<p>The CDF of the value of the game given in <a href="#sensors-17-00922-t001" class="html-table">Table 1</a> with respect to the associated variable <span class="html-italic">x</span>.</p>
Full article ">Figure 3
<p>Obtained CDF of the value of the game shown in <a href="#sensors-17-00922-t001" class="html-table">Table 1</a> with respect to the associated variable <span class="html-italic">x</span> by using the LHS-based Monte Carlo simulation while sampling size <span class="html-italic">T</span> = 10,000.</p>
Full article ">Figure 4
<p>CDF of the value of the game shown in <a href="#sensors-17-00922-t002" class="html-table">Table 2</a> with respect to the associated variable <span class="html-italic">x</span> by using the LHS-based Monte Carlo simulation with different sampling sizes <span class="html-italic">T</span>.</p>
Full article ">Figure 5
<p>The detection ranges of Sensors A and B in the form of crisp numbers.</p>
Full article ">Figure 6
<p>Two cases in sensor selection for submarine detection.</p>
Full article ">Figure 7
<p>Squared detection ranges for Sensors A and B in the form of crisp numbers.</p>
Full article ">Figure 8
<p>The detection ranges of Sensors A and B in the form of D-S belief structures.</p>
Full article ">Figure 9
<p>CDFs of the values of sensor games given in <a href="#sensors-17-00922-f008" class="html-fig">Figure 8</a> with respect to the associated variable <span class="html-italic">x</span>.</p>
Full article ">Figure 10
<p>Squared detection ranges for Sensors A and B in the form of D-S belief structures.</p>
Full article ">Figure 11
<p>CDFs of the values of sensor games given in <a href="#sensors-17-00922-f010" class="html-fig">Figure 10</a> with respect to the associated variable <span class="html-italic">x</span>.</p>
Full article ">Figure 12
<p>The minimum security level (MSL) of the sensor network with respect to parameters <math display="inline"> <semantics> <msub> <mi>γ</mi> <mn>1</mn> </msub> </semantics> </math> and <math display="inline"> <semantics> <msub> <mi>γ</mi> <mn>2</mn> </msub> </semantics> </math> in terms of <math display="inline"> <semantics> <msubsup> <mi>U</mi> <mrow> <mi>d</mi> <mi>e</mi> <mi>f</mi> <mi>e</mi> <mi>n</mi> <mi>d</mi> <mi>e</mi> <mi>r</mi> </mrow> <mo>′</mo> </msubsup> </semantics> </math>.</p>
Full article ">Figure 13
<p>The value of fluctuation coefficient <span class="html-italic">Q</span> with respect to different mean values of <math display="inline"> <semantics> <msub> <mi>γ</mi> <mn>1</mn> </msub> </semantics> </math> and <math display="inline"> <semantics> <msub> <mi>γ</mi> <mn>2</mn> </msub> </semantics> </math>.</p>
Full article ">
2865 KiB  
Article
A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning
by Xin Li, Peng Zhang, Jiming Guo, Jinling Wang and Weining Qiu
Sensors 2017, 17(4), 921; https://doi.org/10.3390/s17040921 - 21 Apr 2017
Cited by 30 | Viewed by 4509
Abstract
Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Owing to characteristics of the pseudolite positioning system, namely that the geometry of the stationary pseudolites is invariant, the indoor signal is easily interrupted, and the first [...] Read more.
Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Owing to characteristics of the pseudolite positioning system, namely that the geometry of the stationary pseudolites is invariant, the indoor signal is easily interrupted, and the first-order linear truncation error cannot be ignored, a new AR method based on the idea of the ambiguity function method (AFM) is proposed in this paper. The proposed method is a single-epoch, nonlinear method that is especially well-suited for indoor pseudolite positioning. Considering the very low computational efficiency of the conventional AFM, we adopt an improved particle swarm optimization (IPSO) algorithm to search for the best solution in the coordinate domain, and a variance test of the least squares adjustment is conducted to ensure the reliability of the resolved ambiguity. Several experiments, including static and kinematic tests, are conducted to verify the validity of the proposed AR method. Numerical results show that the IPSO significantly improves the computational efficiency of the AFM and has a more elaborate search ability than the conventional grid search method. For the indoor pseudolite system, which had an initial approximate coordinate precision better than 0.2 m, the AFM exhibited good performance in both static and kinematic tests. With the corrected ambiguity gained from our proposed method, indoor pseudolite positioning can achieve centimeter-level precision using a low-cost single-frequency software receiver. Full article
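As a rough illustration of the coordinate-domain search, the sketch below runs a basic particle swarm optimizer over a 3-D box. The cost surface and target coordinate are invented stand-ins for the ambiguity function (which the paper maximizes; minimizing a cost is equivalent up to sign), and the IPSO refinements are not reproduced:

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Basic particle swarm optimisation over a continuous search box.
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each particle to the search box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy stand-in for the ambiguity-function search: the optimum sits at the
# (hypothetical) receiver coordinate (1, 2, 3).
random.seed(1)
target = (1.0, 2.0, 3.0)
best, best_cost = pso(lambda p: sum((x - t) ** 2 for x, t in zip(p, target)),
                      dim=3, bounds=(-5.0, 5.0))
```

Unlike a grid search, the swarm refines its resolution adaptively around the best candidates, which is the efficiency gain the numerical comparison above reports.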
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Distribution of pseudolite antennas.</p>
Full article ">Figure 2
<p>2-D contour map of pseudolite AFV for one epoch.</p>
Full article ">Figure 3
<p>Evolution of global optimal particle AFV using the IPSO method.</p>
Full article ">Figure 4
<p>Evolution of the global optimal particle 3-D coordinate component.</p>
Full article ">Figure 5
<p>AFM reliability with various preliminary coordinates.</p>
Full article ">Figure 6
<p>AFM searching time with IPSO and traditional grid methods.</p>
Full article ">Figure 7
<p>Final pseudolite positioning results with two different AFM searching methods.</p>
Full article ">Figure 8
<p>Differences in positioning results between AFM and LS methods.</p>
Full article ">Figure 9
<p>2-D positioning results with the static test.</p>
Full article ">Figure 10
<p>Fixed rail for the kinematic test.</p>
Full article ">Figure 11
<p>Kinematic trajectory based on the pseudolite positioning result.</p>
Full article ">Figure 12
<p>Kinematic positioning errors computed by LS method.</p>
Full article ">
1178 KiB  
Article
Dynamic Construction Scheme for Virtualization Security Service in Software-Defined Networks
by Zhaowen Lin, Dan Tao and Zhenji Wang
Sensors 2017, 17(4), 920; https://doi.org/10.3390/s17040920 - 21 Apr 2017
Cited by 13 | Viewed by 5102
Abstract
For a Software Defined Network (SDN), security is an important factor affecting its large-scale deployment. The existing security solutions for SDN mainly focus on the controller itself, which has to handle all the security protection tasks by using the programmability of the network. [...] Read more.
For a Software Defined Network (SDN), security is an important factor affecting its large-scale deployment. The existing security solutions for SDN mainly focus on the controller itself, which has to handle all the security protection tasks by using the programmability of the network. This undoubtedly places a heavy burden on the controller. More devastatingly, once the controller itself is attacked, the entire network will be paralyzed. Motivated by this, this paper proposes a novel security protection architecture for SDN. We design a security service orchestration center in the control plane of SDN; this center is physically decoupled from the SDN controller and constructs SDN security services. We adopt virtualization technology to construct a security meta-function library, and propose a dynamic security service composition construction algorithm based on web service composition technology. The rule-combining method is used to combine security meta-functions into security services that meet the requirements of users. Moreover, the RETE algorithm is introduced to improve the efficiency of the rule-combining method. We evaluate our solutions in a realistic scenario based on OpenStack. Substantial experimental results demonstrate the effectiveness of our solutions, which achieve effective security protection with only a small burden on the SDN controller. Full article
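The rule-combining step can be pictured as forward chaining over a small fact base; a production system such as RETE computes the same fixpoint while avoiding re-matching every rule on every pass. A minimal sketch with invented rule and meta-function names:

```python
def forward_chain(rules, facts):
    # Naive forward chaining: fire any rule whose conditions are all satisfied,
    # until no new facts are produced. RETE reaches the same fixpoint but caches
    # partial matches instead of rescanning all rules each pass.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Toy security-service composition: requested protections chain into a
# composed service built from meta-functions (all names are illustrative only).
rules = [
    (["block_ip", "inspect_payload"], "firewall+dpi"),
    (["tcp_flood_detected"], "block_ip"),
    (["deep_scan_requested"], "inspect_payload"),
]
facts = forward_chain(rules, {"tcp_flood_detected", "deep_scan_requested"})
```

Here two user requirements chain through intermediate conclusions into one composed security service, which is the behaviour the rule-combining timings above measure at scale.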
Show Figures

Figure 1

Figure 1
<p>Security protection architecture for SDN.</p>
Full article ">Figure 2
<p>Architecture of the security service orchestration center.</p>
Full article ">Figure 3
<p>State transition diagram I.</p>
Full article ">Figure 4
<p>State transition diagram II.</p>
Full article ">Figure 5
<p>State transition diagram III.</p>
Full article ">Figure 6
<p>State transition diagram.</p>
Full article ">Figure 7
<p>Optimized security rule composition network.</p>
Full article ">Figure 8
<p>Network topology.</p>
Full article ">Figure 9
<p>Configuration result of Firewall.</p>
Full article ">Figure 10
<p>Flow charts of (<b>a</b>) OVS3; (<b>b</b>) OVS4; (<b>c</b>) OVS5 and (<b>d</b>) OVS6.</p>
Full article ">Figure 10 Cont.
<p>Flow charts of (<b>a</b>) OVS3; (<b>b</b>) OVS4; (<b>c</b>) OVS5 and (<b>d</b>) OVS6.</p>
Full article ">Figure 11
<p>The rule-combining time comparison with and without using the optimization algorithm for a single user.</p>
Full article ">Figure 12
<p>The rule-combining time comparison with and without using the optimization algorithm for multiple users.</p>
Full article ">Figure 13
<p>Comparison between two algorithms.</p>
Full article ">
4160 KiB  
Article
Image-Guided Laparoscopic Surgical Tool (IGLaST) Based on the Optical Frequency Domain Imaging (OFDI) to Prevent Bleeding
by Byung Jun Park, Seung Rag Lee, Hyun Jin Bang, Byung Yeon Kim, Jeong Hun Park, Dong Guk Kim, Sung Soo Park and Young Jae Won
Sensors 2017, 17(4), 919; https://doi.org/10.3390/s17040919 - 21 Apr 2017
Cited by 6 | Viewed by 5952
Abstract
We present an image-guided laparoscopic surgical tool (IGLaST) to prevent bleeding. By applying optical frequency domain imaging (OFDI) to a specially designed laparoscopic surgical tool, the inside of fatty tissue can be observed before a resection, and the presence and size of blood [...] Read more.
We present an image-guided laparoscopic surgical tool (IGLaST) to prevent bleeding. By applying optical frequency domain imaging (OFDI) to a specially designed laparoscopic surgical tool, the inside of fatty tissue can be observed before a resection, and the presence and size of blood vessels can be recognized. The optical sensing module on the IGLaST head has a diameter of less than 390 µm and is moved back and forth by a linear servo actuator in the IGLaST body. We proved the feasibility of IGLaST by in vivo imaging inside the fatty tissue of a porcine model. A blood vessel with a diameter of about 2.2 mm was clearly observed. Our proposed scheme can contribute to safe surgery without bleeding by monitoring vessels inside the tissue and can be further expanded to detect invisible nerves during laparoscopic thyroid or prostate gland surgery. Full article
Show Figures

Figure 1

Figure 1
<p>Schematic diagram of an image-guided laparoscopic surgical tool (IGLaST) based on the optical frequency domain imaging (OFDI) technique.</p>
Full article ">Figure 2
<p>IGLaST head with the route for the optical sensing module.</p>
Full article ">Figure 3
<p>(<b>a</b>) Design and manufacture of the ball-lens fiber. (<b>b</b>) Simulation results for the ball-lens fiber. (<b>c</b>) Experimentally measured beam profile of the ball-lens fiber at a working distance of 1.6 mm.</p>
Full article ">Figure 4
<p>Optical sensing module for IGLaST.</p>
Full article ">Figure 5
<p>Beam diameter (FWHM) of the optical sensing module.</p>
Full article ">Figure 6
<p>In vivo imaging inside the fatty tissue of a porcine model with IGLaST; the blood vessel inside the tissue is visible. (<b>a</b>) Laparoscopic image of the porcine model. (<b>b</b>) Laparoscopic image of the porcine model after the tissue is grasped with IGLaST. (<b>c</b>) OFDI image inside the fatty tissue.</p>
Full article ">Figure 7
<p>(<b>a</b>) In vivo imaging of the blood flow inside the fatty tissue of a porcine model with IGLaST. (<b>b</b>) Identification of blood flow with the morphological image processing technique.</p>
1598 KiB  
Article
Autonomous Multi-Robot Search for a Hazardous Source in a Turbulent Environment
by Branko Ristic, Daniel Angley, Bill Moran and Jennifer L. Palmer
Sensors 2017, 17(4), 918; https://doi.org/10.3390/s17040918 - 21 Apr 2017
Cited by 33 | Viewed by 4806
Abstract
Finding the source of an accidental or deliberate release of a toxic substance into the atmosphere is of great importance for national security. The paper presents a search algorithm for turbulent environments which falls into the class of cognitive (infotaxi) algorithms. [...] Read more.
Finding the source of an accidental or deliberate release of a toxic substance into the atmosphere is of great importance for national security. The paper presents a search algorithm for turbulent environments which falls into the class of cognitive (infotaxis) algorithms. Bayesian estimation of the source parameter vector is carried out using the Rao–Blackwell dimension-reduction method, while the robots are controlled autonomously to move in a scalable formation. Estimation and control are carried out in a centralised replicated fusion architecture assuming all-to-all communication. The paper presents a comprehensive numerical analysis of the proposed algorithm, including the search-time and displacement statistics. Full article
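The Bayesian source-estimation step that such infotaxis-style searches rely on can be sketched as a minimal particle filter over the source location. The dispersion model below (a hit rate that decays with squared distance) and all parameter values are illustrative stand-ins, not the turbulent-plume model used in the paper:

```python
import math
import random

def poisson_pmf(count, rate):
    # Probability of observing `count` sensor hits given mean rate `rate`.
    return math.exp(-rate) * rate ** count / math.factorial(count)

def expected_rate(sensor, source, strength=4.0):
    # Toy dispersion model: mean hit rate decays with squared distance.
    d2 = (sensor[0] - source[0]) ** 2 + (sensor[1] - source[1]) ** 2
    return strength / (1.0 + d2)

def bayes_update(particles, weights, sensor, count):
    # Reweight source-location hypotheses by the measurement likelihood,
    # then normalise -- one step of the sequential Bayesian estimator.
    new_w = [w * poisson_pmf(count, expected_rate(sensor, p))
             for p, w in zip(particles, weights)]
    total = sum(new_w)
    return [w / total for w in new_w]

random.seed(0)
particles = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(500)]
weights = [1.0 / len(particles)] * len(particles)

# A sensor near (20, 20) registers 3 hits: posterior mass shifts toward it.
weights = bayes_update(particles, weights, (20.0, 20.0), 3)
mx = sum(p[0] * w for p, w in zip(particles, weights))
my = sum(p[1] * w for p, w in zip(particles, weights))
```

In the paper the robots additionally choose their next formation move to maximise expected information gain; the update above is only the estimation half of that loop.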
Show Figures

Figure 1

Figure 1
<p>An illustration of a path of a coordinated group of <span class="html-italic">N</span> = 5 searching robots at three consecutive time instants, with the scale of the formation increasing. The small circles in the figure represent robot locations <b>r</b><sub>k</sub><sup>i</sup>; the vertical lines indicate the current headings ϕ<sub>k</sub><sup>i</sup>, for <span class="html-italic">i</span> = 1, ⋯, <span class="html-italic">N</span>.</p>
Full article ">Figure 2
<p>An illustrative run of the multi-robot search algorithm. Figures (<b>a</b>) and (<b>b</b>) show the search area, the paths of <span class="html-italic">N</span> = 5 platforms, and the Monte Carlo samples (brown coloured dots) {<b>r</b><sub>0,k</sub><sup>(m)</sup>}<sub>1≤m≤M</sub> at <span class="html-italic">k</span> = 2 and <span class="html-italic">k</span> = 33, respectively. The true source location is indicated by a pink asterisk. The contours of the mean plume are plotted with blue lines. Figure (<b>c</b>) shows the prior probability density function (PDF) π(Q<sub>0</sub>) (red dashed line), the posterior PDF p(Q<sub>0</sub>|<b>r</b><sub>0</sub>, <b>z</b><sub>1:k</sub>) at <span class="html-italic">k</span> = 48 (green solid line), and the true value of Q<sub>0</sub> = 4 (blue asterisk).</p>
Full article ">Figure 2 Cont.
<p>An illustrative run of the multi-robot search algorithm. Figures (<b>a</b>) and (<b>b</b>) show the search area, the paths of <span class="html-italic">N</span> = 5 platforms, and the Monte Carlo samples (brown coloured dots) {<b>r</b><sub>0,k</sub><sup>(m)</sup>}<sub>1≤m≤M</sub> at <span class="html-italic">k</span> = 2 and <span class="html-italic">k</span> = 33, respectively. The true source location is indicated by a pink asterisk. The contours of the mean plume are plotted with blue lines. Figure (<b>c</b>) shows the prior probability density function (PDF) π(Q<sub>0</sub>) (red dashed line), the posterior PDF p(Q<sub>0</sub>|<b>r</b><sub>0</sub>, <b>z</b><sub>1:k</sub>) at <span class="html-italic">k</span> = 48 (green solid line), and the true value of Q<sub>0</sub> = 4 (blue asterisk).</p>
Full article ">Figure 3
<p>Mean search time when varying the number of platforms, <span class="html-italic">N</span>, from 1 to 10. The error bars show the 95% confidence interval for the estimate of the mean.</p>
Full article ">Figure 4
<p>Mean search time for <span class="html-italic">N</span> = 5 platforms when varying the side length of the search area from 200 to 1000. The error bars show the 95% confidence interval for the estimate of the mean.</p>
Full article ">Figure 5
<p>Histograms of robot formation displacements, as chosen by the search algorithm. The travel cost per unit distance was <span class="html-italic">α</span> = 0.01 and <span class="html-italic">α</span> = 0.02.</p>
Full article ">Figure 6
<p>Q-Q plot of the search times for (<b>a</b>) <span class="html-italic">N</span> = 1 and (<b>b</b>) <span class="html-italic">N</span> = 5, versus the inverse Gaussian distribution with parameters fitted using maximum likelihood estimation. The source location was fixed at [187.5, 187.5] and the initial centroid position at [187.5, −187.5]. The green lines show 95% confidence bands [<a href="#B35-sensors-17-00918" class="html-bibr">35</a>].</p>
Full article ">Figure 7
<p>An illustrative run of the multi-robot search algorithm on the experimental dataset using <span class="html-italic">N</span> = 5 platforms. (<b>a</b>–<b>d</b>) show the positions and trajectories of the platforms at times <span class="html-italic">k</span> = 0, 10, 20 and 30, respectively. The plume from the experimental dataset can be seen in the top left of the search area, with darker areas representing higher concentrations.</p>
Full article ">Figure 8
<p>Experimental data: Q-Q plot of the search times for (<b>a</b>) <span class="html-italic">N</span> = 1 and (<b>b</b>) <span class="html-italic">N</span> = 5 versus the inverse Gaussian distribution with parameters fitted using maximum likelihood estimation. The source was placed as shown in <a href="#sensors-17-00918-f007" class="html-fig">Figure 7</a> and the initial centroid position was fixed at [125, −125]. The green lines show 95% confidence bands [<a href="#B35-sensors-17-00918" class="html-bibr">35</a>].</p>
9406 KiB  
Article
Accurate Ambient Noise Assessment Using Smartphones
by Willian Zamora, Carlos T. Calafate, Juan-Carlos Cano and Pietro Manzoni
Sensors 2017, 17(4), 917; https://doi.org/10.3390/s17040917 - 21 Apr 2017
Cited by 41 | Viewed by 7137
Abstract
Nowadays, smartphones have become ubiquitous and one of the main communication resources for human beings. Their widespread adoption was due to the huge technological progress and to the development of multiple useful applications. Their characteristics have also experienced a substantial improvement as they [...] Read more.
Nowadays, smartphones have become ubiquitous and one of the main communication resources for human beings. Their widespread adoption was due to the huge technological progress and to the development of multiple useful applications. Their characteristics have also experienced a substantial improvement as they now integrate multiple sensors able to convert the smartphone into a flexible and multi-purpose sensing unit. The combined use of multiple smartphones endowed with several types of sensors gives the possibility to monitor a certain area with fine spatial and temporal granularity, a procedure typically known as crowdsensing. In this paper, we propose using smartphones as environmental noise-sensing units. For this purpose, we focus our study on the sound capture and processing procedure, analyzing the impact of different noise calculation algorithms, as well as determining their accuracy when compared to a professional noise measurement unit. We analyze different candidate algorithms using different types of smartphones, and we study the most adequate time period and sampling strategy to optimize the data-gathering process. In addition, we perform an experimental study comparing our approach with the results obtained using a professional device. Experimental results show that, if the smartphone application is well tuned, it is possible to measure noise levels with an accuracy comparable to that of professional devices for the entire dynamic range typically supported by microphones embedded in smartphones, i.e., 35–95 dB. Full article
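The heart of such a noise-sensing application is converting a block of microphone samples into an equivalent level in decibels. A minimal sketch (the calibration offset is a hypothetical per-device constant standing in for the regression-based calibration the authors perform against a professional meter):

```python
import math

def equivalent_level_db(samples, calibration_offset_db=0.0):
    """Equivalent level of a block of linear audio samples, in dB.

    With a unit reference this is dB relative to full scale;
    calibration_offset_db shifts it onto an absolute dB SPL scale.
    """
    mean_square = sum(s * s for s in samples) / len(samples)
    # 10*log10 of the mean square equals 20*log10 of the RMS amplitude.
    return 10.0 * math.log10(mean_square) + calibration_offset_db

# A full-scale sine has RMS 1/sqrt(2), i.e. about -3.01 dB re full scale.
sine = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(8000)]
level = equivalent_level_db(sine)
```

Real apps additionally apply A-weighting and average over the block sizes and sampling periods the paper compares.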
Show Figures

Figure 1

Figure 1
<p>Proposed crowdsensing architecture for noise analysis.</p>
Full article ">Figure 2
<p>Noise calibration using professional devices. Location: reverberant acoustic chamber, Technical University of Valencia.</p>
Full article ">Figure 3
<p>Sampling rate analysis. (<b>a</b>) Algorithm #1; (<b>b</b>) Algorithm #2; (<b>c</b>) Algorithm #3.</p>
Full article ">Figure 4
<p>Block size analysis. (<b>a</b>) Algorithm #1; (<b>b</b>) Algorithm #2; (<b>c</b>) Algorithm #3.</p>
Full article ">Figure 5
<p>Sampling rate analysis when fixing the block size. (<b>a</b>) Algorithm #1; (<b>b</b>) Algorithm #3.</p>
Full article ">Figure 6
<p>Smartphone models used for testing. From left to right: BQ Aquaris, Samsung J5, Samsung S4 and Samsung S7 Edge.</p>
Full article ">Figure 7
<p>Estimation accuracy for the different smartphone models with and without linear regression. (<b>a</b>) Algorithm #1: default sampling; (<b>b</b>) Algorithm #1: values adjusted using linear regression; (<b>c</b>) Algorithm #1: estimation error.</p>
Full article ">Figure 8
<p>Estimation error analysis when using similar smartphones. (<b>a</b>) Before regression; (<b>b</b>) after regression; (<b>c</b>) estimation error.</p>
Full article ">Figure 9
<p>Sampling period analysis. (<b>a</b>) S7 edge; (<b>b</b>) BQ Aquaris; (<b>c</b>) estimation error.</p>
Full article ">Figure 10
<p>Sampling analysis of the S7 and the Aquaris for 1 s.</p>
Full article ">Figure 11
<p>Sampling period and error analysis when removing the first sample. (<b>a</b>) S7 Edge; (<b>b</b>) Aquaris; (<b>c</b>) error analysis.</p>
Full article ">Figure 12
<p>Analysis for the mobile scenario.</p>
Full article ">Figure 13
<p>Noise pollution analysis for three outdoor scenarios. (<b>a</b>) Mobile scenario; (<b>b</b>) main avenue; (<b>c</b>) outdoor coffee shop.</p>
5590 KiB  
Article
Suitability of Strain Gage Sensors for Integration into Smart Sport Equipment: A Golf Club Example
by Anton Umek, Yuan Zhang, Sašo Tomažič and Anton Kos
Sensors 2017, 17(4), 916; https://doi.org/10.3390/s17040916 - 21 Apr 2017
Cited by 27 | Viewed by 9004
Abstract
Wearable devices and smart sport equipment are being increasingly used in amateur and professional sports. Smart sport equipment employs various sensors for detecting its state and actions. The correct choice of the most appropriate sensor(s) is of paramount importance for efficient and successful [...] Read more.
Wearable devices and smart sport equipment are being increasingly used in amateur and professional sports. Smart sport equipment employs various sensors for detecting its state and actions. The correct choice of the most appropriate sensor(s) is of paramount importance for efficient and successful operation of sport equipment. When integrated into the sport equipment, ideal sensors are unobtrusive and do not change the functionality of the equipment. The article focuses on experiments for identification and selection of sensors that are suitable for integration into a golf club with the final goal of their use in real-time biofeedback applications. We tested two orthogonally affixed strain gage (SG) sensors, a 3-axis accelerometer, and a 3-axis gyroscope. The strain gage sensors are calibrated and validated in the laboratory environment by a highly accurate Qualisys Track Manager (QTM) optical tracking system. Field test results show that different types of golf swing and improper movement in early phases of the golf swing can be detected with strain gage sensors attached to the shaft of the golf club. Thus they are suitable for biofeedback applications to help golfers learn repetitive golf swings. It is suggested that the use of strain gage sensors can improve golf swing technical error detection accuracy and that strain gage sensors alone are enough for basic golf swing analysis. Our final goal is to be able to acquire and analyze as many parameters of a smart golf club as possible in real time during the entire duration of the swing. This would give us the ability to design mobile and cloud biofeedback applications with terminal or concurrent feedback that will enable us to speed up motor skill learning in golf. Full article
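Reading a strain gage ultimately means converting a Wheatstone-bridge output voltage into strain. The sketch below uses the textbook quarter-bridge relation; the gage factor and voltages are illustrative, and sign conventions differ between bridge wirings:

```python
def quarter_bridge_strain(v_out, v_ex, gauge_factor=2.0):
    """Strain seen by a single active gage in a quarter Wheatstone bridge.

    v_out: bridge output voltage (V); v_ex: excitation voltage (V).
    Uses strain = 4*Vr / (GF * (1 + 2*Vr)) with Vr = v_out / v_ex;
    the (1 + 2*Vr) term only matters at large strains.
    """
    vr = v_out / v_ex
    return 4.0 * vr / (gauge_factor * (1.0 + 2.0 * vr))

# 2.5 mV output at 5 V excitation with GF = 2 -> roughly 1 millistrain,
# the order of magnitude shown in the paper's 2D strain plots.
eps = quarter_bridge_strain(0.0025, 5.0)
```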
Show Figures

Figure 1

Figure 1
<p>Simplified architecture of IoT sport applications using smart sport equipment and wearables with integrated sensors. Smart sport equipment and wearables send sensor data to the IoT cloud directly or through the gateway. Sensor data processing can be performed locally by the mobile device or in the cloud. Results can be checked by any connected device.</p>
Full article ">Figure 2
<p>Smart golf club equipped with various sensors fixed to the shaft at different positions: the IMU device with 3-axis accelerometer and 3-axis gyroscope is just below the grip; two SG sensors are pasted orthogonally to each other along the shaft&#8217;s axis; two independent rigid bodies used by the QTM optical system are fixed at the top and at the bottom of the shaft.</p>
Full article ">Figure 3
<p>Graphical representation of the smart golf club shaft with two orthogonally placed strain gage sensors measuring the bends of the shaft in two orthogonal directions. The front SG sensor is placed in the direction of the Z axis and the side SG sensor in the direction of the X axis of the Shimmer 3 IMU device; both measured from the axis of the shaft.</p>
Full article ">Figure 4
<p>3D rigid body tracking with Qualisys™ Track Manager. The positions of two independent rigid bodies attached to the golf club shaft at one point in time are shown in the global coordinate system XYZ. Each rigid body is composed of three markers and has an independent local coordinate system with the origin in one of the markers.</p>
Full article ">Figure 5
<p>Recording and analysis window of a single swing. Swing recording time is 3 s with the impact point set at 2 s. The analysis window has a width of 1.5 s.</p>
Full article ">Figure 6
<p>Bending test for SG sensor calibration and validation. The golf club shaft bends under the force exerted to the bottom of the shaft. Bending is calculated from the height difference of two markers measured by the QTM system.</p>
Full article ">Figure 7
<p>Front SG sensor calibration and validation. The bend is measured directly by the QTM system and indirectly by the SG sensor. The horizontal axis shows the mass of the applied weight in [g], the vertical axis shows the bend of the club head in [mm].</p>
Full article ">Figure 8
<p>Field test. The movement of the golf club during a typical golf swing is recorded by the SG sensors and the IMU device. (<b>a</b>) SG sensors are connected by wire to the nearby cRIO device. (<b>b</b>) The IMU device is connected wirelessly to the nearby laptop (not shown in pictures).</p>
Full article ">Figure 9
<p>Measurement repeatability test. Graphs show sensor signals for the same series of five consecutive swings performed by a professional golf player: (<b>a</b>) two orthogonal SG sensors; (<b>b</b>) 3-axis accelerometer; (<b>c</b>) 3-axis gyroscope. In graph (<b>a</b>) blue plots represent the front sensor and red plots represent the side sensor, while in graphs (<b>b</b>,<b>c</b>) the red, green, and blue lines represent the X, Y, and Z axes, respectively.</p>
Full article ">Figure 9 Cont.
<p>Measurement repeatability test. Graphs show sensor signals for the same series of five consecutive swings performed by a professional golf player: (<b>a</b>) two orthogonal SG sensors; (<b>b</b>) 3-axis accelerometer; (<b>c</b>) 3-axis gyroscope. In graph (<b>a</b>) blue plots represent the front sensor and red plots represent the side sensor, while in graphs (<b>b</b>,<b>c</b>) the red, green, and blue lines represent the X, Y, and Z axes, respectively.</p>
Full article ">Figure 10
<p>Player’s signatures of five successful consecutive swings of four different golf players: graphs (<b>a</b>,<b>b</b>) show swings of two experienced amateur players; graphs (<b>c</b>,<b>d</b>) show swings of two professional golf players. All graphs show the response of two orthogonally placed SG sensors, where blue plots represent the front sensor and red plots represent the side sensor. Signals show that each player has a distinctive sensor response (signature).</p>
Full article ">Figure 11
<p>Player’s signatures shown in 2D strain plot indicate that each player has a very distinctive sensor response. Graph (<b>a</b>) shows signatures of two experienced amateur players; graph (<b>b</b>) shows signatures of two professional golf players. The strain in both axes is given in [<span class="html-italic">mε</span>]. The horizontal axis shows the strain of the front sensor and the vertical axis shows the strain of the side sensor.</p>
Full article ">Figure 12
<p>Technical error detection. Averages of five consecutive swings: (<b>a</b>) two orthogonal strain gage sensors; (<b>b</b>) 3-axis accelerometer; (<b>c</b>) 3-axis gyroscope. Solid lines represent averages of five consecutive successful swings; dashed lines represent averages of five swings with “slice” technical error. Traces of the same color are distinctly different from each other. We expect that technical error detection is possible. In graph (<b>a</b>) blue plots represent the front sensor and red plots represent the side sensor, while in graphs (<b>b</b>,<b>c</b>) the red, green, and blue lines represent the X, Y, and Z axes, respectively.</p>
Full article ">Figure 13
<p>Comparison of 2D strain [<span class="html-italic">mε</span>] plot for regular swings and swings with technical errors of two professional golf players: (<b>a</b>) Player 1; (<b>b</b>) Player 4. Regular swings (straight) differ from swings with technical errors (slice, draw) in one or more swing phases. We expect the identification of technical errors is possible. The strain in both axes is given in [<span class="html-italic">mε</span>]. The horizontal axis shows the strain of the front sensor and the vertical axis shows the strain of the side sensor.</p>
8256 KiB  
Article
Celestial Object Imaging Model and Parameter Optimization for an Optical Navigation Sensor Based on the Well Capacity Adjusting Scheme
by Hao Wang, Jie Jiang and Guangjun Zhang
Sensors 2017, 17(4), 915; https://doi.org/10.3390/s17040915 - 21 Apr 2017
Cited by 6 | Viewed by 4728
Abstract
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot simultaneously image the target celestial body and stars well-exposed because their irradiance difference is generally [...] Read more.
The simultaneous extraction of optical navigation measurements from a target celestial body and star images is essential for autonomous optical navigation. Generally, a single optical navigation sensor cannot capture well-exposed images of the target celestial body and stars simultaneously because their irradiance difference is large. Multi-sensor integration or complex image processing algorithms are commonly utilized to solve this problem. This study analyzes and demonstrates the feasibility of imaging the target celestial body and stars well-exposed within a single exposure through a single field of view (FOV) optical navigation sensor using the well capacity adjusting (WCA) scheme. First, the irradiance characteristics of the celestial body are analyzed. Then, the celestial body edge model and star spot imaging model are established when the WCA scheme is applied. Furthermore, the effect of exposure parameters on the accuracy of star centroiding and edge extraction is analyzed using the proposed model. Optimal exposure parameters are also derived by conducting Monte Carlo simulation to obtain the best performance of the navigation sensor. Finally, laboratory and night sky experiments are performed to validate the correctness of the proposed model and optimal exposure parameters. Full article
(This article belongs to the Section Physical Sensors)
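The star centroiding whose accuracy the paper analyses is commonly computed as an intensity-weighted centre of mass over a small pixel window. A minimal sketch with a synthetic, background-subtracted spot:

```python
def star_centroid(window, threshold=0.0):
    """Intensity-weighted centre of mass of a star spot.

    window: 2-D list of pixel values with the background already removed.
    Pixels at or below `threshold` are excluded from the sums.
    """
    total = sum_x = sum_y = 0.0
    for y, row in enumerate(window):
        for x, value in enumerate(row):
            if value > threshold:
                total += value
                sum_x += value * x
                sum_y += value * y
    return sum_x / total, sum_y / total

# A symmetric spot centred between pixels, at (1.5, 1.5).
spot = [[1, 2, 2, 1],
        [2, 9, 9, 2],
        [2, 9, 9, 2],
        [1, 2, 2, 1]]
cx, cy = star_centroid(spot)
```

The WCA exposure parameters studied in the paper change the pixel values entering exactly this kind of sum, and hence the centroiding error.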
Show Figures

Figure 1

Figure 1
<p>Spatial position relationship between the sensor and target celestial body.</p>
Full article ">Figure 2
<p>Reflected radiation flux over a sphere.</p>
Full article ">Figure 3
<p>(<b>a</b>) Accumulated photoelectrons versus time in normal integration mode; and (<b>b</b>) accumulated photoelectrons versus time using the WCA scheme.</p>
Full article ">Figure 4
<p>Blurred edge model.</p>
Full article ">Figure 5
<p>1D profile of the celestial body image in normal integration mode.</p>
Full article ">Figure 6
<p>1D profile of the celestial body image when using the WCA scheme.</p>
Full article ">Figure 7
<p>(<b>a</b>) Star signal intensity distribution utilizing normal integration mode; and (<b>b</b>) star signal intensity distribution using the WCA scheme.</p>
Full article ">Figure 8
<p>Edge detection error δ<sub>E</sub> (<b>left</b>) versus well capacity Q<sub>S</sub> and AIT T<sub>S</sub> (<b>right</b>).</p>
Full article ">Figure 9
<p>Effect of different well capacity values on the edge detection results when: (<b>a</b>) Q<sub>S</sub> = 1000 e<sup>−</sup>; (<b>b</b>) Q<sub>S</sub> = 5000 e<sup>−</sup>; and (<b>c</b>) Q<sub>S</sub> = 14,000 e<sup>−</sup>.</p>
Full article ">Figure 10
<p>δ<sub>x,S</sub> versus well capacity Q<sub>S</sub> and AIT T<sub>S</sub> for different star magnitudes: (<b>a</b>) star magnitude = 2; (<b>b</b>) star magnitude = 4; (<b>c</b>) star magnitude = 5; and (<b>d</b>) star magnitude = 6.</p>
Full article ">Figure 11
<p>Overall star centroiding error versus well capacity.</p>
Full article ">Figure 12
<p>Simulation results of the optimal T<sub>S</sub>.</p>
Full article ">Figure 13
<p>Simulation results of the optimal Q<sub>S</sub>.</p>
Full article ">Figure 14
<p>Setup for the laboratorial experiment.</p>
Full article ">Figure 15
<p>(<b>a</b>–<b>d</b>) Star images of magnitudes 2, 4, 5 and 6 when using the normal integration mode; and (<b>e</b>–<b>h</b>) star images of magnitudes 2, 4, 5 and 6 when using the WCA scheme.</p>
Full article ">Figure 16
<p>Star centroiding error versus well capacity: (<b>a</b>) star magnitude = 2; (<b>b</b>) star magnitude = 4; (<b>c</b>) star magnitude = 5; and (<b>d</b>) star magnitude = 6.</p>
Full article ">Figure 17
<p>Setup of the night sky experiment.</p>
Full article ">Figure 18
<p>Lunar images with different exposure parameters: (<b>a</b>) T = 0.4 ms utilizing the normal integration mode; (<b>b</b>) T = 30 ms utilizing the normal integration mode; (<b>c</b>) T = 30 ms, T<sub>S</sub> = 29.67 ms, Q<sub>S</sub> = 3750 e<sup>−</sup> utilizing the WCA scheme; and (<b>d</b>) T = 30 ms, T<sub>S</sub> = 29.67 ms, Q<sub>S</sub> = 6093 e<sup>−</sup> utilizing the WCA scheme.</p>
Full article ">Figure 19
<p>Edge detection and circle fitting results of the Moon image.</p>
Full article ">Figure 20
<p>Observations of the Moon and stars in the same FOV.</p>
1778 KiB  
Article
Influence of Wind Speed on RGB-D Images in Tree Plantations
by Dionisio Andújar, José Dorado, José María Bengochea-Guevara, Jesús Conesa-Muñoz, César Fernández-Quintanilla and Ángela Ribeiro
Sensors 2017, 17(4), 914; https://doi.org/10.3390/s17040914 - 21 Apr 2017
Cited by 18 | Viewed by 5575
Abstract
Weather conditions can affect sensors’ readings when sampling outdoors. Although sensors are usually set up covering a wide range of conditions, their operational range must be established. In recent years, depth cameras have been shown as a promising tool for plant phenotyping and [...] Read more.
Weather conditions can affect sensors’ readings when sampling outdoors. Although sensors are usually set up covering a wide range of conditions, their operational range must be established. In recent years, depth cameras have been shown as a promising tool for plant phenotyping and other related uses. However, the use of these devices is still challenged by prevailing field conditions. Although the influence of lighting conditions on the performance of these cameras has already been established, the effect of wind is still unknown. This study establishes the associated errors when modeling some tree characteristics at different wind speeds. A system using a Kinect v2 sensor and custom software was tested from null wind speed up to 10 m·s−1. Two tree species with contrasting architecture, poplars and plums, were used as model plants. The results showed different responses depending on tree species and wind speed. Estimations of Leaf Area (LA) and tree volume were generally more consistent at high wind speeds in plum trees. Poplars were particularly affected by wind speeds higher than 5 m·s−1. In contrast, height measurements were more consistent for poplars than for plum trees. These results show that the use of depth cameras for tree characterization must take into consideration wind conditions in the field. In general, 5 m·s−1 (18 km·h−1) could be established as a conservative limit for good estimations. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2017)
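The accuracy figures in studies like this rest on an ordinary least-squares fit of sensor-derived values against ground truth, summarised by the slope, intercept and R². A self-contained sketch with made-up tree heights (not the study's data):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line y = a*x + b and coefficient R^2."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    a = sxy / sxx
    b = mean_y - a * mean_x
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical actual vs. depth-model heights (m) for five trees.
actual = [2.0, 2.5, 3.0, 3.5, 4.0]
model = [1.9, 2.4, 3.1, 3.4, 4.1]
a, b, r2 = linear_fit(actual, model)
```

A drop in R² at higher wind speeds is exactly the kind of degradation the study quantifies.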
Show Figures

Figure 1

Figure 1
<p>Schematic design of the portable system electrically powered at 220 V by an electric car.</p>
Full article ">Figure 2
<p>RGB images (<b>a</b>) used to quantify the leaf area, after their transformation to binary images (<b>b</b>) and subsequent application of Otsu&#8217;s thresholding method. The upper row corresponds to a poplar sample; the lower row to a plum tree sample. A 100 cm<sup>2</sup> black square was included in each image as reference area.</p>
Full article ">Figure 3
<p>Example of poplar (figures on top) and plum (figures at the bottom) tree models created at different wind speeds, from 0 to 10 m·s<sup>−1</sup>.</p>
Full article ">Figure 4
<p>Regression analysis comparing actual height vs. model height for (<b>a</b>) poplar trees and (<b>b</b>) plum trees at wind speeds ranging from 0 to 10 m·s<sup>−1</sup>.</p>
Full article ">Figure 5
<p>Regression analysis comparing Leaf Area (LA) vs. tree volume for (<b>a</b>) poplar trees and (<b>b</b>) plum trees at wind speeds ranging from 0 to 10 m·s<sup>−1</sup>.</p>
Full article ">Figure 6
<p>Regression analysis comparing dry biomass (g) vs. tree volume for (<b>a</b>) poplar trees and (<b>b</b>) plum trees at wind speeds ranging from 0 to 10 m·s<sup>−1</sup>.</p>
Full article ">
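The leaf-area workflow behind Figure 2 (binarize a grayscale image with Otsu's method, then scale the dark-pixel count against a 100 cm2 reference square) can be sketched as follows. This is an illustrative reconstruction, not the authors' software: the function names, and the assumption that the leaves and reference square form the dark class, are mine.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for an 8-bit grayscale image by
    exhaustively searching the cut that maximizes between-class
    variance of background vs. foreground pixels."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def leaf_area_cm2(gray, ref_pixels, ref_area_cm2=100.0):
    """Estimate leaf area by counting dark (sub-threshold) pixels,
    excluding the known reference square, and scaling by the
    reference square's known area."""
    t = otsu_threshold(gray)
    leaf_pixels = int((gray < t).sum()) - ref_pixels
    return leaf_pixels * ref_area_cm2 / ref_pixels
```

In practice the reference square would be located automatically (e.g., by its known shape); here its pixel count is passed in directly to keep the sketch short.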
3462 KiB  
Article
Synthesis, Characterization and Enhanced Sensing Properties of a NiO/ZnO p–n Junctions Sensor for the SF6 Decomposition Byproducts SO2, SO2F2, and SOF2
by Hongcheng Liu, Qu Zhou, Qingyan Zhang, Changxiang Hong, Lingna Xu, Lingfeng Jin and Weigen Chen
Sensors 2017, 17(4), 913; https://doi.org/10.3390/s17040913 - 21 Apr 2017
Cited by 83 | Viewed by 7333
Abstract
The detection of partial discharge and analysis of the composition and content of sulfur hexafluoride SF6 gas components are important to evaluate the operating state and insulation level of gas-insulated switchgear (GIS) equipment. This paper reported a novel sensing material made of [...] Read more.
The detection of partial discharge and analysis of the composition and content of sulfur hexafluoride (SF6) gas components are important for evaluating the operating state and insulation level of gas-insulated switchgear (GIS) equipment. This paper reports a novel sensing material made of pure ZnO and NiO-decorated ZnO nanoflowers, synthesized by a facile and environmentally friendly hydrothermal process, for the detection of SF6 decomposition byproducts. X-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), energy-dispersive X-ray spectroscopy (EDS) and X-ray photoelectron spectroscopy (XPS) were used to characterize the structural and morphological properties of the prepared gas-sensitive materials. Planar-type chemical gas sensors were fabricated and their gas sensing performances toward the SF6 decomposition byproducts SO2, SO2F2, and SOF2 were systematically investigated. Interestingly, the sensing behavior of the fabricated ZnO nanoflower-based sensor toward SO2, SO2F2, and SOF2 gases is markedly enhanced by introducing NiO, in terms of a lower optimal operating temperature, higher gas response, and shorter response-recovery time. Finally, a possible gas sensing mechanism based on the formation of p–n junctions between NiO and ZnO is proposed to explain the enhanced gas response. All results demonstrate a promising approach to fabricating high-performance gas sensors for detecting SF6 decomposition byproducts. Full article
(This article belongs to the Section Physical Sensors)
Figures:

Figure 1. Schematic diagram of the CGS-1TP gas sensing analysis system.
Figure 2. X-ray powder diffraction patterns of pure and 3 at-% NiO-decorated ZnO nanostructures.
Figure 3. FESEM images of (a) pure and (b) NiO-decorated ZnO nanostructures; (c) TEM and (d) HRTEM images of the NiO-decorated ZnO microflowers.
Figure 4. EDS spectra of 3 at-% NiO-decorated ZnO nanostructures.
Figure 5. XPS survey spectra of NiO-decorated ZnO: (a) full spectrum; (b) Ni 2p; (c) Zn 2p; (d) O 1s.
Figure 6. Electrical resistance of the prepared sensors at different temperatures in air.
Figure 7. Gas response of the pure and NiO-decorated ZnO gas sensors to 100 ppm SO2, SOF2, and SO2F2 at different working temperatures.
Figure 8. (a) Response of the pure and NiO-decorated ZnO gas sensors to various concentrations of SO2 at 220 °C; (b) linear fitting curves of the pure and NiO-decorated ZnO sensors to 5–100 ppm of SO2.
Figure 9. (a) Response of the pure and NiO-decorated ZnO gas sensors to different concentrations of SOF2 at an operating temperature of 260 °C; (b) linear relationship between the sensors' response and the SOF2 concentration.
Figure 10. (a) Response of the pure and NiO-decorated ZnO gas sensors to different concentrations of SO2F2 at 260 °C; (b) linear fitting curves of the prepared sensors to 5–100 ppm of SO2F2.
Figure 11. Response-recovery curves of the NiO-decorated ZnO gas sensor to 100 ppm SO2, SO2F2 and SOF2.
Figure 12. Stability and repeatability of the NiO-decorated ZnO sensor toward SO2, SO2F2 and SOF2.
Figure 13. Energy band schematics for the p-type NiO/n-type ZnO heterojunction (a) in air and (b) in test gases. Ec: lower edge of the conduction band; EF: Fermi level; Ev: upper edge of the valence band.
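The response values and linear fitting curves of Figures 8–10 rest on two simple computations: a response ratio per gas exposure and a least-squares line over the 5–100 ppm range. The sketch below assumes the common convention for n-type oxide sensors (response = R_air / R_gas); the abstract does not state the exact definition used, so treat this as a hedged illustration rather than the authors' formula.

```python
import numpy as np

def sensor_response(r_air, r_gas):
    """Gas response of a resistive sensor, assumed here to be the
    ratio of resistance in clean air to resistance in the test gas
    (a common convention for n-type oxides such as ZnO)."""
    return r_air / r_gas

def linear_fit(conc_ppm, response):
    """Least-squares line response = a * concentration + b, as in
    the 5-100 ppm fitting curves of Figures 8-10."""
    a, b = np.polyfit(np.asarray(conc_ppm, float),
                      np.asarray(response, float), 1)
    return a, b
```

The slope `a` is a simple sensitivity figure: a steeper fitted line over the same concentration range indicates a more responsive sensor.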
7690 KiB  
Article
A Multi-Platform Optical Sensor for In Vivo and In Vitro Algae Classification
by Chee-Loon Ng, Qing-Qing Chen, Jia-Jing Chua and Harold F. Hemond
Sensors 2017, 17(4), 912; https://doi.org/10.3390/s17040912 - 20 Apr 2017
Cited by 7 | Viewed by 7139
Abstract
Differentiation among major algal groups is important for the ecological and biogeochemical characterization of water bodies, and for practical management of water resources. It helps to discern the taxonomic groups that are beneficial to aquatic life from the organisms causing harmful algal blooms. [...] Read more.
Differentiation among major algal groups is important for the ecological and biogeochemical characterization of water bodies, and for practical management of water resources. It helps to discern the taxonomic groups that are beneficial to aquatic life from the organisms causing harmful algal blooms. An LED-induced fluorescence (LEDIF) instrument capable of fluorescence, absorbance, and scattering measurements is used for in vivo and in vitro identification and quantification of four algal groups found in freshwater and marine environments. Aqueous solutions of individual and mixed dissolved biological pigments relevant to different algal groups were measured to demonstrate the LEDIF's capabilities in measuring extracted pigments. Different genera of algae were cultivated, and the cell counts of the samples were quantified with a hemacytometer and/or cellometer. The dry weight of different algae cells was also measured to determine cell count-to-dry weight correlations. Finally, in vivo measurements of different genera of algae at different cell concentrations, and of mixed algal groups in the presence of humic acid, were performed with the LEDIF. A field sample from a local reservoir was measured with the LEDIF and the results were verified using a hemacytometer, cellometer, and microscope. The results demonstrated the LEDIF's capabilities in classifying and quantifying different groups of live algae. Full article
Figures:

Figure 1. Layout and packaging of the LEDIF: (a) isometric and front views of the LEDIF packaged inside a 20 × 15 × 20 cm enclosure for portable-mode sensing; (b) packaging for fixed-location sensing; (c) LEDIF block diagram; (d) LEDIF packaged inside a 30 (L) × 20 (Dia) cm cylindrical pressure hull for autonomous platform deployment.
Figure 2. Qualification of the LEDIF's excitation sources: (a) measurement of LED central wavelengths and FWHM bandwidths using the LEDIF's NIST-calibrated spectrometer; (b) comparison of the manufacturer's reported values and the LEDIF's measured values.
Figure 3. (a) Characterization of cell counting instruments. Cell count-to-dry weight correlations of different genera of (b) green, (c) cyanobacteria, (d) golden-brown, and (e) red algae.
Figure 4. Measurement of aqueous (a) phycocyanin, (b) phycoerythrin, and (c) chlorophyll a fluorescence peak intensity as a function of concentration at different excitation wavelengths; insert graphs show the emission spectra excited at (a) 595 nm and (b,c) 402 nm. (d) Measurement of Suwannee River humic acid fluorescence peak intensity as a function of concentration at three different emission wavelengths; insert graph shows the emission spectrum excited at 371 nm. (e) Emission spectrum and excitation-emission matrix of a complex lab mixture.
Figure 5. (a) Fluorescence spectra of Anki and Chlor at different cell concentrations, excited at 402 nm (legend: genus of algae_cells per mL); (b) normalized peak intensity of Anki and Chlor as a function of cell concentration (legend: genus of algae_excitation wavelength).
Figure 6. Fluorescence spectra of Ana and Cyl at different cell concentrations, excited at (a) 612 nm and (b) 402 nm (legend: genus of algae_cells per mL); (c) fluorescence spectra of Ana and Cyl excited at different wavelengths; (d) 619 nm emission peak of Cyl as a function of cell concentration; normalized (e) phycocyanin and (f) measured and predicted chlorophyll a peak intensities of Ana and Cyl as a function of cell concentration (legend: genus of algae_excitation wavelength; M is measured and P is predicted data).
Figure 7. (a) Fluorescence spectra of Porp at different cell concentrations (cells/mL), excited at 523 nm; (b) observed (obs) and inner-filtering-corrected (corr) phycoerythrin fluorescence peak intensities of Porp as a function of cell concentration, excited at 523 nm; (c) measured (M) and predicted (P) phycocyanin peak intensity of Porp as a function of cell concentration, excited at 402 nm and 523 nm; (d) measured (M) and predicted (P) chlorophyll a peak intensities of Porp as a function of cell concentration, excited at 523 nm.
Figure 8. (a) Absorbance spectra of Cyc at different cell concentrations (legend: genus of algae_cells per mL); (b) fucoxanthin absorbance peaks of Cyc as a function of cell concentration (legend: genus of algae_absorbance peak wavelength; V is growing and Vmax is fully grown algae).
Figure 9. Fluorescence spectra of (a) an Anki, Chlor, Ana, and Cyl mixture and (b) an Anki, Chlor, Ana, Cyl, and 5 mg/L humic acid mixture, excited at 523 and 595 nm (insert graph).
Figure 10. Fluorescence spectra of a field sample dominated by Microcystis (insert graph), observed in a local reservoir in Singapore.
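Classifying mixed algal groups from emission spectra amounts to decomposing a measured spectrum over reference spectra of the individual groups or pigments. A minimal least-squares sketch of that idea is shown below; it is my illustration of the general technique, not the LEDIF processing chain, which the abstract does not describe.

```python
import numpy as np

def unmix(spectrum, references):
    """Estimate the contribution of each reference emission spectrum
    to a measured mixture by ordinary least squares.

    spectrum:   (n_wavelengths,) measured intensities
    references: (n_wavelengths, n_components) matrix whose columns
                are reference spectra (e.g., per algal group)
    Returns the per-component coefficients.
    """
    coeffs, *_ = np.linalg.lstsq(references, spectrum, rcond=None)
    return coeffs
```

A real pipeline would typically constrain the coefficients to be nonnegative and correct for inner-filtering effects (as in Figure 7b) before unmixing.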
8377 KiB  
Article
Rapid Texture Optimization of Three-Dimensional Urban Model Based on Oblique Images
by Weilong Zhang, Ming Li, Bingxuan Guo, Deren Li and Ge Guo
Sensors 2017, 17(4), 911; https://doi.org/10.3390/s17040911 - 20 Apr 2017
Cited by 12 | Viewed by 4905
Abstract
Seamless texture mapping is one of the key technologies for photorealistic 3D texture reconstruction. In this paper, a method of rapid texture optimization of 3D urban reconstruction based on oblique images is proposed aiming at the existence of texture fragments, seams, and inconsistency [...] Read more.
Seamless texture mapping is one of the key technologies for photorealistic 3D texture reconstruction. In this paper, a method for rapid texture optimization of 3D urban reconstruction based on oblique images is proposed, aimed at the texture fragments, seams, and color inconsistency that arise in urban 3D texture mapping based on low-altitude oblique images. First, radiation correction is applied to the experimental images with a radiation processing algorithm. Then, an efficient occlusion detection algorithm based on OpenGL is proposed according to the mapping relation between the terrain triangular mesh surface and the images, in order to detect occlusion of the visible texture on the triangular facets and create a list of visible images. Finally, a texture clustering algorithm based on Markov Random Fields is put forward, utilizing the inherent attributes of the images, and the energy function is minimized via Graph Cuts. The experimental results show that the method is capable of reducing texture fragments, seams, and color inconsistency in 3D texture model reconstruction. Full article
Figures:

Figure 1. Local 3D model of the experimental area: (a) 3D fitting model; (b) 3D mesh model.
Figure 2. OpenGL transformation process.
Figure 3. Triangle facet subdivision: (a) original mesh; (b) global quartering; (c) our method.
Figure 4. Occlusion conditions.
Figure 5. Labeled graphs: (a) the general labeled graph; (b) our labeled graph.
Figure 6. The α-β graph.
Figure 7. Experimental oblique images.
Figure 8. Mesh simplification: (a) pyramid simplified model; (b) scene simplification instance.
Figure 9. Comparison of defogging effects: (a) original image; (b) Wallis processing result; (c) dark channel processing result; (d) depth map.
Figure 10. Comparison of camera response curves.
Figure 11. Comparison of texture radiation correction.
Figure 12. Efficiency comparison of occlusion detection.
Figure 13. Results comparison of occlusion detection.
Figure 14. Comparison of clustering optimization.
Figure 15. Comparison of the statistical results of clustering: (a) before clustering; (b) after clustering.
Figure 16. Texture reconstruction results.
Figure 17. Comparison of local zoom views of texture optimization.
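The texture-clustering step assigns each mesh facet a source image by minimizing an MRF energy: per-facet data costs plus a smoothness penalty on adjacent facets that take different labels. The sketch below is a deliberate simplification: a Potts pairwise term stands in for the paper's image-attribute terms, and greedy ICM stands in for Graph Cuts, which finds stronger optima on the same energy.

```python
import numpy as np

def mrf_energy(labels, data_cost, edges, lam=1.0):
    """Energy of a facet-to-image labeling: sum of per-facet data
    costs plus a Potts penalty (lam) for each pair of adjacent
    facets textured from different source images."""
    e = sum(data_cost[i][labels[i]] for i in range(len(labels)))
    e += lam * sum(labels[i] != labels[j] for i, j in edges)
    return e

def icm(data_cost, edges, lam=1.0, iters=10):
    """Iterated conditional modes: repeatedly give each facet the
    locally cheapest label. Simple but only locally optimal; it is
    a stand-in here for alpha-expansion Graph Cuts."""
    n, k = len(data_cost), len(data_cost[0])
    nbrs = [[] for _ in range(n)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    labels = [int(np.argmin(c)) for c in data_cost]  # data-only init
    for _ in range(iters):
        changed = False
        for i in range(n):
            costs = [data_cost[i][l] +
                     lam * sum(l != labels[j] for j in nbrs[i])
                     for l in range(k)]
            best = int(np.argmin(costs))
            if best != labels[i]:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels
```

Even this toy minimizer shows the intended effect: the smoothness term pulls isolated facets toward the label of their neighbours, which is what suppresses texture fragments and seams.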
4835 KiB  
Article
Investigating Surface and Near-Surface Bushfire Fuel Attributes: A Comparison between Visual Assessments and Image-Based Point Clouds
by Christine Spits, Luke Wallace and Karin Reinke
Sensors 2017, 17(4), 910; https://doi.org/10.3390/s17040910 - 20 Apr 2017
Cited by 15 | Viewed by 5591
Abstract
Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques [...] Read more.
Visual assessment, following guides such as the Overall Fuel Hazard Assessment Guide (OFHAG), is a common approach for assessing the structure and hazard of varying bushfire fuel layers. Visual assessments can be vulnerable to imprecision due to subjectivity between assessors, while emerging techniques such as image-based point clouds can offer land managers potentially more repeatable descriptions of fuel structure. This study compared the variability of estimates of surface and near-surface fuel attributes generated by eight assessment teams using the OFHAG and Fuels3D, a smartphone method utilising image-based point clouds, within three assessment plots in an Australian lowland forest. Surface fuel hazard scores derived from underpinning attributes were also assessed. Overall, this study found considerable variability between teams on most visually assessed variables, resulting in inconsistent hazard scores. Variability was also observed within point cloud estimates, but was on average two to eight times lower than that seen in visual estimates, indicating greater consistency and repeatability of this method. It is proposed that while variability within the Fuels3D method may be overcome through improved methods and equipment, inconsistencies in the OFHAG are likely due to the inherent subjectivity between assessors, which may be more difficult to overcome. This study demonstrates the capability of the Fuels3D method to efficiently and consistently collect data on fuel hazard and structure; as such, this method shows potential for use in fire management practices where accurate and reliable data are essential. Full article
Figures:

Figure 1. Location of the study area southeast of Melbourne, Australia, with the location and photos of study plots in Cardinia Reservoir (shaded blue).
Figure 2. Diagram of transect and plot layout for image capture, showing the approximate location of samples (s1 to s9), and a photograph indicating frame set up.
Figure 3. Example point clouds derived from the photosets captured in Plots 1, 2 and 3. (a,b) show a point cloud captured in Plot 1 with the Oppo phone; (c,d) show a point cloud captured in Plot 2 with the Motorola phone; (e,f) show a point cloud captured in Plot 3 with the Sony phone. Distance is measured from the plot centre, and height reflects estimated height above sea level.
Figure 4. Boxplots of eight assessment teams' estimates of surface and near-surface fuel attributes from visual assessments (dark grey) and image-based point clouds (light grey) across Plots 1, 2 and 3. Boxplots show the median, quartiles, minimum (lower whisker), maximum (upper whisker), and mean (red point). (a) Surface percent cover; (b) surface litter depth; (c) near-surface percent cover; (d) near-surface percent dead; (e) OFHAG near-surface average height and point cloud mean maximum height; (f) near-surface average height.
Figure 5. Isochrones showing the spread of surface litter height (y-axis) and percent cover values (x-axis) and the resulting surface fuel hazard score from visually assessed metrics (dashed outline) and point cloud metrics (solid outline) across Plots 1, 2 and 3.
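Point-cloud fuel attributes such as percent cover and mean maximum height (compared against visual estimates in Figure 4) can be derived by gridding the cloud on the ground plane. The sketch below is an illustrative assumption, not the Fuels3D algorithm; the cell size and near-surface height cutoff are hypothetical parameters.

```python
import numpy as np

def fuel_metrics(points, cell=0.05, near_surface_min=0.10):
    """Illustrative point-cloud fuel metrics (not the Fuels3D code).

    Grids the cloud on the XY plane (cell size in metres) and reports:
    - percent cover: share of occupied cells whose tallest point
      reaches the near-surface layer (>= near_surface_min, metres)
    - mean maximum height: average of per-cell maximum heights
    points: (n, 3) array of x, y, z coordinates in metres.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    z = points[:, 2]
    cell_max = {}
    for key, h in zip(map(tuple, ij), z):
        cell_max[key] = max(cell_max.get(key, -np.inf), h)
    maxima = np.array(list(cell_max.values()))
    cover = 100.0 * (maxima >= near_surface_min).mean()
    return cover, maxima.mean()
```

Because the per-cell maxima are computed once, both metrics stay consistent with each other for a given cloud, which is the repeatability advantage the study measures against visual assessment.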