Sensors, Volume 17, Issue 7 (July 2017) – 236 articles

Cover Story: Capsule endoscopy is a less invasive way than conventional endoscopy to image the interior of the gastrointestinal (GI) tract for potential clinical diagnosis of conditions such as inflammation and cancer. The SonoCAIT device (capsule for autonomous imaging and therapy) has been developed to explore whether a similar approach can be applied for treatment. Based on the principle of ultrasound-mediated targeted drug delivery, it includes a miniature camera, a drug-delivery channel and an ultrasound transducer able to manipulate microbubbles and drugs. Together, these components support one stage in a proposed patient pathway which also includes capsule-endoscopy based diagnosis, marking of sites of disease, and post-treatment follow-up.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
2504 KiB  
Article
The Role of Heart-Rate Variability Parameters in Activity Recognition and Energy-Expenditure Estimation Using Wearable Sensors
by Heesu Park, Suh-Yeon Dong, Miran Lee and Inchan Youn
Sensors 2017, 17(7), 1698; https://doi.org/10.3390/s17071698 - 24 Jul 2017
Cited by 33 | Viewed by 6987
Abstract
Human-activity recognition (HAR) and energy-expenditure (EE) estimation are major functions in the mobile healthcare system. Both functions have been investigated for a long time; however, several challenges remain unsolved, such as the confusion between activities and the recognition of energy-consuming activities involving little or no movement. To solve these problems, we propose a novel approach using an accelerometer and electrocardiogram (ECG). First, we collected a database of six activities (sitting, standing, walking, ascending, resting and running) of 13 voluntary participants. We compared the HAR performances of three models with respect to the input data type (with none, all, or some of the heart-rate variability (HRV) parameters). The best recognition performance was 96.35%, which was obtained with some selected HRV parameters. EE was also estimated for different choices of the input data type (with or without HRV parameters) and the model type (single and activity-specific). The best estimation performance was found in the case of the activity-specific model with HRV parameters. Our findings indicate that the use of human physiological data, obtained by wearable sensors, has a significant impact on both HAR and EE estimation, which are crucial functions in the mobile healthcare system. Full article
(This article belongs to the Special Issue Wearable and Ambient Sensors for Healthcare and Wellness Applications)
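To make the two-stage idea above concrete, here is a minimal sketch (not the authors' code; the feature names, data and model choices are placeholder assumptions) of concatenating accelerometer and HRV feature vectors, recognizing the activity, and then applying an activity-specific energy-expenditure regressor:

```python
# Minimal sketch of the two-stage pipeline described in the abstract:
# concatenate accelerometer and HRV features, classify the activity,
# then apply an activity-specific regressor for energy expenditure (EE).
# All data below are random placeholders standing in for real sensor features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 600
acc_feats = rng.normal(size=(n, 6))      # e.g., per-axis accelerometer statistics (assumed)
hrv_feats = rng.normal(size=(n, 4))      # e.g., SDNN, RMSSD, LF/HF ratio, ... (assumed)
X = np.hstack([acc_feats, hrv_feats])    # feature-vector concatenation
activity = rng.integers(0, 6, size=n)    # six activity labels, as in the study
ee = rng.uniform(1.0, 10.0, size=n)      # placeholder energy-expenditure target

# Stage 1: activity recognition on the concatenated features.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, activity)

# Stage 2: one EE regressor per recognized activity (the "activity-specific model").
regressors = {int(a): LinearRegression().fit(X[activity == a], ee[activity == a])
              for a in np.unique(activity)}

def estimate_ee(x):
    """Predict the activity first, then use that activity's EE regressor."""
    a = int(clf.predict(x.reshape(1, -1))[0])
    return regressors[a].predict(x.reshape(1, -1))[0]

print(estimate_ee(X[0]))
```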
Figures:
Figure 1: Overall system flow. A cross symbol in a circle indicates the concatenation of two feature vectors.
Figure 2: Sensors used in this study: (a) Shimmer3 (picture from its official website, http://www.shimmersensing.com/); (b) T-REX TR100A attached on a patch-type electrode (picture from its official manual).
Figure 3: Experiment pictures for all six activities.
Figure 4: Feature distributions of five activities in (a) time and (b) frequency domains. Each value represents the average feature value of one subject.
Figure 5: Scatter plots of training samples. (a) Samples of classes sitting (SI) and standing (ST) before selection; (b) samples of classes SI and ST after selection; (c) samples of classes walking (WK) and ascending (AS) before selection; (d) samples of classes WK and AS after selection.
Figure 6: The effect of data type on EE estimation performance, in the case of the activity-specific model (subject 10). Gray-shaded regions indicate dynamic activities (WK, AS, and RU).
Figure 7: The effect of data type on EE estimation performance, in the case of the activity-specific model (subject 10). Gray-shaded regions indicate dynamic activities (WK, AS, and running (RU)).
9071 KiB  
Article
Parametric Loop Division for 3D Localization in Wireless Sensor Networks
by Tanveer Ahmad, Xue Jun Li and Boon-Chong Seet
Sensors 2017, 17(7), 1697; https://doi.org/10.3390/s17071697 - 24 Jul 2017
Cited by 48 | Viewed by 6621
Abstract
Localization in Wireless Sensor Networks (WSNs) has been an active topic for more than two decades. A variety of algorithms have been proposed to improve localization accuracy; however, they are either limited to two-dimensional (2D) space or require specific sensor deployments for proper operation. In this paper, we propose a three-dimensional (3D) localization scheme for WSNs based on the well-known parametric Loop division (PLD) algorithm. The proposed scheme localizes a sensor node in a region bounded by a network of anchor nodes. By iteratively shrinking that region towards its center point, the proposed scheme provides better localization accuracy than existing schemes. Furthermore, it is cost-effective and independent of environmental irregularity. We provide an analytical framework for the proposed scheme and derive its lower-bound accuracy. Simulation results show that the proposed algorithm provides an average localization accuracy of 0.89 m with a standard deviation of 1.2 m. Full article
(This article belongs to the Section Sensor Networks)
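The core idea of shrinking an anchor-bounded region toward its best-fitting point can be illustrated with a toy sketch; this is not the paper's parametric Loop division procedure, and the anchor layout, noise level and shrink factor below are assumptions:

```python
# Toy sketch: localize a node by repeatedly shrinking the search region bounded
# by the anchors toward the candidate point that best matches the measured ranges.
import numpy as np

rng = np.random.default_rng(1)
anchors = rng.uniform(0, 50, size=(6, 3))             # assumed 3D anchor positions
target = np.array([20.0, 25.0, 10.0])                 # unknown node (ground truth)
ranges = np.linalg.norm(anchors - target, axis=1) + rng.normal(0, 0.3, 6)

def residual(p):
    """Mismatch between hypothesised and measured anchor-to-node ranges."""
    return np.sum((np.linalg.norm(anchors - p, axis=1) - ranges) ** 2)

lo, hi = anchors.min(axis=0), anchors.max(axis=0)     # box bounded by the anchor network
for _ in range(25):
    cands = rng.uniform(lo, hi, size=(200, 3))        # sample candidates inside the region
    best = cands[np.argmin([residual(c) for c in cands])]
    half = 0.7 * (hi - lo) / 2                        # shrink the region toward the best point
    lo, hi = best - half, best + half

print("estimate:", np.round(best, 2), "error [m]:", float(np.linalg.norm(best - target)))
```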
Figures:
Figure 1: Classification of localization algorithms.
Figure 2: (a) Triangulation of parametric node distribution from control vertices; (b) parametric point calculation in Loop division.
Figure 3: Triangulation and midpoint calculations in a parametric Loop division (PLD) network.
Figure 4: Effect on parameterization with various parametric factors.
Figure 5: Localized volume region along with localized node.
Figure 6: Flow diagram of the PLD algorithm.
Figure 7: Mean error analysis with different volumes of PLD.
Figure 8: Effect of multipath fading on localization error.
Figure 9: Location of anchor nodes, actual sensor nodes and estimated sensor nodes in a 3D environment.
Figure 10: Average localization error after 1000 iterations.
Figure 11: Localization error under different percentages of anchor node density.
Figure 12: Localization error vs. varying percentage of anchor node density.
Figure 13: Percentage maximum standard deviation with varying anchor node volume.
Figure 14: Comparison of lower-bound PLD network error to existing systems.
Figure 15: Comparison of the average position error of PLD with DV-Hop at 20% anchor nodes.
Figure 16: Comparison of the average position error of PLD with DV-Hop at 25% anchor nodes.
Figure 17: Comparison of the average position error of PLD with DV-Hop at 30% anchor nodes.
Figure 18: Impact of transmission range and localization accuracy of PLD with different network sizes.
Figure 19: Accuracy of the PLD network with different volumes of the PLD network.
Figure 20: Cumulative error probability in the PLD network with r = 2 m and r = 3 m.
Figure 21: Influence of reference anchor node position vs. localization error.
Figure 22: Complexity comparison between PLD and multi-dimensional scaling (MDS)-MAP.
Figure 23: Number of anchor nodes required and their corresponding lower bounds.
Figure 24: Localization error distance of PLD with A = 5.
Figure 25: Localization error distance of PLD with A = 6.
Figure 26: Random experiment of localization error of PLD with six anchor nodes in each cluster.
Figure 27: Mean localization error of PLD, DV-Hop, Advanced DV-Hop and MDS-MAP.
1781 KiB  
Article
A Secure and Verifiable Outsourced Access Control Scheme in Fog-Cloud Computing
by Kai Fan, Junxiong Wang, Xin Wang, Hui Li and Yintang Yang
Sensors 2017, 17(7), 1695; https://doi.org/10.3390/s17071695 - 24 Jul 2017
Cited by 71 | Viewed by 9316
Abstract
With the rapid development of big data and Internet of things (IOT), the number of networking devices and data volume are increasing dramatically. Fog computing, which extends cloud computing to the edge of the network can effectively solve the bottleneck problems of data transmission and data storage. However, security and privacy challenges are also arising in the fog-cloud computing environment. Ciphertext-policy attribute-based encryption (CP-ABE) can be adopted to realize data access control in fog-cloud computing systems. In this paper, we propose a verifiable outsourced multi-authority access control scheme, named VO-MAACS. In our construction, most encryption and decryption computations are outsourced to fog devices and the computation results can be verified by using our verification method. Meanwhile, to address the revocation issue, we design an efficient user and attribute revocation method for it. Finally, analysis and simulation results show that our scheme is both secure and highly efficient. Full article
(This article belongs to the Special Issue Security and Privacy Challenges in Emerging Fog Computing)
Figures:
Figure 1: System model in the fog-cloud computing environment.
Figure 2: System model of multi-authority access control in a fog-cloud system.
Figure 3: Comparison of encryption time with different numbers of authorities. (a) Encrypt_out time; (b) Encrypt_user time.
Figure 4: Comparison of decryption time with different numbers of authorities. (a) Decrypt_out time; (b) Decrypt_user time.
Figure 5: Computing cost for verification of outsourced encryption.
Figure 6: Comparison of computing cost of CSP, AA and DO in the attribute revocation process.
843 KiB  
Article
Emotion Recognition from Chinese Speech for Smart Affective Services Using a Combination of SVM and DBN
by Lianzhang Zhu, Leiming Chen, Dehai Zhao, Jiehan Zhou and Weishan Zhang
Sensors 2017, 17(7), 1694; https://doi.org/10.3390/s17071694 - 24 Jul 2017
Cited by 108 | Viewed by 8695
Abstract
Accurate emotion recognition from speech is important for applications like smart health care, smart entertainment, and other smart services. High accuracy emotion recognition from Chinese speech is challenging due to the complexities of the Chinese language. In this paper, we explore how to improve the accuracy of speech emotion recognition, including speech signal feature extraction and emotion classification methods. Five types of features are extracted from a speech sample: mel frequency cepstrum coefficient (MFCC), pitch, formant, short-term zero-crossing rate and short-term energy. By comparing statistical features with deep features extracted by a Deep Belief Network (DBN), we attempt to find the best features to identify the emotion status for speech. We propose a novel classification method that combines DBN and SVM (support vector machine) instead of using only one of them. In addition, a conjugate gradient method is applied to train DBN in order to speed up the training process. Gender-dependent experiments are conducted using an emotional speech database created by the Chinese Academy of Sciences. The results show that DBN features can reflect emotion status better than artificial features, and our new classification approach achieves an accuracy of 95.8%, which is higher than using either DBN or SVM separately. Results also show that DBN can work very well for small training databases if it is properly designed. Full article
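A minimal sketch of the deep-features-into-SVM idea follows; it assumes scikit-learn's BernoulliRBM as a stand-in for a (deeper) DBN feature extractor and uses random placeholder data rather than the emotional speech corpus used in the paper:

```python
# Hedged sketch: learn features with an RBM (stand-in for a DBN) and classify with an SVM.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 40))        # e.g., MFCC/pitch/formant/energy features (assumed)
y = rng.integers(0, 6, size=400)      # six emotion classes (placeholder labels)

model = Pipeline([
    ("scale", MinMaxScaler()),        # RBMs expect inputs scaled to [0, 1]
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("svm", SVC(kernel="rbf", C=1.0)),  # the SVM classifies the learned features
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```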
Figures:
Figure 1: Process of extracting speech features.
Figure 2: Process of extracting the Mel-Frequency Cepstral Coefficient (MFCC).
Figure 3: Structure of the Deep Belief Network (DBN).
Figure 4: Structure of combining the support vector machine (SVM) and DBN. Speech features are converted into deep features by a pre-trained DBN; these are the feature vectors output by the last hidden layer of the DBN. The feature vectors act as the input of the SVM and are used to train the SVM. The output of the SVM classifier is the emotion status corresponding to the input speech sample.
Figure 5: Dataset structure.
Figure 6: DBN training phase.
1108 KiB  
Article
Efficient Retrieval of Massive Ocean Remote Sensing Images via a Cloud-Based Mean-Shift Algorithm
by Mengzhao Yang, Wei Song and Haibin Mei
Sensors 2017, 17(7), 1693; https://doi.org/10.3390/s17071693 - 23 Jul 2017
Cited by 7 | Viewed by 5321
Abstract
The rapid development of remote sensing (RS) technology has resulted in the proliferation of high-resolution images. There are challenges involved in not only storing large volumes of RS images but also in rapidly retrieving the images for ocean disaster analysis such as for storm surges and typhoon warnings. In this paper, we present an efficient retrieval of massive ocean RS images via a Cloud-based mean-shift algorithm. Distributed construction method via the pyramid model is proposed based on the maximum hierarchical layer algorithm and used to realize efficient storage structure of RS images on the Cloud platform. We achieve high-performance processing of massive RS images in the Hadoop system. Based on the pyramid Hadoop distributed file system (HDFS) storage method, an improved mean-shift algorithm for RS image retrieval is presented by fusion with the canopy algorithm via Hadoop MapReduce programming. The results show that the new method can achieve better performance for data storage than HDFS alone and WebGIS-based HDFS. Speedup and scaleup are very close to linear changes with an increase of RS images, which proves that image retrieval using our method is efficient. Full article
(This article belongs to the Special Issue Marine Sensing)
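For reference, the core mean-shift update that the paper distributes over Hadoop MapReduce looks like the following single-machine sketch (a generic flat-kernel implementation, not the authors' canopy-assisted code):

```python
# Minimal mean-shift mode seeking: shift every point toward the mean of its
# neighbours within the bandwidth until the points stop moving.
import numpy as np

def mean_shift(points, bandwidth=1.0, iters=50, tol=1e-4):
    modes = points.astype(float).copy()
    for _ in range(iters):
        moved = 0.0
        for i, p in enumerate(modes):
            mask = np.linalg.norm(points - p, axis=1) < bandwidth   # flat kernel window
            new_p = points[mask].mean(axis=0)
            moved = max(moved, np.linalg.norm(new_p - p))
            modes[i] = new_p
        if moved < tol:                    # converged: no point moves noticeably
            break
    return modes

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
modes = mean_shift(data, bandwidth=1.0)
print(np.unique(np.round(modes, 1), axis=0))   # roughly two cluster modes
```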
Figures:
Figure 1: Storage flow of remote sensing (RS) image tiles.
Figure 2: The image tile coding process.
Figure 3: Two types of data for distributed storage. HDFS: Hadoop distributed file system.
Figure 4: Finding procedure of mean-shift.
Figure 5: Flow of creating a canopy.
Figure 6: Flow diagram of the mean-shift algorithm modified with the canopy algorithm.
Figure 7: Storage structure of image data in MapReduce processing.
Figure 8: Throughput rate comparison between two modes.
Figure 9: Construction performance comparison between two modes.
Figure 10: Reading time comparison of different tile data files.
Figure 11: Speedup ratio in five groups of different-size image sets.
Figure 12: Scaleup in three group datasets.
2567 KiB  
Article
Angular Rate Sensing with GyroWheel Using Genetic Algorithm Optimized Neural Networks
by Yuyu Zhao, Hui Zhao, Xin Huo and Yu Yao
Sensors 2017, 17(7), 1692; https://doi.org/10.3390/s17071692 - 22 Jul 2017
Cited by 13 | Viewed by 4742
Abstract
GyroWheel is an integrated device that can provide three-axis control torques and two-axis angular rate sensing for small spacecraft. The large tilt angle of its rotor and its de-tuned spin rate lead to complex, non-linear dynamics as well as difficulties in measuring angular rates. In this paper, the problem of angular rate sensing with the GyroWheel is investigated. First, a simplified rate sensing equation is introduced, and the error characteristics of the method are analyzed. Based on the analysis results, a rate sensing principle founded on torque balance theory is developed, and a practical way to estimate the angular rates within the whole operating range of the GyroWheel is provided by using genetic-algorithm-optimized neural networks. The angular rates can be determined from the measurable values of the GyroWheel (including tilt angles, spin rate and torque coil currents) together with the weights and biases of the neural networks. Finally, simulation results are presented to illustrate the effectiveness of the proposed angular rate sensing method. Full article
(This article belongs to the Special Issue Inertial Sensors for Positioning and Navigation)
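The principle of optimizing neural-network weights with a genetic algorithm can be sketched as follows; the tiny network, GA settings and synthetic target below are illustrative assumptions, not the paper's configuration:

```python
# Hedged illustration: encode all NN weights in one chromosome and evolve them with a GA.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 3))           # e.g., tilt angles, spin rate, coil current (assumed)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2]   # synthetic "angular rate" target

H = 8                                            # hidden units
n_w = 3 * H + H + H + 1                          # W1, b1, W2, b2 flattened into one chromosome

def predict(w, X):
    W1 = w[:3 * H].reshape(3, H); b1 = w[3 * H:4 * H]
    W2 = w[4 * H:5 * H];          b2 = w[5 * H]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    return -np.mean((predict(w, X) - y) ** 2)    # negative MSE (higher is better)

pop = rng.normal(0, 1, size=(60, n_w))           # initial population of chromosomes
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]      # selection: keep the fittest
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        mask = rng.random(n_w) < 0.5             # uniform crossover
        child = np.where(mask, a, b) + rng.normal(0, 0.1, n_w) * (rng.random(n_w) < 0.1)
        children.append(child)                   # plus sparse Gaussian mutation
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("final MSE:", -fitness(best))
```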
Figures:
Figure 1: Cross-sectional view of the GyroWheel system.
Figure 2: Reference frames and gimbal angles.
Figure 3: Relationship between rate sensing errors and tilt angles: (a) X-axis rate sensing error versus x-axis tilt; (b) Y-axis rate sensing error versus y-axis tilt; (c) X-axis rate sensing error versus y-axis tilt; (d) Y-axis rate sensing error versus x-axis tilt.
Figure 4: Schematic of the angular rate test.
Figure 5: A simple MLP ANN.
Figure 6: GA-optimized ANN algorithm: (a) flowchart; (b) an example of storing the weights and biases of an ANN model in the genes of a chromosome.
Figure 7: Schematic of the simulation platform.
Figure 8: GAANN architecture for GyroWheel rate sensing.
Figure 9: GAANN correlation performance: (a) ANN models for predicting equivalent rates; (b) ANN models for predicting torque factors.
Figure 10: Relationship between rate sensing errors and tilt angles: (a) X-axis rate sensing error versus x-axis tilt; (b) Y-axis rate sensing error versus y-axis tilt; (c) X-axis rate sensing error versus y-axis tilt; (d) Y-axis rate sensing error versus x-axis tilt.
Figure 11: Histograms of rate sensing errors: (a) X-axis; (b) Y-axis.
5196 KiB  
Article
A Context-Driven Model for the Flat Roofs Construction Process through Sensing Systems, Internet-of-Things and Last Planner System
by María Dolores Andújar-Montoya, Diego Marcos-Jorquera, Francisco Manuel García-Botella and Virgilio Gilart-Iglesias
Sensors 2017, 17(7), 1691; https://doi.org/10.3390/s17071691 - 22 Jul 2017
Cited by 9 | Viewed by 6768
Abstract
The main causes of building defects are errors in the design and the construction phases. These causes related to construction are mainly due to the general lack of control of construction work and represent approximately 75% of the anomalies. In particular, one of the main causes of such anomalies, which end in building defects, is the lack of control over the physical variables of the work environment during the execution of tasks. Therefore, the high percentage of defects detected in buildings that have the root cause in the construction phase could be avoidable with a more accurate and efficient control of the process. The present work proposes a novel integration model based on information and communications technologies for the automation of both construction work and its management at the execution phase, specifically focused on the flat roof construction process. Roofs represent the second area where more defects are claimed. The proposed model is based on a Web system, supported by a service oriented architecture, for the integral management of tasks through the Last Planner System methodology, but incorporating the management of task restrictions from the physical environment variables by designing specific sensing systems. Likewise, all workers are integrated into the management process by Internet-of-Things solutions that guide them throughout the execution process in a non-intrusive and transparent way. Full article
(This article belongs to the Special Issue New Generation Sensors Enabling and Fostering IoT)
Figures:
Figure 1: State of the art of smart sensors and data collection technologies applied to the construction of residential buildings.
Figure 2: Modelling of the sub-processes for achieving an automated construction system through Eriksson-Penker notation.
Figure 3: General architecture of the proposed model.
Figure 4: Variables that cause most of the defects in walkable flat roofs.
Figure 5: Classification of variables by extension and temporal distribution.
Figure 6: Distribution of sensors on a roof.
Figure 7: Architectural model of the sensors.
Figure 8: Monitoring flat roof service defined with RAML.
Figure 9: Architecture for the LPS module, Business Rule Engine and interaction modules.
Figure 10: Picture of the implemented sensing system prototype.
Figure 11: Layered architecture of the prototype.
Figure 12: Prototype LPS.
Figure 13: Test scenario in a real environment.
Figure 14: Box plot of request-response time results.
Figure 15: Power consumption by the prototype.
Figure 16: Request-response time of the service implemented in the prototype.
Figure 17: Screenshot from the LPS WebApp of the weekly work plan at week 31.
Figure 18: Screenshot from the LPS WebApp with the result of the monitoring process.
4441 KiB  
Article
Pinhole Zone Plate Lens for Ultrasound Focusing
by Constanza Rubio, José Miguel Fuster, Sergio Castiñeira-Ibáñez, Antonio Uris, Francisco Belmar and Pilar Candelas
Sensors 2017, 17(7), 1690; https://doi.org/10.3390/s17071690 - 22 Jul 2017
Cited by 16 | Viewed by 8154
Abstract
The focusing capabilities of a pinhole zone plate lens are presented and compared with those of a conventional Fresnel zone plate lens. The focusing properties are examined both experimentally and numerically. The results confirm that a pinhole zone plate lens can be an alternative to a Fresnel lens. A smooth filtering effect is created in pinhole zone plate lenses, giving rise to a reduction of the side lobes around the principal focus associated with the conventional Fresnel zone plate lens. The manufacturing technique of the pinhole zone plate lens allows the designing and constructing of lenses for different focal lengths quickly and economically and without the need to drill new plates. Full article
(This article belongs to the Special Issue Acoustic Wave Resonator-Based Sensors)
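For background, the zone radii of a conventional Fresnel zone plate follow the standard textbook design relation r_n = sqrt(n·λ·F + (n·λ/2)²); the short sketch below evaluates it for assumed example values (200 kHz ultrasound in water, 0.1 m focal length) and is not taken from the paper:

```python
# Classical Fresnel zone plate design relation (textbook formula, not the paper's data).
import numpy as np

def zone_radii(focal_length_m, frequency_hz, n_zones, speed_of_sound=1482.0):
    """Zone radii of an acoustic Fresnel zone plate in water (assumed c ≈ 1482 m/s)."""
    lam = speed_of_sound / frequency_hz
    n = np.arange(1, n_zones + 1)
    return np.sqrt(n * lam * focal_length_m + (n * lam / 2) ** 2)

# Example: 200 kHz (as in the measurements above), assumed 0.1 m focal length.
print(np.round(zone_radii(0.1, 200e3, 8) * 1e3, 2), "mm")
```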
Figures:
Figure 1: Fresnel zone plate lens considered.
Figure 2: Pinhole zone plate lens considered.
Figure 3: Experimental set-up.
Figure 4: Measured normalized pressure amplitude |p|/|p|max at 200 kHz in an XZ plane collinear with the center-line plane of the piston transducer and sample lenses. (a) Fresnel zone plate (FZP) lens; (b) pinhole zone plate (PZP) lens.
Figure 5: Simulated intensity fields of the normalized acoustic pressure distribution along the x-axis at z = 0 of the Fresnel zone plate lens using a plane wave and a piston wave.
Figure 6: Comparison of the simulated and measured intensity fields of the normalized acoustic pressure amplitude along the z-axis at x = 0.104 m of the Fresnel zone plate (FZP) lens.
Figure 7: Measured intensity fields of the normalized acoustic pressure amplitude along the x-axis at z = 0 of the Fresnel zone plate (FZP) lens and pinhole zone plate (PZP) lens.
Figure 8: Measured (a) and simulated (b) intensity distributions along the z-axis at the focal point for the Fresnel zone plate (FZP) lens and pinhole zone plate (PZP) lens.
21411 KiB  
Article
3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor
by Haopeng Zhang, Quanmao Wei and Zhiguo Jiang
Sensors 2017, 17(7), 1689; https://doi.org/10.3390/s17071689 - 22 Jul 2017
Cited by 23 | Viewed by 6403
Abstract
In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the wrong matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced, in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated image datasets and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement of the structure and visualization of the recovered points. Full article
(This article belongs to the Special Issue Imaging Depth Sensors—Sensors, Algorithms and Applications)
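One ingredient of such shape-prior refinement, detecting a dominant plane with RANSAC and snapping its inliers onto it, can be sketched generically as follows (this is illustrative point-cloud code, not the authors' implementation):

```python
# Generic RANSAC plane detection on a synthetic point cloud, then projection of inliers.
import numpy as np

def ransac_plane(points, n_iter=500, threshold=0.02, rng=np.random.default_rng(0)):
    """Return (unit normal n, offset d) of the plane n·x = d with the most inliers."""
    best_count, best_plane = 0, None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-9:              # skip degenerate (collinear) samples
            continue
        n = n / np.linalg.norm(n)
        d = n @ p1
        count = int((np.abs(points @ n - d) < threshold).sum())
        if count > best_count:
            best_count, best_plane = count, (n, d)
    return best_plane

rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(-1, 1, (300, 2)), np.zeros(300)]          # noisy z ≈ 0 plane
cloud = np.vstack([plane_pts + rng.normal(0, 0.01, plane_pts.shape),
                   rng.uniform(-1, 1, (60, 3))])                        # plus outliers
n, d = ransac_plane(cloud)
inliers = np.abs(cloud @ n - d) < 0.02
cloud[inliers] -= np.outer(cloud[inliers] @ n - d, n)                   # snap inliers onto the plane
print("plane normal ≈", np.round(n, 2), "| inliers:", int(inliers.sum()))
```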
Figures:
Figure 1: Framework for the reconstruction of space objects. PMVS, patch-based multi-view stereo.
Figure 2: Incorrect reconstruction caused by symmetric structures and repeated textures. (a) Matches; (b) poses; (c) point cloud.
Figure 3: Correct reconstruction with the modified image-adding strategy. (a) Poses; (b) point cloud (sparse); (c) point cloud (dense).
Figure 4: Illustration of normal vectors. All of the possible normal vectors of Π are approximately uniformly distributed in the first octant. Here, the discretized resolution of the plane parameters is 0.1.
Figure 5: Illustration of point adjustment for rotationally-symmetric structures. For layer L_i, move the point at r_ij along the radius direction to r′_ij, and set its normal as n′_ij = S_i − r′_ij, where S_i is the trimmed mean location of {l_ij}.
Figure 6: Samples of image data used in our experiments. From top to bottom: image samples of Shenzhou-6 (first row) and Tiangong-1 (second row), rendered images of two 3D CAD models (third row) and real images taken from two actual packages (fourth row).
Figure 7: Results of plane detection with different verification thresholds n (rows) and θ (columns). Max_np in Algorithm 1 is six. Multiple planes are detected with different colors; point clouds at the lower left corner are the points filtered out.
Figure 8: Results of rotationally-symmetric structure detection with different verification thresholds d. Point clouds at the lower left corner, which are rendered with normals, are the points filtered out. The bar chart below the point cloud shows the histogram of the angle error of these filtered points, along with a horizontal bar indicating the average and RMS.
Figure 9: Results of reconstruction and refinement for the box model (a), the packing box (b), the cylinder model (c) and the packing canister (d). From left to right (top to bottom): the recovered point cloud, the point cloud after refinement and the points filtered by refinement.
Figure 10: Incorrect reconstruction caused by the erasure of the supporter and turntable.
Figure 11: Outline of the reconstructed point cloud for the (a) cylinder model and (b) packing canister.
Figure 12: Results of reconstruction and refinement for Shenzhou-6 and Tiangong-1. From left to right: the recovered point cloud, the point cloud after refinement and the points filtered by refinement.
Figure 13: Outline of the reconstructed point cloud for (a) Shenzhou-6 and (b) Tiangong-1.
1421 KiB  
Article
Double-Layer Compressive Sensing Based Efficient DOA Estimation in WSAN with Block Data Loss
by Peng Sun, Liantao Wu, Kai Yu, Huajie Shao and Zhi Wang
Sensors 2017, 17(7), 1688; https://doi.org/10.3390/s17071688 - 22 Jul 2017
Cited by 2 | Viewed by 4560
Abstract
Accurate information acquisition is of vital importance for wireless sensor array network (WSAN) direction of arrival (DOA) estimation. However, due to the lossy nature of low-power wireless links, data loss, especially block data loss induced by adopting a large packet size, has a catastrophic effect on DOA estimation performance in WSAN. In this paper, we propose a double-layer compressive sensing (CS) framework to eliminate the hazards of block data loss, to achieve high accuracy and efficient DOA estimation. In addition to modeling the random packet loss during transmission as a passive CS process, an active CS procedure is introduced at each array sensor to further enhance the robustness of transmission. Furthermore, to avoid the error propagation from signal recovery to DOA estimation in conventional methods, we propose a direct DOA estimation technique under the double-layer CS framework. Leveraging a joint frequency and spatial domain sparse representation of the sensor array data, the fusion center (FC) can directly obtain the DOA estimation results according to the received data packets, skipping the phase of signal recovery. Extensive simulations demonstrate that the double-layer CS framework can eliminate the adverse effects induced by block data loss and yield a superior DOA estimation performance in WSAN. Full article
(This article belongs to the Section Sensor Networks)
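In generic compressive-sensing notation (our own summary, not equations reproduced from the paper), the two measurement layers can be written as:

```latex
% Generic single-block CS notation (symbols are assumptions, not the paper's).
% x in R^N : one sensor's data block, sparse as x = \Psi s
% \Phi_a   : active projection applied at the sensor before packetization (e.g., Gaussian or permutation)
% \Phi_p   : passive 0/1 row-selection matrix modelling which packets survive the lossy link
\begin{aligned}
  y &= \Phi_p \, \Phi_a \, x = \Phi_p \Phi_a \Psi \, s, \qquad \lVert s \rVert_0 \ll N,\\
  \hat{s} &= \operatorname*{arg\,min}_{s}\; \lVert s \rVert_1
  \quad \text{subject to} \quad \lVert y - \Phi_p \Phi_a \Psi \, s \rVert_2 \le \epsilon .
\end{aligned}
```

As the abstract notes, the DCS-DDOA variant avoids explicitly recovering s: using a joint frequency and spatial domain sparse representation, the fusion center estimates the DOAs directly from the received packets y.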
Figures:
Figure 1: The spatial characteristic of wireless links.
Figure 2: Data packet structure of IEEE 802.15.4. MFR, MAC footer.
Figure 3: Packet reception rate (PRR) and normalized transmission efficiency E versus payload length under different BER.
Figure 4: Histograms of the occurrence number for different sizes of block data loss under different packet sizes.
Figure 5: Schematic diagram of the double-layer compressive sensing (CS) framework and the DOA estimation process of DCS-DOA and DCS-direct DOA (DDOA).
Figure 6: Examples of the passive CS measurement matrix under different packet sizes, with blue grids denoting one and blank grids denoting zero.
Figure 7: Histograms of the absolute off-diagonal elements of the Gram matrix corresponding to the equivalent measurement matrices under the single-layer CS and double-layer CS frameworks when N = 512 and M = 64.
Figure 8: The active CS process at each array sensor using a projection matrix constructed from a permutation or Gaussian matrix.
Figure 9: The signal recovery error, DOA estimation RMSE and detection frequency versus the number of received data samples M under different packet sizes when projection (active CS) is not introduced at each array sensor.
Figure 10: The comparison of DOA spatial spectrum between CS-DOA and DCS-DOA with packet size = 32.
Figure 11: The comparison of DOA spatial spectrum between CS-DOA and DCS-DOA with packet size = 64.
Figure 12: The comparison of DOA RMSE between CS-DOA and DCS-DOA under different packet sizes.
Figure 13: The comparison of detection frequency between CS-DOA and DCS-DOA under different packet sizes.
Figure 14: The DOA RMSE and detection frequency for DCS-DOA versus M under different packet sizes.
Figure 15: The DOA spatial spectrum for DCS-DDOA with the packet size being 32 and 64.
Figure 16: The comparison of DOA estimation RMSE among CS-DOA, DCS-DOA and DCS-DDOA under different packet sizes.
Figure 17: The comparison of detection frequency among CS-DOA, DCS-DOA and DCS-DDOA under different packet sizes.
Figure 18: The comparison of DOA estimation RMSE and detection frequency between DCS-DOA and DCS-DDOA under varying SNR with the packet size being 64 and M being 64.
Figure 19: The computation time spent on one DOA estimate for DCS-DOA and DCS-DDOA.
6148 KiB  
Article
Motor Control Training for the Shoulder with Smart Garments
by Qi Wang, Liesbet De Baets, Annick Timmermans, Wei Chen, Luca Giacolini, Thomas Matheve and Panos Markopoulos
Sensors 2017, 17(7), 1687; https://doi.org/10.3390/s17071687 - 22 Jul 2017
Cited by 19 | Viewed by 12138
Abstract
Wearable technologies for posture monitoring and posture correction are emerging as a way to support and enhance physical therapy treatment, e.g., for motor control training in neurological disorders or for treating musculoskeletal disorders, such as shoulder, neck, or lower back pain. Among the various technological options for posture monitoring, wearable systems offer potential advantages regarding mobility, use in different contexts and sustained tracking in daily life. We describe the design of a smart garment named Zishi to monitor compensatory movements and evaluate its applicability for shoulder motor control training in a clinical setting. Five physiotherapists and eight patients with musculoskeletal shoulder pain participated in the study. The attitudes of patients and therapists towards the system were measured using standardized survey instruments. The results indicate that patients and their therapists consider Zishi a credible aid for rehabilitation and patients expect it will help towards their recovery. The system was perceived as highly usable and patients were motivated to train with the system. Future research efforts on the improvement of the customization of feedback location and modality, and on the evaluation of Zishi as support for motor learning in shoulder patients, should be made. Full article
(This article belongs to the Collection Sensors for Globalized Healthy Living and Wellbeing)
Figures:
Graphical abstract
Figure 1: (a) Schematic anatomy of the shoulder complex; (b) the compensatory movement in the task execution.
Figure 2: Calibration model of the compensatory movement from the shoulder girdle.
Figure 3: Calibration model of the flexion and extension movement of the torso.
Figure 4: System concept overview of Zishi: (a) concept illustration; (b) user wearing the garment; blue dots show the sensor positions.
Figure 5: Conductive textile-based flexible traces.
Figure 6: Garment implementation: (a) sensor embedded in a Velcro strap by coated conductive yarn; (b) Velcro adjustments.
Figure 7: Feedback strategy.
Figure 8: Interface design.
Figure 9: Interface design in different stages. (a) Automatically connected; (b) setting personalized values for the start position and threshold; (c) visual feedback when the shoulder value is over range.
Figure 10: Movement description: (a) shoulder flexion; (b) elevation in the scapular plane.
Figure 11: Task execution: (a) standardized calibration of arm movement with a goniometer; (b) the subject performing task 4, lifting the bottle to the board.
Figure 12: Subscale findings of the Intrinsic Motivation Inventory questionnaire evaluated in patients with shoulder pain.
Figure 13: Technology acceptance was measured with the UTAUT questionnaire, achieving positive evaluations by the participants.
4535 KiB  
Article
Aptamer-Based Carboxyl-Terminated Nanocrystalline Diamond Sensing Arrays for Adenosine Triphosphate Detection
by Evi Suaebah, Takuro Naramura, Miho Myodo, Masataka Hasegawa, Shuichi Shoji, Jorge J. Buendia and Hiroshi Kawarada
Sensors 2017, 17(7), 1686; https://doi.org/10.3390/s17071686 - 21 Jul 2017
Cited by 5 | Viewed by 6434
Abstract
Here, we propose simple diamond functionalization by carboxyl termination for adenosine triphosphate (ATP) detection by an aptamer. The high-sensitivity label-free aptamer sensor for ATP detection was fabricated on nanocrystalline diamond (NCD). Carboxyl termination of the NCD surface by vacuum ultraviolet excimer laser and fluorine termination of the background region as a passivated layer were investigated by X-ray photoelectron spectroscopy. Single strand DNA (amide modification) was used as the supporting biomolecule to immobilize into the diamond surface via carboxyl termination and become a double strand with aptamer. ATP detection by aptamer was observed as a 66% fluorescence signal intensity decrease of the hybridization intensity signal. The sensor operation was also investigated by the field-effect characteristics. The shift of the drain current–drain voltage characteristics was used as the indicator for detection of ATP. From the field-effect characteristics, the shift of the drain current–drain voltage was observed in the negative direction. The negative charge direction shows that the aptamer is capable of detecting ATP. The ability of the sensor to detect ATP was investigated by fabricating a field-effect transistor on the modified NCD surface. Full article
(This article belongs to the Section Biosensors)
Figures:
Figure 1: Schematic diagram of micropattern fabrication on the nanocrystalline diamond (NCD) surface. (a) Hydrogen termination; (b) vacuum ultraviolet (VUV)/ozone treatment; (c) gold deposition; (d) photolithography process; (e) fluorine termination; (f) Au mask etching; (g) optical microscopy image of the dot pattern formed on the NCD surface. The dots are terminated by carboxyl groups and the regions outside the dots are fluorine-terminated as a background.
Figure 2: Schematic drawing of adenosine triphosphate (ATP) detection in a solution-gate field-effect transistor (SGFET) based on a change in surface charge. PBS = phosphate-buffered saline.
Figure 3: Experimental setup for data measurement. (a) Biomolecule dropped onto the surface and placed in the humid chamber; (b) incubation for 2 h at 38 °C for the immobilization process; (c) washing treatment; (d) hybridization; (e) washing treatment; (f) fluorescence data collection for hybridization; (g) incubation for 1 h at 25 °C for ATP detection; (h) washing treatment; (i) fluorescence data collection for ATP detection; (j) denaturation treatment; (k) washing treatment; (l) fluorescence data collection for denaturation.
Figure 4: X-ray photoelectron spectroscopy (XPS) data of the oxidized diamond surface following VUV irradiation for 45 min. The inset shows the wide graph for the carboxyl coverage.
Figure 5: Fluorescence signals at different steps of the sensor operation. (a) Hybridization of the supporting DNA with the aptamer modified with Cy-5 as a fluorescent indicator; (b) ATP detection via the aptamer; (c) denaturation, which occurred when ATP and the aptamer were removed from the sensor and the DNA became single-stranded; (d) comparison of the relative fluorescence intensities at the three observation points for similar areas.
Figure 6: Relationship between the fluorescence intensity and different ATP concentrations. Each data point is the average of three measurements.
Figure 7: I–V (drain current I_d and drain–source voltage V_d) characteristics of the modified diamond-based field-effect transistor (FET) showing ATP activity (gate voltage −0.2 V).
Figure 8: Reusability of the aptasensor for seven cycles over two weeks.
Figure 9: Fluorescence intensity variation during aptasensor reuse over two weeks.
1244 KiB  
Article
Noncontact Sleep Study by Multi-Modal Sensor Fusion
by Ku-young Chung, Kwangsub Song, Kangsoo Shin, Jinho Sohn, Seok Hyun Cho and Joon-Hyuk Chang
Sensors 2017, 17(7), 1685; https://doi.org/10.3390/s17071685 - 21 Jul 2017
Cited by 30 | Viewed by 6828
Abstract
Polysomnography (PSG) is considered as the gold standard for determining sleep stages, but due to the obtrusiveness of its sensor attachments, sleep stage classification algorithms using noninvasive sensors have been developed throughout the years. However, the previous studies have not yet been proven reliable. In addition, most of the products are designed for healthy customers rather than for patients with sleep disorder. We present a novel approach to classify sleep stages via low cost and noncontact multi-modal sensor fusion, which extracts sleep-related vital signals from radar signals and a sound-based context-awareness technique. This work is uniquely designed based on the PSG data of sleep disorder patients, which were received and certified by professionals at Hanyang University Hospital. The proposed algorithm further incorporates medical/statistical knowledge to determine personal-adjusted thresholds and devise post-processing. The efficiency of the proposed algorithm is highlighted by contrasting sleep stage classification performance between single sensor and sensor-fusion algorithms. To validate the possibility of commercializing this work, the classification results of this algorithm were compared with the commercialized sleep monitoring device, ResMed S+. The proposed algorithm was investigated with random patients following PSG examination, and results show a promising novel approach for determining sleep stages in a low cost and unobtrusive manner. Full article
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
Figures:
Figure 1: Block diagram of multi-modal sensors. LNA, low-noise amplifier; VCO, voltage-controlled oscillator.
Figure 2: Block diagram of the proposed method.
Figure 3: The wake-related movement existence and absence detection (WRMEnAD) algorithm: (a) absence of the patient during sleep; (b) performance of the WRMEnAD algorithm.
Figure 4: The effect of sound features on wake/sleep classification: (a) the detection of snore event features during the wake and sleep classes; (b) the distribution of the snore event feature with respect to the wake and sleep (NREM and REM) classes; (c) the distribution of the normalized cycle intensity (CI) feature during the wake and sleep classes. The solid line describes that the post-processing clearly separates the distribution of the normalized CI feature in the wake class with respect to the sleep class.
Figure 5: First occurrence of REM.
Figure 6: The devices used in the experiment: (a) RFbeam K-LC5 Doppler radar sensor and ST-200 evaluation system; (b) OPERA MP-06 directional microphone sensor; (c) ResMed S+.
Figure 7: The experimental environment at Hanyang University Hospital: (a) Room A; (b) Room B.
Figure 8: Box plots of: (a) wake; (b) REM sleep; (c) NREM sleep; (d) total sleep stage classification of the 13 tested subjects. The x axis and y axis respectively represent the tested sleep monitoring algorithms and the accuracy (%) with respect to the polysomnographic reference data. The line at the center of each box represents the median, and the upper and lower edges of the box represent the 75th and 25th percentiles. The ends of the whiskers represent the 90th and 10th percentiles.
Figure 9: Bland–Altman plots of: (a) wake; (b) REM sleep; (c) NREM sleep; (d) total sleep stage classification accuracy, comparing the performance of sleep stages via the sensor-fusion algorithm (SSSF) and sleep stages via ResMed S+ (SSR). The means of the accuracies of SSSF and SSR are on the x axis, and the differences (SSSF − SSR) are on the y axis. Indicated on the right are the values, the bias (mean) ± the standard deviation (SD). The bold line represents the mean (bias) difference in the accuracies, and the dotted lines represent the limits of agreement (from −1.96 to +1.96 standard deviations with respect to the mean).
1696 KiB  
Article
Selectively Enhanced UV-A Photoresponsivity of a GaN MSM UV Photodetector with a Step-Graded AlxGa1−xN Buffer Layer
by Chang-Ju Lee, Chul-Ho Won, Jung-Hee Lee, Sung-Ho Hahm and Hongsik Park
Sensors 2017, 17(7), 1684; https://doi.org/10.3390/s17071684 - 21 Jul 2017
Cited by 25 | Viewed by 6855
Abstract
The UV-to-visible rejection ratio is one of the important figures of merit of GaN-based UV photodetectors. For cost-effectiveness and large-scale fabrication of GaN devices, we grew a GaN epitaxial layer on a silicon substrate with complex buffer layers for stress release. It is known that the structure of the buffer layers affects the performance of devices fabricated on the GaN epitaxial layers. In this study, we show that the design of the buffer layer structure can affect the UV-to-visible rejection ratio of GaN UV photodetectors. The GaN photodetector fabricated on a GaN-on-silicon substrate with a step-graded AlxGa1−xN buffer layer has a highly selective photoresponse at 365-nm wavelength. The UV-to-visible rejection ratio of the GaN UV photodetector with the step-graded AlxGa1−xN buffer layer was an order of magnitude higher than that of a photodetector with a conventional GaN/AlN multi-buffer layer. The maximum photoresponsivity was as high as 5 × 10² A/W. This result implies that the design of the buffer layer is important for the photoresponse characteristics of GaN UV photodetectors as well as for the crystal quality of the GaN epitaxial layers. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>Schematic cross-sectional views of GaN metal-semiconductor-metal (MSM) UV photodetectors showing epitaxial layer structures with different buffer layers and top view image of the fabricated UV photodetector. (<b>a</b>) Epitaxial layer structure with a step-graded Al<sub>x</sub>Ga<sub>1−x</sub>N buffer layer and (<b>b</b>) epitaxial layer structure with a conventionally-used high-temperature(HT)-GaN/low-temperature(LT)-AlN buffer layer; (<b>c</b>) Optical image of the GaN MSM UV photodetector. The design of Schottky electrode structure was identical for both types of devices. The thickness of the GaN active layer, material for the Schottky contact, and device fabrication process for both devices were also the same.</p>
Full article ">Figure 2
<p>Transmission electron microscope (TEM) images of GaN epitaxial layer structures with different buffer layers. (<b>a</b>) Epitaxial layer structure with a step-graded Al<sub>x</sub>Ga<sub>1−x</sub>N buffer layer and (<b>b</b>) epitaxial layer structure with a conventionally-used HT-GaN/LT-AlN buffer layer (scale bar: 100 nm); (<b>c</b>) high-resolution X-ray diffraction (HR-XRD) 2θ−ω scan of the GaN epitaxial layer grown on silicon substrate with the step-graded Al<sub>x</sub>Ga<sub>1−x</sub>N buffer layer, showing the Al mole fractions of the buffer structure.</p>
Full article ">Figure 3
<p>Electrical characteristics of GaN MSM UV photodetector with the step-graded Al<sub>x</sub>Ga<sub>1−x</sub>N buffer layer. (<b>a</b>) Current-voltage (I-V) characteristics and (<b>b</b>) photo-to-dark extinction ratios of the GaN photodetector at dark condition and under UV illumination with different wavelengths.</p>
Full article ">Figure 4
<p>Spectral photoresponsivity characteristics of GaN MSM UV photodetectors with (<b>a</b>) the step-graded Al<sub>x</sub>Ga<sub>1−x</sub>N buffer layer and (<b>b</b>) HT-GaN/LT-AlN buffer layer, indicating significantly increased photoresponsivity of the GaN MSM UV photodetector with the step-graded Al<sub>x</sub>Ga<sub>1−x</sub>N buffer layer near 365-nm wavelength region (UV-A region).</p>
Full article ">Figure 5
<p>UV-to-visible rejection ratio of GaN MSM UV photodetectors with (<b>a</b>) the step-graded Al<sub>x</sub>Ga<sub>1−x</sub>N buffer layer and (<b>b</b>) HT-GaN/LT-AlN buffer layer, showing significantly enhanced UV-to-visible rejection ratio of the GaN MSM UV photodetector with the step-graded Al<sub>x</sub>Ga<sub>1−x</sub>N buffer layer at 365-nm wavelength.</p>
Full article ">Figure 6
<p>Energy band diagram of (<b>a</b>) the epitaxial structure of the GaN layer with the step-graded Al<sub>x</sub>Ga<sub>1−x</sub>N buffer layer; (<b>b</b>) GaN MSM UV photodetector under 365-nm UV irradiation with reflection and reabsorption effect of incident light.</p>
">
8678 KiB  
Article
Demonstration and Methodology of Structural Monitoring of Stringer Runs out Composite Areas by Embedded Optical Fiber Sensors and Connectors Integrated during Production in a Composite Plant
by Carlos Miguel Giraldo, Juan Zúñiga Sagredo, José Sánchez Gómez and Pedro Corredera
Sensors 2017, 17(7), 1683; https://doi.org/10.3390/s17071683 - 21 Jul 2017
Cited by 7 | Viewed by 7535
Abstract
Embedding optical fiber sensors into composite structures for Structural Health Monitoring purposes is not just one of the most attractive solutions contributing to smart structures, but also the optimum integration approach that ensures maximum protection and integrity of the fibers. Nevertheless, this intended [...] Read more.
Embedding optical fiber sensors into composite structures for Structural Health Monitoring purposes is not just one of the most attractive solutions contributing to smart structures, but also the optimum integration approach that ensures maximum protection and integrity of the fibers. Nevertheless, this intended integration level still remains an industrial challenge, since today there is no mature integration process in composite plants matching all the necessary requirements. This article describes the process developed to integrate optical fiber sensors into the production cycle of a test specimen. The sensors, Bragg gratings, were integrated into the laminate during automatic tape lay-up and also by a secondary bonding process, both in the Airbus Composite Plant. The test specimen, completely representative of the root joint of the lower wing cover of a real aircraft, comprises a structural skin panel with the associated stringer run out. The ingress-egress was achieved through the precise design and integration of miniaturized optical connectors compatible with the manufacturing conditions and operational test requirements. After production, the specimen was trimmed, assembled and bolted to metallic plates to represent the real triform and buttstrap, and eventually installed into the structural test rig. The interrogation of the sensors proves the effectiveness of the integration process; the analysis of the strain results demonstrates the good correlation between fiber sensors and electrical gauges in those locations where they are installed nearby, and the curvature and load transfer analysis in the bolted stringer run out area demonstrates the consistency of the fiber sensor measurements. In conclusion, this work presents strong evidence of the performance of embedded optical sensors for structural health monitoring purposes, where, in addition and most importantly, the fibers were integrated in a real production environment and the ingress-egress issue was solved by the design and integration of miniaturized connectors compatible with the manufacturing and structural test phases. Full article
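For readers unfamiliar with fiber Bragg gratings, the sketch below shows the textbook conversion of a Bragg-wavelength shift into strain; the photo-elastic coefficient is a typical silica value (not taken from this article) and temperature compensation is omitted, so it is an illustrative aid rather than the data-reduction procedure used in the article.

```python
# Illustrative sketch: Bragg-wavelength shift to strain via
# d(lambda)/lambda = (1 - p_e) * strain.  p_e ~ 0.22 is a typical silica
# value, not a figure from this article; temperature effects are ignored.
def fbg_strain_microstrain(lambda_b_nm, delta_lambda_nm, p_e=0.22):
    return (delta_lambda_nm / lambda_b_nm) / (1.0 - p_e) * 1e6

# Example: a 1550-nm grating shifting by +0.1 nm corresponds to roughly
# 83 microstrain under these assumptions.
```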
Figure 1
<p>Physical principle of the Fiber Bragg Grating sensor.</p>
Full article ">Figure 2
<p>(<b>a</b>) Full aircraft, (<b>b</b>) Wing Lower Cover Root Joint Area, (<b>c</b>) Section A-A of Root Joint Area and (<b>d</b>) Test specimen</p>
Full article ">Figure 3
<p>FBG arrays configurations and locations.</p>
Full article ">Figure 4
<p>Optical fiber connector solution.</p>
Full article ">Figure 5
<p>Optical fiber connector at different times during the manufacturing process.</p>
Full article ">Figure 6
<p>Installation of FBG arrays on the stringer foot.</p>
Full article ">Figure 7
<p>Locations of the strain gauges and FBG sensors.</p>
Full article ">Figure 8
<p>Specimen 1-Root joint -Before fatigue -tensile LL- Channel A.</p>
Full article ">Figure 9
<p>Specimen 1-Root joint -Before fatigue -tensile LL- Channel B.</p>
Full article ">Figure 10
<p>Specimen 1-Root joint -Before fatigue -tensile LL- Channel C.</p>
Full article ">Figure 11
<p>Specimen 1-Root joint -Before fatigue -tensile LL- Channel D.</p>
Full article ">Figure 12
<p>Specimen 1-Root joint -After fatigue -compression UL- Channel A.</p>
Full article ">Figure 13
<p>Specimen 1-Root joint -After fatigue -compression UL- Channel B.</p>
Full article ">Figure 14
<p>Specimen 1-Root joint -After fatigue -compression UL- Channel C.</p>
Full article ">Figure 15
<p>Specimen 1-Root joint -After fatigue -compression UL- Channel D.</p>
Full article ">Figure 16
<p>Specimen 1- FBGs embedded on stringer foot-tensile &amp; compression after fatigue.</p>
Full article ">Figure 17
<p>Limit Load Test Data Comparison.</p>
Full article ">Figure 18
<p>Limit Load Test Data Correlation.</p>
Full article ">Figure 19
<p>Failure Load Test Data Comparison.</p>
Full article ">Figure 20
<p>Failure Load Test Data Correlation.</p>
Full article ">Figure 21
<p>OF Sensor Location.</p>
Full article ">Figure 22
<p>Strain Data along OF in Compression (−300 kN).</p>
Full article ">Figure 23
<p>Strain Data along OF in Tension (650 kN).</p>
Full article ">Figure 24
<p>Calculated Curvature in Compression (−300 kN).</p>
Full article ">Figure 25
<p>Calculated Curvature in Tension (650 kN).</p>
Full article ">Figure 26
<p>Predicted Specimen Deflection vs. OF Sensor Curvature.</p>
Full article ">Figure 27
<p>Load Split in Compression (−300 kN).</p>
Full article ">Figure 28
<p>Load Split in Tension (650 kN).</p>
">
4522 KiB  
Article
4SM: A Novel Self-Calibrated Algebraic Ratio Method for Satellite-Derived Bathymetry and Water Column Correction
by Yann G. Morel and Fabio Favoretto
Sensors 2017, 17(7), 1682; https://doi.org/10.3390/s17071682 - 21 Jul 2017
Cited by 7 | Viewed by 7084
Abstract
All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to [...] Read more.
All empirical water column correction methods have consistently been reported to require existing depth sounding data for the purpose of calibrating a simple depth retrieval model; they yield poor results over very bright or very dark bottoms. In contrast, we set out to (i) use only the relative radiance data in the image, along with published data and several new assumptions, (ii) in order to specify and operate the simplified radiative transfer equation (RTE), (iii) for the purpose of retrieving both the satellite-derived bathymetry (SDB) and the water-column-corrected spectral reflectance over shallow seabeds. Sea truth regressions show that SDB depths retrieved by the method only need tide correction. Therefore, it shall be demonstrated that, under such new assumptions, there is no need for (i) formal atmospheric correction; (ii) conversion of relative radiance into calibrated reflectance; or (iii) existing depth sounding data to specify the simplified RTE and produce both SDB and spectral water-column-corrected radiance ready for bottom typing. Moreover, the use of the panchromatic band for that purpose is introduced. Altogether, we named this process the Self-Calibrated Supervised Spectral Shallow-sea Modeler (4SM). This approach requires a trained practitioner, though, to produce its results within hours of downloading the raw image. The ideal raw image should be a “near-nadir” view, exhibit a homogeneous atmosphere and water column, include some coverage of optically deep waters and bare land, and lend itself to quality removal of haze, atmospheric adjacency effect, and sun/sky glint. Full article
(This article belongs to the Section Remote Sensors)
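The ratio approach behind methods of this kind rests on the simplified RTE referred to in the figure captions, Ls(λ) = Lsw(λ) + (LsB(λ) − Lsw(λ)) e^(−2K(λ)z), whose linearized form X = ln(Ls − Lsw) is linear in depth. The two-band sketch below is a generic, textbook illustration of that idea; all coefficients are placeholders and it does not reproduce the 4SM self-calibration.

```python
# Illustrative sketch of the textbook two-band depth retrieval:
#   Ls = Lsw + (LsB - Lsw) * exp(-2*K*z)  =>  X = ln(Ls - Lsw) = A - 2*K*z
# The per-band deep-water radiance Lsw, bottom term A and effective
# attenuation K below are placeholders, not 4SM calibration values.
import numpy as np

def two_band_depth(L_blue, L_green,
                   Lsw=(0.05, 0.03), A=(1.2, 1.0), K=(0.06, 0.11)):
    Xb = np.log(np.maximum(L_blue - Lsw[0], 1e-6))
    Xg = np.log(np.maximum(L_green - Lsw[1], 1e-6))
    # Xb - Xg = (A_b - A_g) - 2*(K_b - K_g)*z, solved for z:
    return ((A[0] - A[1]) - (Xb - Xg)) / (2.0 * (K[0] - K[1]))
```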
Graphical abstract
Full article ">Figure 1
<p>(<b>a</b>) Operating the RTE (Equation (5)) at the top of the atmosphere: A plot of <span class="html-italic">Ls</span><sub>blue</sub> versus <span class="html-italic">Ls</span><sub>green</sub> in relative units. (<b>b</b>) Operating the RTE (Equation (8)) for linearized data: A plot of <span class="html-italic">X</span><sub>blue</sub> versus <span class="html-italic">X</span><sub>green</sub> in logarithmic values.</p>
Full article ">Figure 2
<p>(<b>a</b>) LANDSAT-8 over Lee Stocking island, Bahamas (Tongue of the Ocean). (<b>b</b>) LANDSAT-8 over Caicos Bank, Bahamas. This view shows the location of three depth datasets combined by Morgan and Harris [<a href="#B24-sensors-17-01682" class="html-bibr">24</a>] to prepare their DTM, as detailed in <a href="#sec5-sensors-17-01682" class="html-sec">Section 5</a>: (i) yellow points show actual depth sounding points; (ii) yellow lines show isobaths lines extracted from the nautical chart; (iii) purple points show depths retrieved from Landsat TM imagery.</p>
Full article ">Figure 3
<p>(<b>a</b>) Calibration diagram for natural data. (<b>b</b>) Calibration diagram for linearized data.</p>
Full article ">Figure 4
<p>(<b>a</b>) Sea truth regression of SDB versus DTM depths for the Caicos Bank study case: Plot of 4SM depth versus DTM depth, allowing for a 0.5 m tide height. Outliers: Thin gray points are excluded from the regression. (<b>b</b>) Sea truth regression of SDB versus DTM depths for the Caicos Bank study case: Overlay of histograms of retrieved depths and sea truth depths; both histograms account for all depth points used in the regression.</p>
Full article ">Figure 5
<p>(<b>a</b>) Depth residuals ZDTM-ZSDB. Display of the overlay of the depth residuals ZDTM-ZSDB over a false color composite. Note that green areas represent the backdrop of the false color composite over extremely shallow areas, while deep red areas represent dense vegetation on land. (<b>b</b>) Color palette for depth residuals ZDTM-ZSDB.</p>
Full article ">Figure 6
<p>Satellite-derived bathymetry maps. (<b>a</b>) Combined depth for a time series of 14 Landsat-8 scenes over Lee Stocking Island, Bahamas. (<b>b</b>) SDB for a Landsat 8 image 19 April 2016 over Caicos Bank, Bahamas. (<b>c</b>) Color palette for bathymetry, depths are in centimeters.</p>
Full article ">Figure 7
<p>(<b>a</b>) 4 February 2016 over Lee Stocking Island; (<b>b</b>) 29 March 2014 over Caicos Bank.</p>
Full article ">Figure 8
<p>Computed depth versus distance.</p>
">
2407 KiB  
Article
A Noncontact Dibutyl Phthalate Sensor Based on a Wireless-Electrodeless QCM-D Modified with Nano-Structured Nickel Hydroxide
by Daqi Chen, Xiyang Sun, Kaihuan Zhang, Guokang Fan, You Wang, Guang Li and Ruifen Hu
Sensors 2017, 17(7), 1681; https://doi.org/10.3390/s17071681 - 21 Jul 2017
Cited by 17 | Viewed by 5004
Abstract
Dibutyl phthalate (DBP) is a widely used plasticizer which has been found to be a reproductive and developmental toxicant and to exist ubiquitously in the air. A highly sensitive method for DBP monitoring in the environment is urgently needed. A DBP sensor based on [...] Read more.
Dibutyl phthalate (DBP) is a widely used plasticizer which has been found to be a reproductive and developmental toxicant and to exist ubiquitously in the air. A highly sensitive method for DBP monitoring in the environment is urgently needed. A DBP sensor based on a homemade wireless-electrodeless quartz crystal microbalance with dissipation (QCM-D) coated with nano-structured nickel hydroxide is presented. With the noncontact configuration, the sensing system could work at a higher resonance frequency (the 3rd overtone) and its response was more stable than that of a conventional quartz crystal microbalance (QCM). The sensor achieved a sensitivity of 7.3 Hz/ppb to DBP over a concentration range of 0.4–40 ppb, together with an ultra-low detection limit of 0.4 ppb. Full article
(This article belongs to the Section Chemical Sensors)
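Two small, generic calculations help place the figures quoted in this abstract: the standard Sauerbrey relation linking a QCM frequency shift to adsorbed mass, and a first-order conversion of a frequency shift into a DBP concentration using the reported 7.3 Hz/ppb sensitivity. The crystal constants are generic AT-cut quartz values, not parameters of the homemade device.

```python
# Illustrative sketch: (i) standard Sauerbrey relation with generic AT-cut
# quartz constants (not parameters of the homemade device) and (ii) a
# first-order concentration estimate from the 7.3 Hz/ppb sensitivity above.
def sauerbrey_mass_ng_per_cm2(delta_f_hz, f0_hz=5e6):
    rho_q, mu_q = 2.648, 2.947e11                # g/cm^3, g/(cm*s^2)
    delta_m = -delta_f_hz * (rho_q * mu_q) ** 0.5 / (2.0 * f0_hz ** 2)  # g/cm^2
    return delta_m * 1e9                         # ng/cm^2 (~17.7 ng/cm^2 per Hz at 5 MHz)

def dbp_ppb_from_shift(delta_f_hz, sensitivity_hz_per_ppb=7.3):
    c = abs(delta_f_hz) / sensitivity_hz_per_ppb
    return c if 0.4 <= c <= 40 else None         # valid only in the linear range
```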
Figure 1
<p>(<b>a</b>) The homebuilt electrodeless quartz crystal microbalance (QCM) gas chamber. Two spiral coils are placed outside the chamber and right below the quartz plate. They are used to radiate the electric field, which excites the quartz oscillator and receives the vibrational signals of the quartz oscillator. The quartz oscillator is placed at the bottom of the chamber and above the middle of two coils. (<b>b</b>) The whole structure of the wireless-electrodeless quartz crystal microbalance with dissipation (QCM-D) system. The signal generator generates the burst radio frequency signal and finally sends it to the transmitting coil. The other coil receives the signal and then sends it to the narrowband amplifier then to the oscilloscope and finally to the PC for analyzing.</p>
Full article ">Figure 2
<p>SEM morphology of the nano–Ni(OH)<sub>2</sub> sample at different scales. (<b>a</b>) ×20,000, (<b>b</b>) ×50,000.</p>
Full article ">Figure 3
<p>Response curve of the nano-Ni(OH)<sub>2</sub> QCM sensors to 24 ppb dibutyl phthalate (DBP) with different loaded mass of sensing material.</p>
Full article ">Figure 4
<p>Responses of the nano-Ni(OH)<sub>2</sub> QCM sensor to various organic vapors. The concentration of DBP, diethyl phthalate (DEP) and dimethyl phthalate (DMP) was 40 ppb, while the other vapors were at 20 ppm. The response to DBP was far bigger than the interferences.</p>
Full article ">Figure 5
<p>Response of the nano-Ni(OH)<sub>2</sub>-coated QCM sensor to 8, 24, and 40 ppb (from bottom to top) of DBP.</p>
Full article ">Figure 6
<p>Calibration curve of the nano-Ni(OH)<sub>2</sub>-coated QCM sensor to different concentrations of DBP vapor. The inset indicates the calibration curve of the linear range.</p>
Full article ">Figure 7
<p>Responses of the nano-Ni(OH)<sub>2</sub> QCM sensor to 24 ppb DBP. The red line is the response when the quartz oscillator works at the fundamental frequency, while the blue one works at the 3rd harmonic.</p>
Full article ">Figure 8
<p>Calibration curve of the nano-Ni(OH)<sub>2</sub>-coated QCM sensor to different concentrations of DBP vapor. The red line is the response when the quartz oscillator works at the fundamental frequency, while the blue one works at 3rd harmonic. The inset indicates the calibration curve of the linear range.</p>
">
5053 KiB  
Article
Hierarchical Stereo Matching in Two-Scale Space for Cyber-Physical System
by Eunah Choi, Sangyoon Lee and Hyunki Hong
Sensors 2017, 17(7), 1680; https://doi.org/10.3390/s17071680 - 21 Jul 2017
Cited by 4 | Viewed by 5496
Abstract
Dense disparity map estimation from a high-resolution stereo image is a very difficult problem in terms of both matching accuracy and computation efficiency. Thus, an exhaustive disparity search at full resolution is required. In general, examining more pixels in the stereo view results [...] Read more.
Dense disparity map estimation from a high-resolution stereo image is a very difficult problem in terms of both matching accuracy and computation efficiency. Thus, an exhaustive disparity search at full resolution is required. In general, examining more pixels in the stereo view results in more ambiguous correspondences. When a high-resolution image is down-sampled, the high-frequency components of the fine-scaled image are at risk of disappearing in the coarse-resolution image. Furthermore, if erroneous disparity estimates caused by missing high-frequency components are propagated across scale space, ultimately, false disparity estimates are obtained. To solve these problems, we introduce an efficient hierarchical stereo matching method in two-scale space. This method applies disparity estimation to the reduced-resolution image, and the disparity result is then up-sampled to the original resolution. The disparity estimation values of the high-frequency (or edge component) regions of the full-resolution image are combined with the up-sampled disparity results. In this study, we extracted the high-frequency areas from the scale-space representation by using difference of Gaussian (DoG) or found edge components, using a Canny operator. Then, edge-aware disparity propagation was used to refine the disparity map. The experimental results show that the proposed algorithm outperforms previous methods. Full article
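As background to the AD-census and SAD-census costs mentioned in the figures, the sketch below shows a generic 3 × 3 census transform and the per-pixel Hamming distance used as a matching cost; it illustrates the principle only, not the paper's exact cost function.

```python
# Illustrative sketch: 3x3 census transform and Hamming-distance matching
# cost (image borders are handled by wrap-around purely for brevity).
import numpy as np

def census_3x3(img):
    code = np.zeros(img.shape, np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
        code |= (neighbour < img).astype(np.uint8) << bit
    return code

def hamming_cost(code_left, code_right):
    diff = np.bitwise_xor(code_left, code_right)
    return np.unpackbits(diff[..., None], axis=-1).sum(axis=-1)
```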
Figure 1
<p>Proposed block diagram.</p>
Full article ">Figure 2
<p>(<b>a</b>) Example of census transform; (<b>b</b>) Its Hamming distance result.</p>
Full article ">Figure 3
<p>Initial matching cost results of (<b>a</b>) absolute difference (AD)-census and (<b>b</b>) sum of absolute difference (SAD)-census.</p>
Full article ">Figure 4
<p>(<b>a</b>) High-frequency regions produced by difference of Gaussian (DoG) and (<b>b</b>) edge components produced by Canny operator.</p>
Full article ">Figure 5
<p>(<b>a</b>) Initial matching cost; (<b>b</b>) new quadric cost.</p>
Full article ">Figure 6
<p>(<b>a</b>) First pass, from left to right; and (<b>b</b>) second pass, from right to left [<a href="#B19-sensors-17-01680" class="html-bibr">19</a>].</p>
Full article ">Figure 7
<p>(<b>a</b>) Reference images (Middlebury) and (<b>b</b>) ground truth disparity maps; Disparity maps by (<b>c</b>) dual-cross-bilateral (DCB) grid [<a href="#B23-sensors-17-01680" class="html-bibr">23</a>]; (<b>d</b>) adaptive weight [<a href="#B21-sensors-17-01680" class="html-bibr">21</a>]; (<b>e</b>) MGM [<a href="#B24-sensors-17-01680" class="html-bibr">24</a>]; and (<b>f</b>) proposed method.</p>
Figure 7 Cont.">
Full article ">Figure 8
<p>Comparison of error rates with respect to the threshold values of (<b>a</b>) Canny detector and (<b>b</b>) DoG method.</p>
Full article ">Figure 9
<p>(<b>a</b>) Outdoor and indoor scene images; Disparity maps produced by (<b>b</b>) DCB grid [<a href="#B23-sensors-17-01680" class="html-bibr">23</a>]; (<b>c</b>) adaptive weight [<a href="#B21-sensors-17-01680" class="html-bibr">21</a>]; (<b>d</b>) MGM [<a href="#B24-sensors-17-01680" class="html-bibr">24</a>]; and (<b>e</b>) proposed method.</p>
Full article ">Figure 10
<p>(<b>a</b>) Computer generated images and (<b>b</b>) ground truth disparity maps [<a href="#B23-sensors-17-01680" class="html-bibr">23</a>]; Disparity maps by (<b>c</b>) DCB grid [<a href="#B23-sensors-17-01680" class="html-bibr">23</a>]; (<b>d</b>) adaptive weight [<a href="#B21-sensors-17-01680" class="html-bibr">21</a>]; (<b>e</b>) MGM [<a href="#B24-sensors-17-01680" class="html-bibr">24</a>]; and (<b>f</b>) proposed method.</p>
">
2001 KiB  
Article
Classification of Alzheimer’s Patients through Ubiquitous Computing
by Alicia Nieto-Reyes, Rafael Duque, José Luis Montaña and Carmen Lage
Sensors 2017, 17(7), 1679; https://doi.org/10.3390/s17071679 - 21 Jul 2017
Cited by 13 | Viewed by 5674
Abstract
Functional data analysis and artificial neural networks are the building blocks of the proposed methodology, which distinguishes the movement patterns among Alzheimer's patients at different stages of the disease and classifies new patients into their appropriate stage of the disease. The movement patterns [...] Read more.
Functional data analysis and artificial neural networks are the building blocks of the proposed methodology, which distinguishes the movement patterns among Alzheimer's patients at different stages of the disease and classifies new patients into their appropriate stage of the disease. The movement patterns are obtained with the accelerometer of Android smartphones that the patients carry while moving freely. The proposed methodology is relevant in that it is flexible regarding the type of data to which it is applied. To exemplify this, a novel real three-dimensional functional dataset is analyzed in which each datum is observed over a different time domain: not only is each datum observed at a different frequency, but the domain of each datum also has a different length. The obtained classification success rate of 83% indicates the potential of the proposed methodology. Full article
(This article belongs to the Special Issue Selected Papers from UCAmI 2016)
Figure 1
<p>Representation of the three axes along which the accelerations are measured, shown in two different positions, panels (<b>a</b>) and (<b>b</b>).</p>
Full article ">Figure 2
<p>Measured acceleration on the <span class="html-italic">x</span>-axis for the accelerations of the patients in m/s<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math> versus the time in s, at different stages of the disease: early (<b>left</b>), middle (<b>central</b>) and late (<b>right</b>).</p>
Full article ">Figure 3
<p>Histograms of 1000 <span class="html-italic">p</span>-values resulting from applying the two functional ANOVA tests based on random projections: based on the Bonferroni correction (<b>top row</b>) and on the FDR (<b>middle row</b>) and the functional ANOVA based on the F-statistic (<b>bottom row</b>).</p>
Full article ">Figure 4
<p>Histograms of 1000 <span class="html-italic">p</span>-values resulting from applying the test of equality of means based on the <math display="inline"> <semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics> </math> norm for the null hypothesis: <math display="inline"> <semantics> <mrow> <msub> <mi mathvariant="normal">H</mi> <mn>0</mn> </msub> <mo>:</mo> <msub> <mi>μ</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>μ</mi> <mn>2</mn> </msub> </mrow> </semantics> </math> (<b>top row</b>), <math display="inline"> <semantics> <mrow> <msub> <mi mathvariant="normal">H</mi> <mn>0</mn> </msub> <mo>:</mo> <msub> <mi>μ</mi> <mn>1</mn> </msub> <mo>=</mo> <msub> <mi>μ</mi> <mn>3</mn> </msub> </mrow> </semantics> </math> (<b>middle row</b>) and <math display="inline"> <semantics> <mrow> <msub> <mi mathvariant="normal">H</mi> <mn>0</mn> </msub> <mo>:</mo> <msub> <mi>μ</mi> <mn>2</mn> </msub> <mo>=</mo> <msub> <mi>μ</mi> <mn>3</mn> </msub> </mrow> </semantics> </math> (<b>bottom row</b>) on the <span class="html-italic">x</span>-axis (<b>left column</b>), <span class="html-italic">y</span>-axis (<b>middle column</b>) and <span class="html-italic">z</span>-axis (<b>right column</b>).</p>
Full article ">Figure 5
<p>Boxplot of the percentage of misclassified patients. The test data correspond to the training/test splitting under study (<b>left</b>), to Splitting 1 (<b>middle</b>) and to Splitting 2 (<b>right</b>).</p>
">
7088 KiB  
Article
Interference Effects Redress over Power-Efficient Wireless-Friendly Mesh Networks for Ubiquitous Sensor Communications across Smart Cities
by Jose Santana, Domingo Marrero, Elsa Macías, Vicente Mena and Álvaro Suárez
Sensors 2017, 17(7), 1678; https://doi.org/10.3390/s17071678 - 21 Jul 2017
Cited by 9 | Viewed by 7141
Abstract
Ubiquitous sensing allows smart cities to take control of many parameters (e.g., road traffic, air or noise pollution levels, etc.). An inexpensive Wireless Mesh Network can be used as an efficient way to transport sensed data. When that mesh is autonomously powered (e.g., [...] Read more.
Ubiquitous sensing allows smart cities to take control of many parameters (e.g., road traffic, air or noise pollution levels, etc.). An inexpensive Wireless Mesh Network can be used as an efficient way to transport sensed data. When that mesh is autonomously powered (e.g., solar powered), it constitutes an ideal portable network system which can be deployed when needed. Nevertheless, its power consumption must be restrained to extend its operational cycle and for preserving the environment. To this end, our strategy fosters wireless interface deactivation among nodes which do not participate in any route. As we show, this contributes to a significant power saving for the mesh. Furthermore, our strategy is wireless-friendly, meaning that it gives priority to deactivation of nodes receiving (and also causing) interferences from (to) the rest of the smart city. We also show that a routing protocol can adapt to this strategy in which certain nodes deactivate their own wireless interfaces. Full article
(This article belongs to the Special Issue Selected Papers from UCAmI 2016)
Figure 1
<p>System architecture for sensor data traffic in a smart city.</p>
Full article ">Figure 2
<p>MAP5 is being interfered with, generating service disruptions due to traffic from building B.</p>
Full article ">Figure 3
<p>State diagram for our proposal.</p>
Full article ">Figure 4
<p>Power consumption variation related to D(t).</p>
Full article ">Figure 5
<p>Block diagram for our proposal.</p>
Full article ">Figure 6
<p>Sequence of messages and state changes for MAP5.</p>
Full article ">Figure 7
<p>Regular 3 × 3 mesh with city block routes.</p>
Full article ">Figure 8
<p>(<b>a</b>) S(t) = 1, R(t) = 0.1; (<b>b</b>) S(t) = 0.5, R(t) = 0; (<b>c</b>) S(t) = 0, R(t) = 0.</p>
Full article ">Figure 9
<p>(<b>a</b>) R(t) = 1, S(t) = 0.3; (<b>b</b>) R(t) = 0.4, S(t) = 0.1; (<b>c</b>) R(t) = 0, S(t) = 0.</p>
Full article ">Figure 10
<p>Relationship between R(t) and S(t) in a 3 × 3 PWMNS.</p>
Full article ">Figure 11
<p>Number of routes passing through each node in a 3 × 3 PWMNS.</p>
Full article ">Figure 12
<p>Relationship between R(t) and node proximity to the centre in a 3 × 3 PWMNS.</p>
Full article ">Figure 13
<p>Image of our test platform.</p>
Full article ">Figure 14
<p>Deployed static topology.</p>
Full article ">Figure 15
<p>Chosen intermediate node for route from node6 to node1.</p>
Full article ">Figure 16
<p>Deactivation effect of intermediate node7 WiFi interface (t<sub>DOWN</sub> = 2 s) at instant <span class="html-italic">t</span> = 10.</p>
Full article ">Figure 17
<p>Effect on LFT packet sequence between node6 and node1.</p>
Full article ">Figure 18
<p>Chosen intermediate node for route from node6 to node1 in node6 for 3 HFT sessions of 60 s with node7 WiFi interface deactivation at <span class="html-italic">t</span> = 10 and <span class="html-italic">t<sub>DOWN</sub></span> = 4 s.</p>
Full article ">Figure 19
<p>Main additional devices which generated interfering traffic.</p>
Full article ">Figure 20
<p>OLSR messages detected by node6.</p>
Full article ">Figure 21
<p>OLSR messages detected by node6 with channel 7 interferences at <span class="html-italic">t</span> = 10, <span class="html-italic">t</span> = 30 and <span class="html-italic">t</span> = 50.</p>
Full article ">Figure 22
<p>LFT messages confirmed by node6 from node1.</p>
Full article ">Figure 23
<p>Location values with mobile phone outdoors.</p>
Full article ">Figure 24
<p>Satellite view of the route.</p>
Full article ">Figure 25
<p>Average location values that are sent to the MAP every minute.</p>
Full article ">Figure 26
<p>Wattmeter used in our experiment.</p>
">
503 KiB  
Article
Lifetime Maximization via Hole Alleviation in IoT Enabling Heterogeneous Wireless Sensor Networks
by Zahid Wadud, Nadeem Javaid, Muhammad Awais Khan, Nabil Alrajeh, Mohamad Souheil Alabed and Nadra Guizani
Sensors 2017, 17(7), 1677; https://doi.org/10.3390/s17071677 - 21 Jul 2017
Cited by 16 | Viewed by 6304
Abstract
In Internet of Things (IoT) enabled Wireless Sensor Networks (WSNs), there are two major factors which degrade the performance of the network. One is the void hole which occurs in a particular region due to unavailability of forwarder nodes. The other is the [...] Read more.
In Internet of Things (IoT) enabled Wireless Sensor Networks (WSNs), there are two major factors which degrade the performance of the network. One is the void hole, which occurs in a particular region due to the unavailability of forwarder nodes. The other is the presence of an energy hole, which occurs due to an imbalanced data traffic load on intermediate nodes. Therefore, an optimum transmission strategy is required to maximize the network lifespan via hole alleviation. In this regard, we propose a heterogeneous network solution that is capable of balancing energy dissipation among network nodes. In addition, the divide and conquer approach is exploited to evenly distribute the number of transmissions over various network areas. An efficient forwarder node selection is performed to alleviate coverage and energy holes. Linear optimization is performed to validate the effectiveness of our proposed work in terms of energy minimization. Furthermore, simulations are conducted to show that our claims are well grounded. Results show the superiority of our work as compared to the baseline scheme in terms of energy consumption and network lifetime. Full article
(This article belongs to the Special Issue Sensor Networks for Collaborative and Secure Internet of Things)
Figure 1
<p>Proposed Network Model.</p>
Full article ">Figure 2
<p>Traffic Load Estimation and Data Transmission.</p>
Full article ">Figure 3
<p>Feasible Region: Energy.</p>
Full article ">Figure 4
<p>Feasible Region: Bandwidth.</p>
Full article ">Figure 5
<p>Network Lifetime at Various Radii.</p>
Full article ">Figure 6
<p>Energy Tax with Different Node <math display="inline"> <semantics> <mi>ρ</mi> </semantics> </math>.</p>
Full article ">Figure 7
<p>End 2 End delay.</p>
">
2462 KiB  
Article
Identification of Load Categories in Rotor System Based on Vibration Analysis
by Kun Zhang and Zhaojian Yang
Sensors 2017, 17(7), 1676; https://doi.org/10.3390/s17071676 - 20 Jul 2017
Cited by 7 | Viewed by 6626
Abstract
Rotating machinery is often subjected to variable loads during operation. Thus, monitoring and identifying different load types is important. Here, five typical load types have been qualitatively studied for a rotor system. A novel load category identification method for rotor systems based on [...] Read more.
Rotating machinery is often subjected to variable loads during operation. Thus, monitoring and identifying different load types is important. Here, five typical load types have been qualitatively studied for a rotor system. A novel load category identification method for rotor systems based on vibration signals is proposed. This method is a combination of ensemble empirical mode decomposition (EEMD), energy feature extraction, and a back propagation (BP) neural network. A dedicated load identification test bench for the rotor system was developed. According to the load characteristics and test conditions, an experimental plan was formulated, and loading tests for the five loads were conducted. The corresponding vibration signals of the rotor system were collected for each load condition via an eddy current displacement sensor. Signals were reconstructed using EEMD, and then features were extracted, followed by energy calculations. Finally, the characteristics were input to the BP neural network to identify the different load types. Comparison and analysis of the identification data and test data revealed a general identification rate of 94.54%, achieving high identification accuracy and good robustness. This shows that the proposed method is feasible. Owing to the reliable and experimentally validated theoretical results, this method can be applied to load identification and fault diagnosis for rotor equipment used in engineering applications. Full article
(This article belongs to the Special Issue Mechatronic Systems for Automatic Vehicles)
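To make the EEMD-plus-energy-feature step concrete, the sketch below turns a set of intrinsic mode functions (produced by an EEMD routine assumed to be available elsewhere) into a normalized energy vector of the kind that would be fed to the BP neural network; the normalization shown is illustrative, not the paper's exact feature definition.

```python
# Illustrative sketch: normalized IMF energy features; the EEMD
# decomposition itself is assumed to be provided by an external routine.
import numpy as np

def energy_features(imfs):
    """imfs: array of shape (n_imfs, n_samples) -> normalized energy vector."""
    energies = np.sum(np.square(np.asarray(imfs, float)), axis=1)  # E_i = sum of squares
    total = energies.sum()
    return energies / total if total > 0 else energies
```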
Figure 1
<p>Flowchart of identification method of load categories for rotor system.</p>
Full article ">Figure 2
<p>Flowchart of EEMD.</p>
Full article ">Figure 3
<p>Flowchart combining EEMD and BP neural network.</p>
Full article ">Figure 4
<p>Load-identification test bench of rotor system. (<b>a</b>) Design schematic: 1—motor; 2—eddy current displacement sensor; 3—rotary disc; 4—bearing; 5—torque speed sensor; 6—magnetic powder brake; (<b>b</b>) Main part of load-identification test bench.</p>
Full article ">Figure 5
<p>Ensemble empirical mode decomposition.</p>
Full article ">Figure 6
<p>Energy distribution of nodes.</p>
Full article ">Figure 7
<p>Prediction effect of BP network.</p>
">
2239 KiB  
Article
An Adaptive Feature Learning Model for Sequential Radar High Resolution Range Profile Recognition
by Xuan Peng, Xunzhang Gao, Yifan Zhang and Xiang Li
Sensors 2017, 17(7), 1675; https://doi.org/10.3390/s17071675 - 20 Jul 2017
Cited by 17 | Viewed by 4907
Abstract
This paper proposes a new feature learning method for the recognition of radar high resolution range profile (HRRP) sequences. HRRPs from a period of continuous changing aspect angles are jointly modeled and discriminated by a single model named the discriminative infinite restricted Boltzmann [...] Read more.
This paper proposes a new feature learning method for the recognition of radar high resolution range profile (HRRP) sequences. HRRPs from a period of continuous changing aspect angles are jointly modeled and discriminated by a single model named the discriminative infinite restricted Boltzmann machine (Dis-iRBM). Compared with the commonly used hidden Markov model (HMM)-based recognition method for HRRP sequences, which requires efficient preprocessing of the HRRP signal, the proposed method is an end-to-end method of which the input is the raw HRRP sequence, and the output is the label of the target. The proposed model can efficiently capture the global pattern in a sequence, while the HMM can only model local dynamics, which suffers from information loss. Last but not least, the proposed model learns the features of HRRP sequences adaptively according to the complexity of a single HRRP and the length of a HRRP sequence. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) database indicate that the proposed method is efficient and robust under various conditions. Full article
(This article belongs to the Section Remote Sensors)
Figure 1
<p>Graphical structure of the RBM.</p>
Full article ">Figure 2
<p>Graphical structure of the Dis-iRBM for sequential HRRP.</p>
Full article ">Figure 3
<p>Some examples of training and testing HRRP sequences, (<b>a</b>) training; (<b>b</b>) testing.</p>
Full article ">Figure 4
<p>Recognition performance on models trained with different sequence length data. The result of HMM is provided by [<a href="#B20-sensors-17-01675" class="html-bibr">20</a>] in which a full-aspect HMM containing 55 states (aspect frames) was trained.</p>
Full article ">Figure 5
<p>Effective hidden layer sizes of Dis-iRBMs trained on the data with different sequence lengths.</p>
Full article ">Figure 6
<p>Weight matrices <math display="inline"> <semantics> <mstyle mathvariant="bold" mathsize="normal"> <mi>W</mi> </mstyle> </semantics> </math> or filters learnt by Dis-iRBMs, (<b>a</b>) <span class="html-italic">L</span> = 1; (<b>b</b>) <span class="html-italic">L</span> = 5; (<b>c</b>) <span class="html-italic">L</span> = 10.</p>
Full article ">Figure 7
<p>An illustration of the difference in angular sampling intervals between the training and testing phases, where the ratio between testing and training sampling intervals is 1/2.</p>
Full article ">Figure 8
<p>Recognition performance on different ratios between testing and training angular sampling intervals.</p>
">
1065 KiB  
Article
Scheduling for Emergency Tasks in Industrial Wireless Sensor Networks
by Changqing Xia, Xi Jin, Linghe Kong and Peng Zeng
Sensors 2017, 17(7), 1674; https://doi.org/10.3390/s17071674 - 20 Jul 2017
Cited by 8 | Viewed by 4820
Abstract
Wireless sensor networks (WSNs) are widely applied in industrial manufacturing systems. By means of centralized control, the real-time requirement and reliability can be provided by WSNs in industrial production. Furthermore, many approaches reserve resources for situations in which the controller cannot perform centralized [...] Read more.
Wireless sensor networks (WSNs) are widely applied in industrial manufacturing systems. By means of centralized control, the real-time requirement and reliability can be provided by WSNs in industrial production. Furthermore, many approaches reserve resources for situations in which the controller cannot perform centralized resource allocation. The controller assigns these resources as it becomes aware of when and where accidents have occurred. However, the reserved resources are limited, and such incidents are low-probability events. In addition, resource reservation may not be effective since the controller does not know when and where accidents will actually occur. To address this issue, we improve the reliability of scheduling for emergency tasks by proposing a method based on a stealing mechanism. In our method, an emergency task is transmitted by stealing resources allocated to regular flows. The challenges addressed in our work are as follows: (1) emergencies occur only occasionally, but the industrial system must deliver the corresponding flows within their deadlines when they occur; (2) we wish to minimize the impact of emergency flows by reducing the number of stolen flows. The contributions of this work are two-fold: (1) we first define intersections and blocking as new characteristics of flows; and (2) we propose a series of distributed routing algorithms to improve the schedulability and to reduce the impact of emergency flows. We demonstrate that our scheduling algorithm and analysis approach are better than the existing ones by extensive simulations. Full article
(This article belongs to the Collection Smart Industrial Wireless Sensor Networks)
Figure 1
<p>An example of an emergency.</p>
Full article ">Figure 2
<p>An example of the stealing mechanism.</p>
Full article ">Figure 3
<p>An example of stealing-first scheduling algorithm (SfSA).</p>
Full article ">Figure 4
<p>An example of indirect routing.</p>
Full article ">Figure 5
<p>An example of intersections.</p>
Full article ">Figure 6
<p>An example for selecting the path based only on blocking and intersections.</p>
Full article ">Figure 7
<p>Data structure.</p>
Full article ">Figure 8
<p>An example of one test case.</p>
Full article ">Figure 9
<p>The relationship between the schedulability ratio and the number of nodes. (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>F</mi> <mo>=</mo> <mn>15</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>U</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>4</mn> </mrow> </semantics> </math>; (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>F</mi> <mo>=</mo> <mn>15</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>U</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>5</mn> </mrow> </semantics> </math>.</p>
Figure 9 Cont.">
Full article ">Figure 10
<p>The relationship between the schedulability ratio and the number of flows. (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>50</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>U</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>4</mn> </mrow> </semantics> </math>; (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>50</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>U</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>5</mn> <mo>.</mo> </mrow> </semantics> </math></p>
Full article ">Figure 11
<p>The relationship between the schedulability ratio and the network utilization.</p>
Full article ">Figure 12
<p>The relationships between the number of stolen flows and the number of nodes. (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>50</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>U</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>4</mn> </mrow> </semantics> </math>; (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>50</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>U</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>5</mn> </mrow> </semantics> </math>.</p>
Full article ">Figure 13
<p>The relationships between the number of stolen flows and the number of flows. (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>F</mi> <mo>=</mo> <mn>15</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>U</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>4</mn> </mrow> </semantics> </math>; (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>F</mi> <mo>=</mo> <mn>15</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>U</mi> <mo>=</mo> <mn>0</mn> <mo>.</mo> <mn>5</mn> <mo>.</mo> </mrow> </semantics> </math></p>
Figure 13 Cont.">
Full article ">Figure 14
<p>The relationships between the number of stolen flows and the network utilization. (<b>a</b>) <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>50</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>F</mi> <mo>=</mo> <mn>15</mn> </mrow> </semantics> </math>; (<b>b</b>) <math display="inline"> <semantics> <mrow> <mi>N</mi> <mo>=</mo> <mn>70</mn> </mrow> </semantics> </math>, <math display="inline"> <semantics> <mrow> <mi>F</mi> <mo>=</mo> <mn>15</mn> </mrow> </semantics> </math>.</p>
">
10526 KiB  
Article
LEDs: Sources and Intrinsically Bandwidth-Limited Detectors
by Roberto Filippo, Emanuele Taralli and Mauro Rajteri
Sensors 2017, 17(7), 1673; https://doi.org/10.3390/s17071673 - 20 Jul 2017
Cited by 60 | Viewed by 6412
Abstract
The increasing demand for light emitting diodes (LEDs) is driven by a number of application categories, including display backlighting, communications, signage, and general illumination. Nowadays, they have also become attractive candidates as new photometric standards. In recent years, LEDs have started to be [...] Read more.
The increasing demand for light emitting diodes (LEDs) is driven by a number of application categories, including display backlighting, communications, signage, and general illumination. Nowadays, they have also become attractive candidates as new photometric standards. In recent years, LEDs have started to be applied as wavelength-selective photo-detectors as well. Nevertheless, manufacturers' datasheets provide limited information about LEDs used as sources, such as their degradation with operating time (aging) or the shift of the emission spectrum as a function of the forward current. Likewise, as far as detection is concerned, information about the spectral responsivity of LEDs is missing. We investigated, mainly from a radiometric point of view, more than 50 commercial LEDs covering a wide variety of wavelength bands, ranging from ultraviolet (UV) to near infrared (NIR). Originally, the final aim was to find which LEDs could work best together as detector-emitter pairs for the creation of self-calibrating ground-viewing LED radiometers; however, the findings that we share in the following have a general validity that could be exploited in several sensing applications. Full article
(This article belongs to the Section Physical Sensors)
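Figure 5b describes a responsivity index obtained by integrating the LED photocurrent over the FWHM of its detection spectrum; the sketch below shows one straightforward way to compute such an index, with the FWHM located by a simple half-maximum threshold, since the authors' exact numerical procedure is not given in this listing.

```python
# Illustrative sketch: integrate the photocurrent over the FWHM of the
# detection spectrum, with the FWHM found by a half-maximum threshold.
import numpy as np

def responsivity_index(wavelength_nm, photocurrent):
    w = np.asarray(wavelength_nm, float)
    p = np.asarray(photocurrent, float)
    inside = np.where(p >= p.max() / 2.0)[0]      # samples at or above half maximum
    lo, hi = inside[0], inside[-1]
    return np.trapz(p[lo:hi + 1], w[lo:hi + 1])   # integral over the FWHM band
```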
Figure 1
<p>Experimental measurement diagram for emission parameters.</p>
Full article ">Figure 2
<p>Experimental setup for the detection parameters measurement.</p>
Full article ">Figure 3
<p>Examples from <a href="#sensors-17-01673-t001" class="html-table">Table 1</a> of normalized emission spectra of LED sources with lambda peak ranging from: (<b>a</b>) 350 nm to 490 nm; (<b>b</b>) 490 nm to 650 nm (first group); (<b>c</b>) 490 nm to 650 nm (second group); (<b>d</b>) 600 nm to 700 nm; and (<b>e</b>) 700 nm to 830 nm.</p>
Full article ">Figure 4
<p>Normalized spectral detection of LED detectors in the peak wavelength range from: (<b>a</b>) 350 nm to 450 nm; (<b>b</b>) 400 nm to 600 nm (first group); (<b>c</b>) 400 nm to 600 nm (second group); (<b>d</b>) 550 nm to 650 nm; and (<b>e</b>) 650 nm to 850 nm.</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>Representation of: (<b>a</b>) the <span class="html-italic">Radiant intensity</span> of LEDs used as radiation sources, and (<b>b</b>) the <span class="html-italic">Responsivity index</span> of LEDs used as radiation sensors. In both diagrams, the length of the segments corresponds to the FWHM, while in (<b>b</b>) the numerical value is the integral of the photocurrent over the FWHM.</p>
Full article ">Figure 6
<p>Normalized emission and detection spectra of the selected LEDs for: (<b>a</b>) one of the final radiometers; and (<b>b</b>) all five of the radiometers. Each radiometer mounts LED samples of the same family and they show good repeatability in terms of spectrum shape. The narrowing in the third band is due to the presence of a dome on top of the LED detector mounted in three of the radiometers.</p>
Full article ">Figure 7
<p>Radiance intensity as a function of the operating time in hours of the LED sources normalized to the initial value.</p>
">
4553 KiB  
Article
Theoretical Studies on Two-Photon Fluorescent Hg2+ Probes Based on the Coumarin-Rhodamine System
by Yujin Zhang and Jiancai Leng
Sensors 2017, 17(7), 1672; https://doi.org/10.3390/s17071672 - 20 Jul 2017
Cited by 11 | Viewed by 5111
Abstract
The development of fluorescent sensors for Hg2+ has attracted much attention due to the well-known adverse effects of mercury on biological health. In the present work, the optical properties of two newly-synthesized Hg2+ chemosensors based on the coumarin-rhodamine system (named Pro1 [...] Read more.
The development of fluorescent sensors for Hg2+ has attracted much attention due to the well-known adverse effects of mercury on biological health. In the present work, the optical properties of two newly-synthesized Hg2+ chemosensors based on the coumarin-rhodamine system (named Pro1 and Pro2) were systematically investigated using time-dependent density functional theory. It is shown that Pro1 and Pro2 are effective ratiometric fluorescent Hg2+ probes, which recognize Hg2+ by Förster resonance energy transfer and through bond energy transfer mechanisms, respectively. To further understand the mechanisms of the two probes, we have developed an approach to predict the energy transfer rate between the donor and acceptor. Using this approach, it can be inferred that Pro1 has a six times higher energy transfer rate than Pro2. Thus the influence of spacer group between the donor and acceptor on the sensing performance of the probe is demonstrated. Specifically, two-photon absorption properties of these two probes are calculated. We have found that both probes show significant two-photon responses in the near-infrared light region. However, only the maximum two-photon absorption cross section of Pro1 is greatly enhanced with the presence of Hg2+, indicating that Pro1 can act as a potential two-photon excited fluorescent probe for Hg2+. The theoretical investigations would be helpful to build a relationship between the structure and the optical properties of the probes, providing information on the design of efficient two-photon fluorescent sensors that can be used for biological imaging of Hg2+ in vivo. Full article
(This article belongs to the Special Issue Fluorescent Probes and Sensors)
Figure 1
<p>Molecular structures of Pro1, Pro1 + Hg<sup>2+</sup>, Pro2 and Pro2 + Hg<sup>2+</sup>.</p>
Full article ">Figure 2
<p>Optimized ground state geometries of Pro1, Pro1 + Hg<sup>2+</sup>, Pro2 and Pro2 + Hg<sup>2+</sup> with PCM simulating the dielectric of water.</p>
Full article ">Figure 3
<p>The OPA spectra of (<b>a</b>) Pro1 and Pro1 + Hg<sup>2+</sup>, (<b>b</b>) Pro2 and Pro2 + Hg<sup>2+</sup> with PCM simulating the dielectric of water.</p>
Full article ">Figure 4
<p>Molecular orbitals involved in the transition of the OPA peaks for (<b>a</b>) Pro1 and Pro1 + Hg<sup>2+</sup>; (<b>b</b>) Pro2 and Pro2 + Hg<sup>2+</sup> with PCM simulating the dielectric of water.</p>
Full article ">Figure 5
<p>Optimized first excited state geometries of Pro1, Pro1 + Hg<sup>2+</sup>, Pro2 and Pro2 + Hg<sup>2+</sup> with PCM simulating the dielectric of water.</p>
Full article ">Figure 6
<p>The OPE spectra of (<b>a</b>) Pro1 and Pro1 + Hg<sup>2+</sup>, (<b>b</b>) Pro2 and Pro2 + Hg<sup>2+</sup> with PCM simulating the dielectric of water.</p>
Full article ">Figure 7
<p>Molecular orbitals involved in the transition of the OPE peaks for (<b>a</b>) Pro1 and Pro1 + Hg<sup>2+</sup>; (<b>b</b>) Pro2 and Pro2 + Hg<sup>2+</sup> with PCM simulating the dielectric of water.</p>
Full article ">Figure 8
<p>Schematic of coordinate direction.</p>
Full article ">Figure 9
<p>Molecular orbitals involved in the transition of the TPA peaks for (<b>a</b>) Pro1 and Pro1 + Hg<sup>2+</sup>, (<b>b</b>) Pro2 and Pro2 + Hg<sup>2+</sup> with PCM simulating the dielectric of water.</p>
">
6260 KiB  
Article
Control Measurements of Crane Rails Performed by Terrestrial Laser Scanning
by Klemen Kregar, Jan Možina, Tomaž Ambrožič, Dušan Kogoj, Aleš Marjetič, Gašper Štebe and Simona Savšek
Sensors 2017, 17(7), 1671; https://doi.org/10.3390/s17071671 - 20 Jul 2017
Cited by 13 | Viewed by 5380
Abstract
This article presents a method for measuring the geometry of crane rails with terrestrial laser scanning (TLS). Two sets of crane rails were divided into segments, their planes were adjusted, and the characteristic rail lines were defined. We used their profiles to define [...] Read more.
This article presents a method for measuring the geometry of crane rails with terrestrial laser scanning (TLS). Two sets of crane rails were divided into segments, their planes were adjusted, and the characteristic rail lines were defined. We used their profiles to define the positional and altitude deviations of the rails, the span and height difference between the two rails, and we also verified that they complied with the Eurocode 3 standard. We tested the method on crane rails at the hydroelectric power plant in Krško and the thermal power plant in Brestanica. We used two scanning techniques: “pure” TLS (Riegel VZ-400) and “hybrid” TLS (Leica MS50) scanning. This article’s original contribution lies in the detailed presentation of the computations used to define the characteristic lines of the rails without using the numeric procedures from existing software packages. We also analysed the influence of segment length and point density on the rail geometry results, and compared the two laser scanning techniques. We also compared the results obtained by terrestrial laser scanning with the results obtained from the classic polar method, which served as a reference point for its precision. Full article
(This article belongs to the Section Remote Sensors)
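The article adjusts planes to segmented rail point clouds before deriving the characteristic lines; as a generic illustration of that first step, the sketch below fits a least-squares plane to a point segment via SVD and evaluates signed point-to-plane distances. It is the textbook procedure, not the authors' full characteristic-line computation.

```python
# Illustrative sketch: SVD-based least-squares plane fit for one rail
# segment, plus signed point-to-plane distances.
import numpy as np

def fit_plane(points):
    """points: (n, 3) array -> (centroid, unit normal) of the best-fit plane."""
    pts = np.asarray(points, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    normal = vt[-1]                               # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

def plane_distances(points, centroid, normal):
    return (np.asarray(points, float) - centroid) @ normal
```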
Figure 1
<p>Span tolerance and height difference between the rails [<a href="#B17-sensors-17-01671" class="html-bibr">17</a>].</p>
Full article ">Figure 2
<p>A special platform with two precise prisms.</p>
Full article ">Figure 3
<p>Crane rails in the machine room in the hydroelectric power plant (HPP) in Krško.</p>
Full article ">Figure 4
<p>The classic TPS method for measuring crane rails.</p>
Full article ">Figure 5
<p>The scans of the sections, from both points.</p>
Full article ">Figure 6
<p>Crane rails in the gas block hall in the thermal power plant (TPP) in Brestanica.</p>
Full article ">Figure 7
<p>Point cloud in the gas block hall.</p>
Full article ">Figure 8
<p>The scan of the rails.</p>
Full article ">Figure 9
<p>The points that had to be manually eliminated from the scanned point cloud are marked in red.</p>
Full article ">Figure 10
<p>The algorithm used for calculating the point clouds and characteristic lines.</p>
Full article ">Figure 11
<p>Three-dimensional (3D) presentation of the part of the crane rails with profiles.</p>
Full article ">Figure 12
<p>(<b>a</b>) The rails and their positional deviations, and (<b>b</b>) the span between the rails, with the standard deviations, for the crane in the HPP in Krško measured with TPS.</p>
Full article ">Figure 13
<p>(<b>a</b>) The vertical deviations and (<b>b</b>) the height differences between the rails, with their standard deviations, for the crane in the HPP in Krško measured with TPS.</p>
Full article ">Figure 14
<p>(<b>a</b>) The positional deviation of the rails and (<b>b</b>) the span between them, with their standard deviations, for the crane in the HPP in Krško measured with TLS.</p>
Full article ">Figure 15
<p>(<b>a</b>) The vertical deviations and (<b>b</b>) the height differences between the rails, with their standard deviations, for the crane in the HPP in Krško measured with TLS.</p>
Full article ">Figure 16
<p>(<b>a</b>) The rails’ deviation and their positions, and (<b>b</b>) the span between the rails, with the standard deviations, for the crane in the TPP in Brestanica measured with TLS.</p>
Full article ">Figure 17
<p>(<b>a</b>) The vertical deviations and (<b>b</b>) the height differences between the rails, with their standard deviations, for the crane in the TPP in Brestanica measured with TLS.</p>
Full article ">Figure 18
<p>(<b>a</b>) The positional deviations and (<b>b</b>) the spans in relation to the segment length.</p>
Full article ">Figure 19
<p>(<b>a</b>) The vertical variances and (<b>b</b>) the height differences in relation to the length of the segment.</p>
Full article ">Figure 20
<p>Precision of the characteristic lines in relation to the length of the segment.</p>
Full article ">Figure 21
<p>(<b>a</b>) The positional deviations of the two rails and (<b>b</b>) the spans in relation to the point density.</p>
Full article ">Figure 22
<p>(<b>a</b>) The vertical deviations and (<b>b</b>) the height differences in relation to point density.</p>
Full article ">Figure 23
<p>(<b>a</b>) Comparison of the horizontal deviations and (<b>b</b>) the spans between the rails.</p>
Full article ">Figure 24
<p>(<b>a</b>) Comparison of the vertical deviations and (<b>b</b>) height differences between the rails.</p>
Full article ">
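To make the geometric checks described in the crane-rail abstract above more concrete, the minimal sketch below fits a straight reference axis to the rail-head points of one segment by a principal-component (least-squares) fit and evaluates horizontal deviations and the span between the two rails. It is an illustration only: the function names (fit_rail_axis, horizontal_deviation, span_between_rails), the synthetic data, and the choice of a PCA line fit are assumptions, not the computation procedure published in the paper.

```python
import numpy as np

def fit_rail_axis(points):
    """Least-squares 3D line through rail-head points via PCA.

    points: (N, 3) array of x, y, z coordinates of one rail segment.
    Returns (centroid, unit direction vector)."""
    centroid = points.mean(axis=0)
    # Principal direction of the centred point cloud approximates the rail axis.
    _, _, vt = np.linalg.svd(points - centroid)
    direction = vt[0] / np.linalg.norm(vt[0])
    return centroid, direction

def horizontal_deviation(point, centroid, direction):
    """Signed perpendicular offset of a profile point from the fitted axis,
    evaluated in the horizontal (x, y) plane only."""
    d2 = direction[:2] / np.linalg.norm(direction[:2])
    v2 = point[:2] - centroid[:2]
    # 2D cross product: component of the offset orthogonal to the rail axis.
    return float(d2[0] * v2[1] - d2[1] * v2[0])

def span_between_rails(profile_left, profile_right):
    """Horizontal distance between corresponding profile points of the two rails."""
    return float(np.linalg.norm(profile_left[:2] - profile_right[:2]))

if __name__ == "__main__":
    # Synthetic 30 m rail with a few millimetres of lateral noise (illustrative only).
    rng = np.random.default_rng(0)
    left = np.c_[np.linspace(0.0, 30.0, 100),
                 rng.normal(0.0, 0.002, 100),
                 rng.normal(10.0, 0.001, 100)]
    c, d = fit_rail_axis(left)
    dev = np.abs([horizontal_deviation(p, c, d) for p in left])
    print(f"max horizontal deviation: {dev.max() * 1000:.1f} mm")
```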
5027 KiB  
Article
Crack Detection in Concrete Tunnels Using a Gabor Filter Invariant to Rotation
by Roberto Medina, José Llamas, Jaime Gómez-García-Bermejo, Eduardo Zalama and Miguel José Segarra
Sensors 2017, 17(7), 1670; https://doi.org/10.3390/s17071670 - 20 Jul 2017
Cited by 75 | Viewed by 9593
Abstract
In this article, a system for the detection of cracks in concrete tunnel surfaces, based on image sensors, is presented. Both data acquisition and processing are covered. Linear cameras and proper lighting are used for data acquisition. The required resolution of the camera [...] Read more.
In this article, a system for the detection of cracks in concrete tunnel surfaces, based on image sensors, is presented. Both data acquisition and processing are covered. Linear cameras and proper lighting are used for data acquisition. The required resolution of the camera sensors and the number of cameras are discussed in terms of the crack size and the tunnel type. Data processing is done by applying a new method, a Gabor filter invariant to rotation, which allows the detection of cracks in any direction. The parameter values of this filter are set using a modified genetic algorithm based on the Differential Evolution optimization method. The pixels belonging to cracks are detected with a balanced accuracy of 95.27%, thus improving the results of previous approaches. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2017)
Show Figures

Figure 1: Mechanical defects in tunnels: (a) fissures; (b) detachments on precasts; (c) exposed steel frames.
Figure 2: Stages of the automatic visual inspection process.
Figure 3: Two examples of camera position distribution, and the portion of the tunnel section covered by each camera (the field of view of the cameras is shaded in blue). (a) The working distance (W_D) is smaller than the radius of the tunnel (R_T). (b) The working distance (W_D) is greater than the radius of the tunnel (R_T), so only half of the cameras can be placed in the same plane.
Figure 4: Example of the application of the rotation-invariant Gabor filter. The original image and the result of applying the rotation-invariant Gabor filter to it are shown on the left. The Gabor filters applied along 16 orientations (from 0° to 180°) are shown on the right, along with their maximum values (a–p).
Figure 5: Prototype of the tunnel inspection platform.
Figure 6: Evolution of the weighted error with the number of generations for the rotation-invariant Gabor filter.
Figure 7: Example of application of the rotation-invariant Gabor filter. (a,d,g) are the normalized original images; (b,e,h) are the filtered images; (c,f,i) are the segmented images.
Figure 8: Result of applying the proposed algorithm to a large area of the tunnel. (a) Original image; (b) filtered image; (c) segmented image.
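The abstract and the caption of Figure 4 describe applying Gabor filters along 16 orientations from 0° to 180° and keeping the maximum response per pixel. A hedged sketch of that idea using OpenCV is given below; the kernel parameters are placeholders (the paper tunes them with a Differential-Evolution-based genetic algorithm), and the thresholding rule in the usage comment is an assumption, not the authors' segmentation step.

```python
import cv2
import numpy as np

def rotation_invariant_gabor(gray, ksize=21, sigma=4.0, lambd=10.0,
                             gamma=0.5, psi=0.0, n_orient=16):
    """Apply Gabor filters at n_orient orientations in [0, pi) and keep,
    for every pixel, the maximum response over all orientations."""
    gray = gray.astype(np.float32) / 255.0
    response = np.full(gray.shape, -np.inf, dtype=np.float32)
    for theta in np.linspace(0.0, np.pi, n_orient, endpoint=False):
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                    lambd, gamma, psi, ktype=cv2.CV_32F)
        filtered = cv2.filter2D(gray, cv2.CV_32F, kernel)
        # Pixel-wise maximum makes the combined response orientation-independent.
        response = np.maximum(response, filtered)
    return response

# Usage sketch (illustrative threshold, not the paper's segmentation):
# img = cv2.imread("tunnel_patch.png", cv2.IMREAD_GRAYSCALE)
# resp = rotation_invariant_gabor(img)
# crack_mask = resp > resp.mean() + 2 * resp.std()
```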
1419 KiB  
Article
User Interaction Modeling and Profile Extraction in Interactive Systems: A Groupware Application Case Study
by Cristina Tîrnăucă, Rafael Duque and José L. Montaña
Sensors 2017, 17(7), 1669; https://doi.org/10.3390/s17071669 - 20 Jul 2017
Cited by 5 | Viewed by 5028
Abstract
A relevant goal in human–computer interaction is to produce applications that are easy to use and well-adjusted to their users’ needs. To address this problem, it is important to know how users interact with the system. This work constitutes a methodological contribution capable [...] Read more.
A relevant goal in human–computer interaction is to produce applications that are easy to use and well-adjusted to their users’ needs. To address this problem, it is important to know how users interact with the system. This work constitutes a methodological contribution capable of identifying the context of use (synchronous or asynchronous) in which users interact with a groupware application and of providing, using machine learning techniques, generative models of how users behave. Additionally, these models are transformed into natural-language text that describes the main characteristics of the users’ interaction with the system. Full article
(This article belongs to the Special Issue Selected Papers from UCAmI 2016)
Show Figures

Figure 1: Main steps of the methodology.
Figure 2: Hierarchical clustering for the group of users in Example 2: (a) dendrogram for single linkage; (b) dendrogram for complete linkage; (c) number of clusters against distance between clusters for single linkage; (d) number of clusters against distance between clusters for complete linkage.
Figure 3: User interface of the groupware system. Main user interface (left), chat tool (center), voting panel (right).
Figure 4: Distance against number of clusters for the complete linkage criterion.
Figure 5: Weighted automata for two profiles; colors indicate the panel to which a given action belongs: orange for Chat, blue for New Bet, green for Proposals, pink for Tutorial and salmon for My bets.
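The figure captions above refer to hierarchical clustering of users with single and complete linkage and to cutting the dendrogram at a chosen distance to obtain user groups. The sketch below illustrates only that clustering step; the action categories and profile vectors are hypothetical, and the paper's own feature extraction and the subsequent weighted-automata models are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical user profiles: each row is one user's relative frequency of
# actions in the groupware panels (e.g. Chat, New Bet, Proposals, Tutorial, My bets).
profiles = np.array([
    [0.50, 0.10, 0.20, 0.10, 0.10],
    [0.45, 0.15, 0.20, 0.10, 0.10],
    [0.05, 0.40, 0.35, 0.10, 0.10],
    [0.10, 0.35, 0.40, 0.05, 0.10],
])

# Agglomerative clustering with complete linkage (single linkage is analogous).
Z = linkage(profiles, method="complete", metric="euclidean")

# Cut the dendrogram at a distance threshold to obtain the user groups.
labels = fcluster(Z, t=0.3, criterion="distance")
print(labels)  # e.g. [1 1 2 2]: two distinct interaction profiles
```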
2488 KiB  
Article
Online Denoising Based on the Second-Order Adaptive Statistics Model
by Sheng-Lun Yi, Xue-Bo Jin, Ting-Li Su, Zhen-Yun Tang, Fa-Fa Wang, Na Xiang and Jian-Lei Kong
Sensors 2017, 17(7), 1668; https://doi.org/10.3390/s17071668 - 20 Jul 2017
Cited by 12 | Viewed by 4973
Abstract
Online denoising is motivated by real-time applications in industrial processes, where the data must be usable soon after it is collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a [...] Read more.
Online denoising is motivated by real-time applications in industrial processes, where the data must be usable soon after it is collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method is proposed for processing practical measurement data with colored noise; the characteristics of the colored noise are captured in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first estimates the system state based on the second-order adaptive statistics model, and the second updates the adaptive parameter in the model using the Yule–Walker algorithm. Specifically, the state estimation is implemented recursively via the Kalman filter, which enables online operation. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. The results show that the proposed method not only handles signals with colored noise, but also achieves a good tradeoff between efficiency and accuracy. Full article
Show Figures

Figure 1: The flow chart of the proposed online denoising method.
Figure 2: The configuration of the experiment.
Figure 3: The real data and measurement data.
Figure 4: The denoised result of the adaptive statistics models.
Figure 5: The error comparison of the adaptive statistics models for online denoising.
Figure 6: Covariance and RMSE of the adaptive statistics models.
Figure 7: The real data and measurement data.
Figure 8: Covariance and RMSE of the adaptive statistics models of the last 5000 points.
Figure 9: The denoised result and the reference value.
Figure 10: The reference value and the denoised result.
Figure 11: The signal with noise and the reference value.
Figure 12: The reference value and the signal with noise.
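The abstract above describes a closed loop in which a Kalman filter estimates the state while the Yule–Walker algorithm updates an adaptive model parameter. The sketch below illustrates that loop with a deliberately simplified scalar, first-order model (the paper uses a second-order adaptive statistics model); the noise variances q and r, the window size, and the function names are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def yule_walker_ar1(window):
    """Estimate an AR(1) coefficient from a sliding window of samples
    via the Yule-Walker (autocorrelation) equations."""
    x = np.asarray(window) - np.mean(window)
    r0 = np.dot(x, x)
    r1 = np.dot(x[:-1], x[1:])
    return r1 / r0 if r0 > 0 else 0.0

def online_denoise(measurements, q=1e-4, r=1e-2, window_size=50):
    """Closed-loop sketch: a scalar Kalman filter whose transition
    coefficient is re-estimated online with Yule-Walker, so the colored
    character of the signal is tracked adaptively."""
    x_est, p_est = measurements[0], 1.0
    phi = 1.0                      # adaptive transition parameter
    window, out = [measurements[0]], []
    for z in measurements:
        # Predict with the current adaptive parameter.
        x_pred = phi * x_est
        p_pred = phi * p_est * phi + q
        # Update with the new measurement.
        k = p_pred / (p_pred + r)
        x_est = x_pred + k * (z - x_pred)
        p_est = (1 - k) * p_pred
        out.append(x_est)
        # Closed loop: refresh the AR parameter from recent estimates.
        window.append(x_est)
        if len(window) > window_size:
            window.pop(0)
            phi = yule_walker_ar1(window)
    return np.array(out)
```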