Sensors, Volume 18, Issue 1 (January 2018) – 317 articles

Cover Story: Combining heterogeneous sensors with artificial intelligence techniques for the simultaneous analysis and detection of the different problems that cattle may present can help optimize and increase livestock yields. This work presents the design and implementation of an agent-based monitoring system for real farms and livestock. A set of applications allows farmers to monitor the livestock remotely. Parameters specific to each animal can be studied, such as physical activity, temperature, estrus cycle state, and the moment the animal goes into labor. View the paper here.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 4440 KiB  
Article
EKF–GPR-Based Fingerprint Renovation for Subset-Based Indoor Localization with Adjusted Cosine Similarity
by Junhua Yang, Yong Li, Wei Cheng, Yang Liu and Chenxi Liu
Sensors 2018, 18(1), 318; https://doi.org/10.3390/s18010318 - 22 Jan 2018
Cited by 18 | Viewed by 6640
Abstract
Received Signal Strength Indicator (RSSI) fingerprinting has become a prevailing approach for indoor localization. However, collecting fingerprints is repetitive and time-consuming, and once the original fingerprint radio map is built, upgrading it is laborious. In this paper, we describe a Fingerprint Renovation System (FRS) based on crowdsourcing, which avoids manual labour in keeping the fingerprint database up to date. FRS combines an Extended Kalman Filter (EKF) with Gaussian Process Regression (GPR) to estimate the current state of the radio map from the original fingerprints. A subset-acquisition method further reduces the heavy computation caused by large numbers of reference points (RPs), while adjusted cosine similarity (ACS) is employed in the online phase to suppress the outliers produced by plain cosine similarity. Experiments and simulations in a real Wireless Fidelity (Wi-Fi) environment demonstrate significant performance improvements: FRS improves accuracy by 19.6% in the surveyed area compared to the un-renovated radio map, and the proposed subset algorithm reduces computation. Full article
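In its usual formulation, adjusted cosine similarity is simply cosine similarity computed on mean-centred vectors, which removes the per-device RSSI offset that makes raw cosine similarity score outlier fingerprints too highly. A minimal sketch of that idea (the radio-map values and the ranking step are invented for illustration, not taken from the paper):

```python
import numpy as np

def adjusted_cosine_similarity(rssi_a, rssi_b):
    """Cosine similarity on mean-centred RSSI vectors.

    Centring removes the per-device offset that lets raw cosine
    similarity report spuriously high scores for outlier fingerprints.
    """
    a = np.asarray(rssi_a, dtype=float)
    b = np.asarray(rssi_b, dtype=float)
    a_c = a - a.mean()
    b_c = b - b.mean()
    denom = np.linalg.norm(a_c) * np.linalg.norm(b_c)
    if denom == 0.0:
        return 0.0
    return float(np.dot(a_c, b_c) / denom)

# Toy radio map: RSSI fingerprints (dBm) of three reference points (RPs)
# over four access points -- illustrative numbers only.
radio_map = {
    "RP1": [-45, -60, -72, -80],
    "RP2": [-50, -58, -70, -79],
    "RP3": [-70, -40, -55, -90],
}
online = [-47, -59, -71, -81]  # online-phase measurement

# Rank RPs by similarity to the online measurement.
best_rp = max(radio_map, key=lambda rp: adjusted_cosine_similarity(radio_map[rp], online))
```

In a full pipeline the top-ranked RPs would feed a weighted k-nearest-neighbour position estimate; here only the similarity step is shown.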
Figure 1. Fingerprint heat maps of one access point (AP; Media Access Control (MAC): B8-F8-83-CF-F8-96) on two different days. (a) 27 September 2017; (b) 27 October 2017.
Figure 2. System overview of the Fingerprint Renovation System (FRS). EKF, Extended Kalman Filter; GPR, Gaussian Process Regression; RSSI, received signal strength indicator; TD, target device; ACS, Adjusted Cosine Similarity.
Figure 3. Block diagram of the EKF–GPR algorithm.
Figure 4. The process of choosing the radio map subset. (a) The original reference points (RPs) in an indoor environment, shown as black circle dots; (b) subset S1, shown as purple circle dots; (c) in the online phase, five localization RPs in S1; (d) subset S2, shown as red triangles.
Figure 5. Experiment on ACS in the real environment. APs are shown as red circle dots, the two test points are shown as blue crosses, and the five points lie on one straight line.
Figure 6. Plan of the fourth floor of the Northwestern Polytechnical University (NWPU) campus library.
Figure 7. The distribution of RPs, target points (TPs), and crowdsourcing-based points in the experimental area.
Figure 8. Localization error before (a) and after (b) radio map renovation.
Figure 9. Localization results of four TPs. Original: localization error obtained from Database_3; FAR: localization error obtained from Database_4. Both are independent of the percentage of removed fingerprints. (a) TP1; (b) TP2; (c) TP3; (d) TP4.
Figure 10. Ten crowdsourcing-based points around TP1 and TP2, away from TP3 and TP4.
Figure 11. (a) Localization error of TP1 and TP2 compared to TP3 and TP4 under different situations of crowdsourcing-based points; (b) RMSEs of the four TPs.
Figure 12. Localization error of TP1, TP2, TP3, and TP4 with different numbers of crowdsourcing-based points.
8 pages, 2149 KiB  
Communication
Voltammetric Response of Alizarin Red S-Confined Film-Coated Electrodes to Diol and Polyol Compounds: Use of Phenylboronic Acid-Modified Poly(ethyleneimine) as Film Component
by Shigehiro Takahashi, Iwao Suzuki, Takuto Ojima, Daichi Minaki and Jun-ichi Anzai
Sensors 2018, 18(1), 317; https://doi.org/10.3390/s18010317 - 22 Jan 2018
Cited by 5 | Viewed by 5303
Abstract
Alizarin red S (ARS) was confined in layer-by-layer (LbL) films composed of phenylboronic acid-modified poly(ethyleneimine) (PBA-PEI) and carboxymethylcellulose (CMC) to study the voltammetric response to diol and polyol compounds. The LbL film-coated gold (Au) electrode and quartz slide were immersed in an ARS solution to uptake ARS into the film. UV-visible absorption spectra of ARS-confined LbL film suggested that ARS formed boronate ester (ARS-PBS) in the film. The cyclic voltammetry of the ARS-confined LbL film-coated electrodes exhibited oxidation peaks at −0.50 and −0.62 V, which were ascribed to the oxidation reactions of ARS-PBS and free ARS, respectively, in the LbL film. The peak current at −0.62 V increased upon the addition of diol or polyol compounds such as L-dopa, glucose, and sorbitol into the solution, depending on the concentration, whereas the peak current at −0.50 V decreased. The results suggest a possible use of ARS-confined PBA-PEI/CMC LbL film-coated Au electrodes for the construction of voltammetric sensors for diol and polyol compounds. Full article
(This article belongs to the Section Biosensors)
Figure 1. Redox reactions of ARS.
Figure 2. Chemical structure of PBA-PEI.
Figure 3. UV-visible absorption spectra of (PBA-PEI/CMC)₁₀PBA-PEI film (a); ARS-immobilized (PBA-PEI/CMC)₁₀PBA-PEI film (b); PBA-PEI (0.05 mg·mL⁻¹) (c); and ARS (0.1 mM) (d).
Figure 4. Formation of the boronate ester ARS-PBA in the film.
Figure 5. Typical CVs (A) and DPVs (B) of Au electrodes coated with (PBA-PEI/CMC)₁₀PBA-PEI film with (b) and without (a) ARS. CVs were recorded at pH 9.0.
Figure 6. (A) Effect of the scan rate on the CV of the ARS-confined (PBA-PEI/CMC)₁₀PBA-PEI film-coated electrode at pH 9.0; (B) plots of the anodic and cathodic peak currents (Ip) at −0.6 to −0.7 V of the CV vs. scan rate; (C) plots of the peak current of the CV vs. the square root of the scan rate.
Figure 7. (A) DPVs of the ARS-confined (PBA-PEI/CMC)₁₀PBA-PEI film-coated electrodes in the presence of sorbitol, and the binding equilibrium between ARS-PBA and sorbitol (inset); (B) changes in the peak current in DPVs at ca. −0.67 V as a function of the concentration of L-dopa (a), sorbitol (b), and glucose (c). ΔIp denotes the increase in the peak current around −0.67 V in the presence of L-dopa, sorbitol, and glucose. DPVs were recorded at pH 9.0.
23 pages, 9989 KiB  
Article
Mapping of Rice Varieties and Sowing Date Using X-Band SAR Data
by Hoa Phan, Thuy Le Toan, Alexandre Bouvet, Lam Dao Nguyen, Tien Pham Duy and Mehrez Zribi
Sensors 2018, 18(1), 316; https://doi.org/10.3390/s18010316 - 22 Jan 2018
Cited by 57 | Viewed by 8795
Abstract
Rice is a major staple food for nearly half of the world's population and contributes considerably to the global agricultural economy. While spaceborne Synthetic Aperture Radar (SAR) data have proved to have great potential for mapping rice cultivation area, few studies have provided practical information that meets user requirements. In rice-growing regions where the inter-field crop calendar is not uniform, such as the Mekong Delta in Vietnam, knowledge of the start of season on a field basis, along with the planted rice varieties, is very important for correct field management (timing of irrigation, fertilization, chemical treatment, and harvest) and for market assessment of rice production. The objective of this study is to develop SAR-based methods to retrieve, in addition to the rice grown area, the sowing date and the distinction between long-cycle and short-cycle varieties. The study uses X-band SAR data from COSMO-SkyMed acquired from 19 August to 23 November 2013 over the Chau Thanh and Thoai Son districts of An Giang province, Vietnam, an area characterized by a complex cropping pattern. The SAR data have been analyzed as a function of rice parameters, and the temporal and polarization behaviors of the radar backscatter of different rice varieties have been interpreted physically. New backscatter indicators for detecting rice paddy area, estimating the sowing date, and mapping short-cycle and long-cycle rice varieties have been developed and assessed. Good accuracy has been found: 92% for rice grown area, 96% for the long/short-cycle distinction, and a root mean square error of 4.3 days for the sowing date. The generality of the methods with respect to rice cultural practices and SAR data characteristics is discussed. Full article
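The indicators described above are built from the temporal behaviour of the HH backscatter and the HH/VV polarization ratio. A minimal sketch of that kind of decision rule, with entirely invented backscatter values and placeholder thresholds (not the paper's calibrated ones):

```python
import numpy as np

def to_db(sigma0_linear):
    """Convert linear backscatter (power) to decibels."""
    return 10.0 * np.log10(sigma0_linear)

# Hypothetical HH and VV backscatter time series (linear power) for one
# field over six acquisition dates -- invented numbers for illustration.
hh = np.array([0.010, 0.025, 0.060, 0.110, 0.150, 0.170])
vv = np.array([0.012, 0.020, 0.035, 0.050, 0.060, 0.065])

hh_db = to_db(hh)
ratio_db = to_db(hh) - to_db(vv)  # HH/VV polarization ratio in dB

# Illustrative decision rules (thresholds are placeholders, not the
# paper's calibrated values): a strong temporal increase of HH flags a
# rice field; a high peak HH/VV ratio flags a long-cycle variety.
is_rice = (hh_db.max() - hh_db.min()) > 8.0
cycle = "long" if ratio_db.max() > 3.0 else "short"
```

In the paper the thresholds are derived from the sampled fields and the sowing date is estimated from the timing of the backscatter rise; the sketch shows only the dB conversion and thresholding skeleton.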
Figure 1. Location of An Giang Province and the Chau Thanh and Thoai Son districts in the Mekong Delta, Vietnam.
Figure 2. COSMO-SkyMed data available for the study and the rice crop calendar during this period in An Giang Province.
Figure 3. Forty rice field samples and 64 rice (blue points)/non-rice (yellow points) check points used for result validation. CSK image (magenta: HH, green: VV) of the Chau Thanh and Thoai Son districts, An Giang Province, 19 August 2013.
Figure 4. Temporal variation of plant height (versus days after sowing): (a) 9 sampling fields of long-cycle rice plants; (b) 9 sampling fields of short-cycle rice plants. The fields were chosen to have close sowing dates and the same planting practices.
Figure 5. Ground survey photos at 10, 20, 45, 64, and 91 days after sowing: (a) long-cycle rice variety; (b) short-cycle rice variety.
Figure 6. Example of RGB combinations of different dates (R: 19/08/2013, G: 04/09/2013, B: 20/09/2013) from CSK images, HH polarization, over rice fields in An Giang Province.
Figure 7. Variation of (a) HH and (b) VV backscattering coefficients, and (c) HH/VV, of the five selected short-cycle sampled fields extracted from CSK images of six dates, versus the sowing date of each field.
Figure 8. Variation of (a) HH and (b) VV backscattering coefficients, and (c) HH/VV, of the 5 selected short-cycle sampled fields (in green) and 5 long-cycle fields (in red) extracted from CSK images of 6 dates, versus the sowing date of each field.
Figure 9. Temporal variation and standard deviation of (a) the HH backscattering coefficient and (b) the polarization ratio HH/VV of the four LULC sampled classes: forest in green, urban in red, river (water) in blue, and rice in violet, extracted from CSK images of seven dates.
Figure 10. Flowchart of the rice monitoring method using multi-temporal CSK images.
Figure 11. Rice and non-rice map of the Autumn-Winter season in 2013. Red: rice in Autumn-Winter; blue: water; green: built areas and trees; white: no rice detected.
Figure 12. Sowing date map for the Autumn-Winter crop in the Chau Thanh and Thoai Son districts, An Giang Province.
Figure 13. Rice varieties map using the polarization ratio method. Orange: long-cycle rice crop (30.3%); red: short-cycle rice crop (69.7%); blue: water; green: built areas and trees; white: no rice detected.
Figure 14. Estimated area of the Autumn-Winter crop from CSK vs. agency data for 15 communes in Thoai Son district, An Giang Province. The black line represents the linear regression between the two datasets.
Figure 15. Retrieved sowing date of the Autumn-Winter crop from CSK vs. ground data collected for 30 rice fields in the Chau Thanh and Thoai Son districts, An Giang Province. The blue line represents the linear regression between the two datasets.
19 pages, 3381 KiB  
Article
An Inverse Neural Controller Based on the Applicability Domain of RBF Network Models
by Alex Alexandridis, Marios Stogiannos, Nikolaos Papaioannou, Elias Zois and Haralambos Sarimveis
Sensors 2018, 18(1), 315; https://doi.org/10.3390/s18010315 - 22 Jan 2018
Cited by 9 | Viewed by 5971
Abstract
This paper presents a novel generic methodology for controlling nonlinear systems, using inverse radial basis function neural network models, which may combine diverse data originating from various sources. The algorithm starts by applying the particle swarm optimization-based non-symmetric variant of the fuzzy means (PSO-NSFM) algorithm so that an approximation of the inverse system dynamics is obtained. PSO-NSFM offers models of high accuracy combined with small network structures. Next, the applicability domain concept is suitably tailored and embedded into the proposed control structure in order to ensure that extrapolation is avoided in the controller predictions. Finally, an error correction term, estimating the error produced by unmodeled dynamics and/or unmeasured external disturbances, is included in the control scheme to increase robustness. The resulting controller guarantees bounded input-bounded state (BIBS) stability for the closed-loop system when the open-loop system is BIBS stable. The proposed methodology is evaluated on two different control problems, namely, the control of an experimental armature-controlled direct current (DC) motor and the stabilization of a highly nonlinear simulated inverted pendulum. For each of these problems, appropriate case studies are tested, in which a conventional neural controller employing inverse models and a PID controller are also applied. The results reveal the ability of the proposed control scheme to handle and manipulate diverse data through a data fusion approach and illustrate the superiority of the method in terms of faster and less oscillatory responses. Full article
(This article belongs to the Special Issue Soft Sensors and Intelligent Algorithms for Data Fusion)
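The building block of the controller above is an RBF network with Gaussian basis functions. A minimal sketch of the forward pass of such a network (toy centres, widths, and weights, not the PSO-NSFM-trained model from the paper):

```python
import numpy as np

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Forward pass of an RBF network with Gaussian basis functions.

    x        : (d,) input vector
    centers  : (L, d) hidden-node centres
    widths   : (L,) Gaussian widths sigma_l
    weights  : (L,) linear output-layer weights
    """
    x = np.asarray(x, dtype=float)
    dist2 = np.sum((centers - x) ** 2, axis=1)   # squared distances to centres
    phi = np.exp(-dist2 / (2.0 * widths ** 2))   # Gaussian activations
    return float(phi @ weights + bias)

# Toy network with two hidden nodes (all parameters illustrative).
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([0.5, 0.5])
weights = np.array([1.0, -1.0])

y = rbf_forward([0.0, 0.0], centers, widths, weights)
```

In an inverse-model controller, the network input would be the current and desired states and the output the manipulated variable; the applicability-domain check described in the abstract would then constrain the query to lie within the region covered by the training data.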
Figure 1. Typical structure of an RBF network with Gaussian basis functions.
Figure 2. Closed loop with a simple RBF IN control scheme.
Figure 3. Calculating the bounds on the value of ω(k) that guarantee that extrapolation is avoided. The 3-D surface represents the AD of the RBF controller.
Figure 4. Closed loop with the RBF INNER control scheme, taking into account the applicability domain and the robustifying term.
Figure 5. An armature-controlled DC motor.
Figure 6. Armature-controlled experimental DC motor: (a) controller responses; (b) controller actions.
Figure 7. An inverted pendulum.
Figure 8. Inverted pendulum, M = 1 kg: (a) controller responses; (b) controller actions.
Figure 9. Inverted pendulum: (a) M = 1.4 kg, controller responses; (b) M = 2.0 kg, controller responses.
9 pages, 7067 KiB  
Article
A Real-Time Ultraviolet Radiation Imaging System Using an Organic Photoconductive Image Sensor
by Toru Okino, Seiji Yamahira, Shota Yamada, Yutaka Hirose, Akihiro Odagawa, Yoshihisa Kato and Tsuyoshi Tanaka
Sensors 2018, 18(1), 314; https://doi.org/10.3390/s18010314 - 22 Jan 2018
Cited by 12 | Viewed by 9624
Abstract
We have developed a real-time ultraviolet (UV) imaging system that can visualize both invisible UV light and a visible (VIS) background scene in an outdoor environment. As the UV/VIS image sensor, an organic photoconductive film (OPF) imager is employed. The OPF has an intrinsically higher sensitivity in the UV wavelength region than conventional consumer Complementary Metal Oxide Semiconductor (CMOS) image sensors (CIS) or Charge Coupled Devices (CCD). As particular examples, imaging of a hydrogen flame and of corona discharge is demonstrated. UV images overlaid on background scenes are produced simply by on-board background subtraction. The system can image UV signals four orders of magnitude weaker than the VIS background. It is applicable not only to future hydrogen supply stations but also to other UV/VIS monitoring systems requiring UV sensitivity under strong visible radiation, such as power supply substations. Full article
(This article belongs to the Special Issue Special Issue on the 2017 International Image Sensor Workshop (IISW))
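The on-board background subtraction described above (subtract the stored VIS reference frame, amplify the weak residual UV signal, and add it back onto the VIS background) can be sketched as follows; the gain and pixel values are illustrative, not the system's:

```python
import numpy as np

def synthesize_uv_overlay(vis_ref, uv_plus_vis, gain=50.0, full_scale=255.0):
    """Sketch of the signal flow: isolate the weak UV component by
    subtracting the stored VIS reference from the UV+VIS frame, amplify
    it, and add it back onto the VIS background.  `gain` and
    `full_scale` are illustrative values, not the system's."""
    uv = np.clip(uv_plus_vis.astype(float) - vis_ref.astype(float), 0.0, None)
    overlay = vis_ref.astype(float) + gain * uv
    return np.clip(overlay, 0.0, full_scale).astype(np.uint8)

# Toy 4x4 frames: a faint UV blob (value 2) on an identical background.
vis = np.full((4, 4), 100, dtype=np.uint8)
uv_vis = vis.copy()
uv_vis[1:3, 1:3] += 2
frame = synthesize_uv_overlay(vis, uv_vis)
```

The amplification step is what lets a UV signal far weaker than the visible background become conspicuous in the synthesized image.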
Figure 1. Quantum efficiency spectra of the present OPF CMOS imager (red), a Si CIS (black), and an AlGaN photodiode (blue) [7].
Figure 2. A cross-sectional view of an organic photoconductive film (OPF) CMOS imager.
Figure 3. A schematic of the pixel circuit.
Figure 4. A photograph of the developed imaging system (camera).
Figure 5. A signal flow diagram of the developed UV imaging system. (a) is the reference visible-light background image; (b) is the UV image of interest overlapped on the VIS background image; (c) is the UV image extracted by subtracting (a) from (b); (d) is the synthesized image obtained by adding (a) to amplified (c).
Figure 6. Flame images of hydrogen (a) and propane (b) taken by an ordinary CIS, showing that the hydrogen flame is entirely invisible.
Figure 7. Light intensity spectrum of the hydrogen flame, with a peak at around 310 nm.
Figure 8. Images of the tip of the burner: (a) a reference visible-light (VIS) image stored in the DDR; (b) an image including the UV light of the hydrogen flame and the visible background (VIS + UV); (c) the hydrogen flame image extracted by subtracting the VIS image (a) from the UV + VIS image (b); (d) the hydrogen flame image obtained by synthesizing (a) and (c).
Figure 9. UV image of hydrogen gas extracted by a UV optical filter system.
Figure 10. UV images of hydrogen gas generated in a rolling shutter mode (a) and a global shutter mode (b).
Figure 11. Diagram of the experimental equipment for corona discharge observation.
Figure 12. Light intensity spectrum of the corona discharge.
Figure 13. Images of corona discharge taken by an ordinary CIS (a) and the OPF imager (b).
20 pages, 2049 KiB  
Article
A Survey of Data Semantization in Internet of Things
by Feifei Shi, Qingjuan Li, Tao Zhu and Huansheng Ning
Sensors 2018, 18(1), 313; https://doi.org/10.3390/s18010313 - 22 Jan 2018
Cited by 72 | Viewed by 9278
Abstract
With the development of the Internet of Things (IoT), more and more sensors, actuators, and mobile devices have been deployed into our daily lives. As a result, tremendous amounts of data are produced, and it is urgent to dig out the hidden information behind these voluminous data. However, IoT data generated by multi-modal sensors or devices show great differences in format, domain, and type, which poses challenges for machines to process and understand them. Therefore, adding semantics to the Internet of Things has become a prevailing trend. This paper provides a systematic review of data semantization in IoT, including its background, processing flows, prevalent techniques, applications, existing challenges, and open issues. It surveys the state of development of adding semantics to IoT data, mainly sensor data, and points out current issues and challenges that merit further study. Full article
(This article belongs to the Special Issue Sensing, Data Analysis and Platforms for Ubiquitous Intelligence)
Figure 1. The system architecture for adding semantics.
Figure 2. Relationships between the sub-languages of OWL 2.
20 pages, 7231 KiB  
Article
Dual Channel S-Band Frequency Modulated Continuous Wave Through-Wall Radar Imaging
by Ying-Chun Li, Daegun Oh, Sunwoo Kim and Jong-Wha Chong
Sensors 2018, 18(1), 311; https://doi.org/10.3390/s18010311 - 22 Jan 2018
Cited by 19 | Viewed by 6383
Abstract
This article deals with the development of a dual-channel S-band frequency-modulated continuous wave (FMCW) system for through-the-wall radar imaging (TWRI). Most existing TWRI systems using FMCW were developed for synthetic aperture radar (SAR), which has drawbacks such as the need for several antenna elements and for movement of the system. Our implemented TWRI system comprises one transmitting antenna and two receiving antennas, significantly reducing the number of antenna elements. Moreover, a proposed algorithm for range-angle-Doppler 3D estimation based on a 3D shift-invariant structure is utilized in the implemented dual-channel S-band FMCW TWRI system. Indoor and outdoor experiments were conducted to image the scene beyond a wall for water targets and person targets, respectively. The experimental results demonstrate that high-quality imaging can be achieved under both experimental scenarios. Full article
(This article belongs to the Section Remote Sensors)
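Underlying any FMCW radar of this kind is the standard relation between the beat frequency of the de-chirped echo and target range: a stationary target at range R produces a beat tone f_b = 2RB/(cT) for sweep bandwidth B and sweep time T. A minimal sketch with illustrative sweep parameters (not the implemented system's):

```python
def beat_to_range(f_beat_hz, sweep_bandwidth_hz, sweep_time_s, c=3.0e8):
    """Invert the standard FMCW relation f_b = 2*R*B / (c*T) to get
    R = c*T*f_b / (2*B) for a stationary target."""
    return c * sweep_time_s * f_beat_hz / (2.0 * sweep_bandwidth_hz)

# Example (illustrative parameters): a 500 MHz sweep over 1 ms;
# a 100 kHz beat tone then corresponds to a target at 30 m.
r = beat_to_range(100e3, 500e6, 1e-3)  # metres
```

Angle and Doppler are then obtained from the phase progression across the two receive channels and across successive sweeps, which is where the paper's 3D shift-invariant structure comes in.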
Figure 1. Geometry of the TWRI system: (a) practical model of wave propagation; (b) simplified model for the proposed algorithm.
Figure 2. Illustration of high-pass filtering: (a) beat signals before filtering; (b) filtered beat signals.
Figure 3. TWRI SAR geometry in [31].
Figure 4. 3D shift-invariant structure.
Figure 5. Block diagram of the S-band FMCW TWRI radar system.
Figure 6. Bandpass filter: (a) image of the real product; (b) filter response.
Figure 7. (a) Standard horn antenna. (b) Antenna array. (c) Pattern at 3 GHz: H-plane with 3 dB beamwidth 43.51° and E-plane with 3 dB beamwidth 51.32°.
Figure 8. Outward appearance of the implemented S-band FMCW TWRI radar system.
Figure 9. Inward appearance of the implemented S-band FMCW TWRI radar system.
Figure 10. FPGA and DSP board.
Figure 11. (a) Imaged indoor scene; (b) value of M obtained by MDL over 100 experiment trials; (c) one frame of the detection results obtained when M̂ = 6.
Figure 12. (a) Imaged outdoor scene: water target and TWRI system. (b) One frame of the detection results for multiple targets (water blocks) behind the wall when M̂ = 5. (c) One frame of the range-Doppler maps, showing the Doppler frequency.
Figure 13. (a) Imaged scene for person detection. (b) One frame of the detection results when M̂ = 8. (c) One frame of the range-Doppler maps, showing the Doppler frequency.
10 pages, 1932 KiB  
Article
Online Removal of Baseline Shift with a Polynomial Function for Hemodynamic Monitoring Using Near-Infrared Spectroscopy
by Ke Zhao, Yaoyao Ji, Yan Li and Ting Li
Sensors 2018, 18(1), 312; https://doi.org/10.3390/s18010312 - 21 Jan 2018
Cited by 17 | Viewed by 5818
Abstract
Near-infrared spectroscopy (NIRS) has become widely accepted as a valuable tool for noninvasively monitoring hemodynamics for clinical and diagnostic purposes. Baseline shift has attracted great attention in the field, but there has been little quantitative study on baseline removal. Here, we aimed to study the baseline characteristics of an in-house-built portable medical NIRS device over a long duration (>3.5 h). We found that the measured baselines were all well described by polynomial functions in phantom tests mimicking human tissue, consistent with recent NIRS studies. More importantly, among the second- to sixth-order polynomials evaluated (by R-square, sum of squares due to error, and residual), the fourth-order polynomial gave the best performance, with stable, low-computation fitting calibration (R-square > 0.99 for all probes). This study provides a straightforward, efficient, and quantitatively evaluated solution for online baseline removal in hemodynamic monitoring with NIRS devices. Full article
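The detrending step described above, fitting a fourth-order polynomial to the slow baseline drift and subtracting it, can be sketched on synthetic drift data (the quartic baseline and oscillation below are invented, not the device's measurements):

```python
import numpy as np

def remove_baseline(t, signal, order=4):
    """Fit an order-`order` polynomial to the signal and subtract it,
    returning the detrended signal and the R-square of the fit.
    Fourth order was the paper's preferred choice."""
    coeffs = np.polyfit(t, signal, order)
    baseline_fit = np.polyval(coeffs, t)
    resid = signal - baseline_fit
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((signal - signal.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return resid, r2

# Synthetic >3.5 h recording: a quartic drift plus a small oscillation
# standing in for the hemodynamic signal of interest.
t = np.linspace(0.0, 3.5, 500)                     # hours
drift = 0.02 * t**4 - 0.1 * t**2 + 0.5 * t
signal = drift + 0.01 * np.sin(40 * t)
detrended, r2 = remove_baseline(t, signal, order=4)
```

Because the synthetic drift is exactly quartic, the fit absorbs it almost entirely, leaving the small oscillation as the detrended signal; on real data the choice of order trades fit quality against the risk of absorbing genuine hemodynamic variation.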
Figure 1. Near-infrared spectroscopy (NIRS) device. (a) The three parts of the device: the software on a computer, the three probes, and the functional module; (b) a probe with one light-emitting diode (LED) source and two sensors; (c) the device with a probe attached to the solid phantom; (d) the device working with the probe and phantom inside a fully dark box.
Figure 2. Comparison of goodness of fit among second-, fourth-, and fifth-order polynomials for Probe 3, Channel 1. HbO₂: oxygenated hemoglobin; Hb: deoxygenated hemoglobin; tHb: total hemoglobin; fit: fitted data.
Figure 3. Hemodynamic parameters measured without and with calibration for a typical subject, using fourth-order polynomial detrending.
Figure 4. Thermal changes in a typical LED source and photo sensor, Probe 3, Channel 1.
15 pages, 3428 KiB  
Article
Textile Concentric Ring Electrodes for ECG Recording Based on Screen-Printing Technology
by José Vicente Lidón-Roger, Gema Prats-Boluda, Yiyao Ye-Lin, Javier Garcia-Casado and Eduardo Garcia-Breijo
Sensors 2018, 18(1), 300; https://doi.org/10.3390/s18010300 - 21 Jan 2018
Cited by 29 | Viewed by 8671
Abstract
Among the many electrode designs used in electrocardiography (ECG), concentric ring electrodes (CREs) are one of the most promising due to their enhanced spatial resolution. Their development has received a great push in recent years; however, they are not yet widely used in clinical practice. Implementing CREs in textiles will lead to low-cost, flexible, comfortable, and robust electrodes capable of detecting high-spatial-resolution ECG signals. A textile CRE set was designed and developed using screen-printing technology. This is a mature technology in the textile industry and therefore does not require heavy investment. The inks employed as conductive elements were silver and a conducting polymer (poly(3,4-ethylenedioxythiophene) polystyrene sulfonate; PEDOT:PSS). Conducting polymers offer biocompatibility advantages, can be used with flexible substrates, and are available for several printing technologies. CREs implemented with both inks were compared by analyzing their electrical features and their performance in detecting ECG signals. The results reveal that silver CREs present a higher average thickness and slightly lower skin-electrode impedance than PEDOT:PSS CREs. As for ECG recordings with subjects at rest, both CREs allowed the acquisition of bipolar concentric ECG signals (BC-ECG) with signal-to-noise ratios similar to those of conventional ECG recordings. Regarding the saturations and alterations of ECGs captured with textile CREs caused by intentional subject movements, silver CREs presented a more stable response (fewer saturations and alterations) than PEDOT:PSS CREs. Moreover, BC-ECG signals provided higher spatial resolution than conventional ECG, manifested in the identification of the P1 and P2 waves of atrial activity in most of the BC-ECG signals. It can be concluded that textile silver CREs are more suitable than PEDOT:PSS CREs for obtaining BC-ECG records. These developed textile electrodes bring the use of CREs closer to the clinical environment. Full article
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Spain 2017)
Figure 1
<p>(<b>a</b>) Graphic representation of a concentric ring electrode (CRE); (<b>b</b>) CRE locations coincide as far as possible with the precordial registration positions CMV1 and CMV2.</p>
Figure 2
<p>Screen patterns used. (<b>a</b>) The first layer corresponds to the disc electrode (conductor layer); (<b>b</b>) dielectric that insulates the connection line that joins the inner disc to the connector; (<b>c</b>) concentric ring electrode is implemented in the third layer (conductor layer); (<b>d</b>) fourth layer, dielectric, insulates the connection line that joins the concentric ring electrode to the connector and the skin.</p>
Figure 3
<p>Photograph of the CRE set implemented: (<b>a</b>) corresponds to the silver electrode; (<b>b</b>) corresponds to the PEDOT:PSS (poly(3,4-ethylenedioxythiophene) polystyrene sulfonate) electrode; and (<b>c</b>) CRE integrated with an adjustable belt.</p>
Figure 4
<p>Attachment of the concentric ring electrode to the chest for obtaining two BC-ECG recordings simultaneously. M1: inner disc of the patient’s right electrode; M2: outer ring of the patient’s right electrode; M3: inner disc of the patient’s left electrode; and M4: outer ring of the patient’s left electrode.</p>
Figure 5
<p>(<b>a</b>) Detail of the PEDOT:PSS on the substrate; the PEDOT:PSS is embedded in the fabric pattern; (<b>b</b>) Detail of the Ag on the substrate.</p>
Figure 6
<p>(<b>a</b>) Thickness of the PEDOT:PSS on the substrate (view A–B from <a href="#sensors-18-00300-f005" class="html-fig">Figure 5</a>a); (<b>b</b>) thickness of the Ag on the substrate (view A’–B’ from <a href="#sensors-18-00300-f005" class="html-fig">Figure 5</a>b).</p>
Figure 7
<p>External ring: (<b>a</b>) electrode–skin impedance (pole-to-pole) magnitude and (<b>b</b>) phase angle.</p>
Figure 8
<p>Five seconds of raw ECG signals, each with its corresponding averaged beat at the right. (<b>a.1</b>) BC1-ECG acquired with the silver CRE at the right position (CMV1). (<b>b.1</b>) BC2-ECG acquired with the silver CRE at the left position. (<b>d.1</b>) BC1-ECG acquired with the PEDOT:PSS CRE in the right position (CMV1). (<b>e.1</b>) BC2-ECG acquired with the PEDOT:PSS CRE in the left position. (<b>c.1</b>,<b>f.1</b>) Standard lead II simultaneously recorded with the two BC-ECG signals sensed by the silver and PEDOT:PSS CREs, respectively. (<b>a.2</b>–<b>f.2</b>) Averaged beats of the BC-ECG signals shown in traces (<b>a.1</b>–<b>f.1</b>), respectively.</p>
11 pages, 4258 KiB  
Article
A Circular Microstrip Antenna Sensor for Direction Sensitive Strain Evaluation
by Przemyslaw Lopato and Michal Herbko
Sensors 2018, 18(1), 310; https://doi.org/10.3390/s18010310 - 20 Jan 2018
Cited by 58 | Viewed by 7854
Abstract
In this paper, a circular microstrip antenna for stress evaluation is studied. This kind of microstrip sensor can be utilized in structural health monitoring systems. The reflection coefficient S11 is measured to determine the deformation/strain value. The proposed sensor is adhesively bonded to the studied sample. Applied strain changes the patch geometry and influences the current distribution in both the patch and the ground plane; the altered current flow in the patch shifts the resonant frequency. Two different resonant frequencies were analysed because each exhibits a different current distribution in the patch. The sensor was designed for an operating frequency of 2.5 GHz (at the fundamental mode), which results in a diameter of less than 55 mm. The obtained sensitivity was up to 1 MHz/100 MPa; the resolution depends on the vector network analyser used. Moreover, the directional characteristics for both resonant frequencies were defined, studied using a numerical model, and verified by measurements. Thus far, microstrip antennas have been used in deformation measurement only when the direction of the external force was well known. The obtained directional characteristics of the sensor allow the direction and value of stress to be determined with a single sensor. This measurement method can be an alternative to the rosette strain gauge. Full article
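For orientation, the fundamental (TM11) resonance of a circular patch follows the standard cavity-model formula f_r = 1.8412·c / (2π·a·√εr). The sketch below evaluates it together with the first-order strain-induced shift; the radius and permittivity are illustrative assumptions, not the paper's design values.

```python
import math

K11 = 1.8412           # first zero of J1', which sets the TM11 mode
C = 299792458.0        # speed of light (m/s)

def circular_patch_fr(radius_m, eps_r):
    """Cavity-model resonant frequency of a circular microstrip
    patch (fringing fields neglected)."""
    return K11 * C / (2.0 * math.pi * radius_m * math.sqrt(eps_r))

def strained_fr(fr_hz, strain):
    """First-order shift: tensile strain enlarges the effective
    radius, lowering the resonance to roughly fr / (1 + strain)."""
    return fr_hz / (1.0 + strain)

# Hypothetical ~16.8 mm patch radius on an FR4-like substrate
fr = circular_patch_fr(0.0168, 4.4)
```

With these assumed values the model lands near the 2.5 GHz operating point quoted in the abstract, and any tensile strain shifts the resonance downward, which is the quantity the sensor reads out.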
(This article belongs to the Special Issue Small Devices and the High-Tech Society)
Figure 1
<p>View and dimensions (in mm) of the designed sensor.</p>
Figure 2
<p>Finite Element Method (FEM) numerical model; (<b>a</b>) entire numerical model for determination of the directional characteristics of the microstrip sensor; (<b>b</b>) top view of the studied sample; (<b>c</b>) bottom view of the tested sample.</p>
Figure 3
<p>Reflection coefficient of the designed sensor.</p>
Figure 4
<p>Current distribution and density in the patch; (<b>a</b>) first resonant frequency <span class="html-italic">f<sub>r1</sub></span>; (<b>b</b>) second resonant frequency <span class="html-italic">f<sub>r2</sub></span>.</p>
Figure 5
<p>Directional characteristics of the resonant frequency shift caused by a stress of 350 MPa.</p>
Figure 6
<p>Directional characteristics of the resonant frequency shift for different stress levels: (<b>a</b>) first resonant frequency; (<b>b</b>) second resonant frequency.</p>
Figure 7
<p>Shift of the resonant frequencies for different strain angles (simulation results): (<b>a</b>) first resonant frequency; (<b>b</b>) second resonant frequency.</p>
Figure 8
<p>Simplified scheme of the proposed measurement system.</p>
Figure 9
<p>Photo of the measurement system for evaluation of deformation produced under static loading conditions.</p>
Figure 10
<p>Shift of the resonant frequencies for different strain angles (experimental results): (<b>a</b>) first resonant frequency; (<b>b</b>) second resonant frequency.</p>
14 pages, 4686 KiB  
Article
Overhead Transmission Line Sag Estimation Using a Simple Optomechanical System with Chirped Fiber Bragg Gratings. Part 1: Preliminary Measurements
by Michal Wydra, Piotr Kisala, Damian Harasim and Piotr Kacejko
Sensors 2018, 18(1), 309; https://doi.org/10.3390/s18010309 - 20 Jan 2018
Cited by 63 | Viewed by 7554
Abstract
A method of measuring power line wire sag using optical sensors that are insensitive to high electromagnetic fields was proposed. The advantage of this technique is that it is a non-invasive measurement of power line wire elongation using a unique optomechanical system. The proposed method converts the sag of the power line wire into an elongation of a control sample and, in turn, an expansion of the attached chirped fiber Bragg grating. This paper presents the results of the first measurements made on a real aluminum-conductor steel-reinforced (ACSR) wire, frequently used in power line construction. It has been shown that the proper selection of the CFBG (chirped fiber Bragg grating) transducer and the appropriate choice of the optical parameters of such a sensor allow high sensitivity to line wire elongation and sag while reducing the sensitivity to temperature. It has been shown that, with a simple optomechanical system, a non-invasive measurement of power line wire sag that is insensitive to temperature changes and to high electromagnetic fields can be achieved. Full article
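The link between the measured wire elongation and the sag can be illustrated with the standard parabolic span approximation D ≈ √(3·S·(L − S)/8), where S is the span and L the conductor length. This is a generic textbook sketch, not the calibration derived in the paper, and the span and elongation values are invented.

```python
import math

def sag_from_length(span_m, conductor_len_m):
    """Parabolic approximation of mid-span sag from the span S and
    the conductor length L: D = sqrt(3 * S * (L - S) / 8)."""
    return math.sqrt(3.0 * span_m * (conductor_len_m - span_m) / 8.0)

# A CFBG-measured relative elongation maps to a new conductor
# length and hence a new sag (illustrative 300 m span).
span = 300.0
sag_before = sag_from_length(span, 300.30)
sag_after = sag_from_length(span, 300.30 * (1 + 1e-4))  # +0.01% elongation
```

Even a 0.01% relative elongation produces a clearly measurable sag increase, which is why a small CFBG spectral-width change can track the sag of a whole span.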
(This article belongs to the Special Issue Optical Fiber Sensors 2017)
Figure 1
<p>The overhead transmission line wire sag measurement system, where: 1: the ACSR 26/7 Hawk conductor; 2: the measuring clamps/sensing head; 3a: the optical fiber; 3b: the inscribed chirped fiber Bragg grating; 4: the light source with a stabilized super-luminescent diode; 5: the optical circulator; 6: the optical spectrum analyzer; and 7: the computer/gateway.</p>
Figure 2
<p>(<b>a</b>) the optomechanical system using a chirped fiber Bragg grating and a mechanical strain transformer, where: 1: the ACSR 26/7 (Hawk) conductor; 2a: the optical fiber; 2b: the chirped fiber Bragg grating; 3: the thinned steel plate; 4: the semi-circular screwed clamps; <span class="html-italic">l<sub>ref</sub></span>: reference sensing head length; and (<b>b</b>) the strain distribution in the proposed steel testing plate while extending the measurement section.</p>
Figure 3
<p>The cross-section of the sensing head of the power line sag measurement system, where 1, 2a, 3, and 4 are elements equivalent to those shown in <a href="#sensors-18-00309-f002" class="html-fig">Figure 2</a>.</p>
Figure 4
<p>The conductor length, sag, clearance, and tension in a transmission line span.</p>
Figure 5
<p>The sag estimation process, reproduced from [<a href="#B26-sensors-18-00309" class="html-bibr">26</a>].</p>
Figure 6
<p>The CFBG-based optomechanical system for power line sag estimation in real conditions.</p>
Figure 7
<p>(<b>a</b>) a comparison of the spectra measured for the CFBG mounted in the proposed OTL wire monitoring system for 10 °C and 90 °C surrounding temperatures; (<b>b</b>) a comparison of the spectra measured for the CFBG: FWHM_1 refers to the measurement directly after mounting the system on the wire and FWHM_2 refers to the measurement after 0.1% relative elongation.</p>
Figure 8
<p>The comparison of <span class="html-italic">SSH</span> dependency on (<b>a</b>) the elongation of the testing plate; and (<b>b</b>) the changes of the CFBG sensor surrounding temperature.</p>
Figure 9
<p>The characteristics of the <span class="html-italic">FWHM</span> spectral width dependency on (<b>a</b>) the relative elongation; and (<b>b</b>) the ambient temperature of the proposed system.</p>
Figure 10
<p>(<b>a</b>) The dependency of the OTL wire sag related to the elongation of the proposed measurement system plate, calculated for the initial conditions described in Equations (21) and (22); and (<b>b</b>) the dependency of the measured <span class="html-italic">FWHM</span> for different sag values of the monitored OTL line.</p>
Figure 11
<p>The dependency of the OTL wire sag as a function of the measured <span class="html-italic">FWHM</span>.</p>
12 pages, 2947 KiB  
Communication
Development of a Label-Free Immunosensor for Clusterin Detection as an Alzheimer’s Biomarker
by Kamrul Islam, Samar Damiati, Jagriti Sethi, Ahmed Suhail and Genhua Pan
Sensors 2018, 18(1), 308; https://doi.org/10.3390/s18010308 - 20 Jan 2018
Cited by 29 | Viewed by 8141
Abstract
Clusterin (CLU) has been associated with the clinical progression of Alzheimer’s disease (AD) and described as a potential AD biomarker in blood plasma. Because of the enormous attention given to cerebrospinal fluid (CSF) biomarkers over the past couple of decades, recently found blood-based AD biomarkers like CLU have not yet been reported for biosensors. Herein, we report the electrochemical detection of CLU for the first time, using a screen-printed carbon electrode (SPCE) modified with 1-pyrenebutyric acid N-hydroxysuccinimide ester (Pyr-NHS) and decorated with specific anti-CLU antibody fragments. This bifunctional linker molecule contains an N-hydroxysuccinimide ester that binds protein at one end, while its pyrene moiety attaches to the carbon surface by means of π-π stacking. Cyclic voltammetric and square wave voltammetric studies showed a limit of detection down to 1 pg/mL and a linear concentration range of 1–100 pg/mL with good sensitivity. Detection of CLU in spiked human plasma was demonstrated, with recovery percentages in satisfactory agreement with the calibration data. The proposed method facilitates the cost-effective and viable production of label-free point-of-care devices for the clinical diagnosis of AD. Full article
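A common way to express such a detection limit from voltammetric calibration data is the 3.3·σ/slope convention over the linear range. The sketch below applies it to made-up numbers purely for illustration; the concentrations, currents, and blank noise are ours, not the paper's data, and the paper may use a different LOD criterion.

```python
import numpy as np

def calibration_lod(conc_pg_ml, peak_current, sigma_blank):
    """Fit the linear range of a calibration curve and estimate
    the limit of detection as 3.3 * sigma_blank / slope."""
    slope, intercept = np.polyfit(conc_pg_ml, peak_current, 1)
    return slope, intercept, 3.3 * sigma_blank / abs(slope)

conc = np.array([1.0, 10.0, 50.0, 100.0])   # pg/mL (hypothetical)
current = 2.0 * conc + 5.0                  # idealized linear response (uA)
slope, intercept, lod = calibration_lod(conc, current, sigma_blank=0.6)
```

With this invented blank noise and slope, the estimated LOD comes out near 1 pg/mL, the order of magnitude reported in the abstract.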
(This article belongs to the Special Issue Label-Free Biosensors)
Figure 1
<p>Schematic illustration of the electrochemical detection system. (<b>a</b>) The screen-printed carbon electrode (SPCE); (<b>b</b>–<b>e</b>) surface modification with the linker, F(ab’)<sub>2</sub> fragments of the CLU antibody (Anti-CLU F(ab’)<sub>2</sub>), bovine serum albumin (BSA), and CLU. Pyr-NHS: 1-pyrenebutyric acid <span class="html-italic">N</span>-hydroxysuccinimide ester.</p>
Figure 2
<p>SDS-PAGE analysis (12% gel; non-reducing conditions) of the full-length CLU antibody and its F(ab’)<sub>2</sub> fragments: column 1 is the molecular weight marker; column 2 is the F(ab’)<sub>2</sub> fragments of the CLU antibody; column 3 is the digest fragments; and columns 4–5 are the full-length Anti-CLU IgG.</p>
Figure 3
<p>Cyclic voltammetry (CV) performed with the 10 mM [Fe(CN)<sub>6</sub>]<sup>3−/4−</sup> system (1:1) and 100 mM KCl at a 0.05 V/s scan rate at the bare carbon, Pyr-NHS linker, Anti-CLU F(ab’)<sub>2</sub>, and BSA-coated SPC electrodes.</p>
Figure 4
<p>(<b>a</b>) Cyclic voltammograms of C/Pyr-NHS/Anti-CLU F(ab’)<sub>2</sub>/BSA in [Fe(CN)<sub>6</sub>]<sup>3−/4−</sup> solution containing KCl with scan rates from 10 to 100 mV/s; (<b>b</b>) dependence of the redox peak currents on the scan rate.</p>
Figure 5
<p>(<b>a</b>) SWV peaks of the C/Pyr-NHS/Anti-CLU F(ab’)<sub>2</sub>/BSA/CLU-modified electrode in the presence of different concentrations of CLU (0–150 pg/mL); (<b>b</b>) exponential rise-to-maximum fit curve with regression analysis and limit of detection (LOD). Inset: linear fit curve with regression analysis and LOD.</p>
Figure 6
<p>SWV of the negative control experiment with CLU and insulin.</p>
10 pages, 2315 KiB  
Article
An Antibody-Immobilized Silica Inverse Opal Nanostructure for Label-Free Optical Biosensors
by Wang Sik Lee, Taejoon Kang, Shin-Hyun Kim and Jinyoung Jeong
Sensors 2018, 18(1), 307; https://doi.org/10.3390/s18010307 - 20 Jan 2018
Cited by 59 | Viewed by 10051
Abstract
Three-dimensional SiO2-based inverse opal (SiO2-IO) nanostructures were prepared for use as biosensors. SiO2-IO was fabricated by vertical deposition and calcination processes. Antibodies were immobilized on the surface of SiO2-IO using 3-aminopropyl trimethoxysilane (APTMS), a succinimidyl-[(N-maleimidopropionamido)-tetraethyleneglycol] ester (NHS-PEG4-maleimide) cross-linker, and protein G. The highly accessible surface and porous structure of SiO2-IO were beneficial for capturing influenza viruses on the antibody-immobilized surfaces. Moreover, as the binding leads to a redshift of the reflectance peak, the influenza virus could be detected by simply monitoring the change in the reflectance spectrum, without labeling. SiO2-IO showed high sensitivity in the range of 10^3–10^5 plaque-forming units (PFU) and high specificity to the influenza A (H1N1) virus. Owing to its structural and optical properties, SiO2-IO is a promising material for the detection of the influenza virus. Our study provides a generalized sensing platform for biohazards, as various sensing strategies can be employed through the surface functionalization of three-dimensional nanostructures. Full article
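The redshift-on-binding readout can be rationalized with the Bragg-Snell relation λ = 2·d111·√(n_eff² − sin²θ), where n_eff² is the volume-weighted average of the squared refractive indices of the skeleton and the pores: anything bound inside the pores raises n_eff and shifts the peak to longer wavelengths. The sketch below is generic; the lattice spacing, solid fraction, and index values are illustrative assumptions, not the paper's measured parameters.

```python
import math

def effective_index(f_solid, n_solid, n_pore):
    """Volume-weighted effective refractive index of an inverse opal."""
    return math.sqrt(f_solid * n_solid**2 + (1.0 - f_solid) * n_pore**2)

def bragg_peak_nm(d111_nm, n_eff, theta_rad=0.0):
    """Bragg-Snell reflectance peak position at incidence angle theta."""
    return 2.0 * d111_nm * math.sqrt(n_eff**2 - math.sin(theta_rad)**2)

F_SILICA = 0.26   # solid fraction of an ideal FCC inverse opal (assumed)
peak_empty = bragg_peak_nm(230.0, effective_index(F_SILICA, 1.45, 1.00))
peak_bound = bragg_peak_nm(230.0, effective_index(F_SILICA, 1.45, 1.10))
```

Raising the average pore index from 1.00 to 1.10 (mimicking bound material) moves the peak tens of nanometers to the red, which is the label-free signal the sensor monitors.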
(This article belongs to the Special Issue Nanostructured Hybrid Materials Based Opto-Electronics Sensors)
Graphical abstract
Figure 1
<p>(<b>a</b>) Schematic illustration showing the fabrication procedure of the inverse opal (IO) nanostructure. SEM images of (<b>b</b>) the top surface of the opal; (<b>c</b>) the top surface of the IO nanostructure; and (<b>d</b>) the cross-section of the IO nanostructure.</p>
Figure 2
<p>(<b>a</b>) Reflectance spectra of the opal and IO nanostructures. Insets are the corresponding optical microscope images; (<b>b</b>) Optical images showing contact angles of a water drop on the opal (top panel) and IO (bottom panel) nanostructures.</p>
Figure 3
<p>(<b>a</b>) Schematic illustration showing the molecular structures formed by the surface functionalization on the IO nanostructure for binding the H1N1 subtype; (<b>b</b>) Reflectance peak positions for the pristine, APTMS-treated, NHS-PEG<sub>4</sub>-maleimide cross-linker-treated, and Cys-ProG-antibody-immobilized IOs. Inset shows reflectance spectra for all four samples. APTMS: 3-aminopropyl trimethoxysilane.</p>
Figure 4
<p>(<b>a</b>) The magnitude of the reflectance peak shift as a function of H1N1 subtype concentration, where the concentration was varied in the range of 10<sup>3</sup> to 10<sup>5</sup> PFU in 10 μL (<span class="html-italic">n</span> = 3). Phosphate-buffered saline (PBS) buffer solution is used as the control; (<b>b</b>) The magnitude of the reflectance peak shift depending on the type of virus, where the concentration was set to 10<sup>4</sup> PFU (<span class="html-italic">n</span> = 3) for influenza A virus subtypes H3N2 and H1N1, as well as the influenza B virus (IFVB).</p>
Figure 5
<p>(<b>a</b>–<b>d</b>) SEM images showing the top surface of the IO nanostructure, where the IO is treated with no virus (<b>a</b>), or 10<sup>3</sup> PFU (<b>b</b>), 10<sup>4</sup> PFU (<b>c</b>), or 10<sup>5</sup> PFU of the H1N1 subtype (<b>d</b>). Scale bars denote 500 nm.</p>
25 pages, 5500 KiB  
Article
L-Tree: A Local-Area-Learning-Based Tree Induction Algorithm for Image Classification
by Jaesung Choi, Eungyeol Song and Sangyoun Lee
Sensors 2018, 18(1), 306; https://doi.org/10.3390/s18010306 - 20 Jan 2018
Cited by 6 | Viewed by 5997
Abstract
The decision tree is one of the most effective tools for deriving meaningful outcomes from image data acquired by visual sensors. Owing to its reliability, superior generalization ability, and easy implementation, the tree model has been widely used in various applications. However, in image classification problems, conventional tree methods use only a few sparse attributes as the splitting criterion. Consequently, they suffer from several drawbacks in terms of performance and environmental sensitivity. To overcome these limitations, this paper introduces a new tree induction algorithm that classifies images on the basis of local area learning. To train our predictive model, we extract a random local area within the image and use it as a feature for classification. In addition, the self-organizing map (SOM), a clustering technique, is used for node learning. We also adopt a random sampled optimization technique to search for the optimal node. Finally, each trained node stores the weights that represent the training data and the class probabilities. Thus, a recursively trained tree classifies the data hierarchically, based on the local similarity at each node. The proposed tree is a type of predictive model that offers benefits in terms of the image’s semantic energy conservation compared with conventional tree methods. Consequently, it exhibits improved performance under various conditions, such as noise and illumination changes. Moreover, the proposed algorithm can improve the generalization ability owing to its randomness. In addition, it can easily be applied to ensemble techniques. To evaluate the performance of the proposed algorithm, we perform quantitative and qualitative comparisons with various tree-based methods using four image datasets. The results show that our algorithm not only achieves a lower classification error than the conventional methods but also exhibits stable performance even under unfavorable conditions such as noise and illumination changes. Full article
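The node learning mentioned in the abstract relies on the standard self-organizing map update, w ← w + η·h(b, i)·(x − w), where b is the best-matching unit (BMU) and h a neighborhood function. A generic one-step sketch of that update (our own simplification, not the authors' implementation) looks like:

```python
import numpy as np

def som_step(weights, x, lr=0.5, sigma=1.0):
    """One SOM update: find the best-matching unit (BMU) and pull
    every neuron toward the input, weighted by a Gaussian
    neighborhood around the BMU index."""
    dists = np.linalg.norm(weights - x, axis=1)
    bmu = int(np.argmin(dists))
    idx = np.arange(len(weights))
    h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma**2))
    return weights + lr * h[:, None] * (x - weights), bmu

rng = np.random.default_rng(0)
w = rng.random((8, 4))     # 8 neurons, 4-dimensional features
x = rng.random(4)          # one training sample (e.g., a local image patch)
w_new, bmu = som_step(w, x)
```

After a single step the BMU's weight vector moves strictly closer to the input, which is how each tree node comes to represent the local areas routed to it.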
(This article belongs to the Special Issue Smart Decision-Making)
Figure 1
<p>An example showing the disadvantages of conventional tree models. A trained tree that is strongly dependent on the splitting attribute is vulnerable to noise or illumination changes.</p>
Figure 2
<p>Overall flowchart of our tree induction algorithm.</p>
Figure 3
<p>Node learning process of the L-Tree.</p>
Figure 4
<p>Visualization of the weights at the root node using the GTSRB dataset. The initial weights of each neuron are learned with iteration processing.</p>
Figure 5
<p>Visualization of the SOM weights at the root node with different scale values.</p>
Figure 6
<p>Visualization of a node in one Opt-L-Tree at different depths on the GTSRB dataset.</p>
Figure 7
<p>Four different datasets for evaluation: (<b>a</b>) MNIST dataset; (<b>b</b>) GTSRB dataset; (<b>c</b>) DMPB dataset; and (<b>d</b>) Caltech101 dataset.</p>
Figure 8
<p>Images under different conditions with variations of Gaussian sigma and brightness.</p>
Figure 9
<p>CAR comparison with other algorithms in noisy conditions for four different datasets.</p>
Figure 10
<p>CAR comparison with other algorithms under illumination changes for four different datasets.</p>
Figure 11
<p>Classification results of a single L-Tree and Opt-L-Tree under different parameter changes. All values except <math display="inline"> <semantics> <mi>τ</mi> </semantics> </math> have a significant effect on the result, and they all determine the complexity of the model of the generated tree.</p>
Figure 12
<p><math display="inline"> <semantics> <mrow> <mi>N</mi> <mi>S</mi> </mrow> </semantics> </math> and <math display="inline"> <semantics> <mrow> <mi>E</mi> <mi>D</mi> </mrow> </semantics> </math> results of a single L-Tree with variations of the optimization iteration.</p>
Figure 13
<p>The CER comparison under various bagging L-Tree conditions with changes to the scale factor, iteration number, number of neurons, and normalization.</p>
Figure A1
<p>The CAR comparison with other ensemble algorithms by increasing the number of fully-developed trees on the MNIST dataset.</p>
Figure A2
<p>Visualization of a node in one Opt-L-Tree at different depths on (<b>a</b>) the MNIST dataset; (<b>b</b>) the DMPB dataset; and (<b>c</b>) the Caltech101 dataset.</p>
13 pages, 5392 KiB  
Article
A 750 K Photocharge Linear Full Well in a 3.2 μm HDR Pixel with Complementary Carrier Collection
by Frédéric Lalanne, Pierre Malinge, Didier Hérault, Clémence Jamin-Mornet and Nicolas Virollet
Sensors 2018, 18(1), 305; https://doi.org/10.3390/s18010305 - 20 Jan 2018
Cited by 9 | Viewed by 7712
Abstract
Mainly driven by automotive applications, there is increasing interest in image sensors combining a high dynamic range (HDR) with immunity to the flicker issue. The native HDR pixel concept, based on parallel electron and hole collection for low and high signal levels, respectively, is particularly well suited to this performance challenge. The theoretical performance of this pixel is modeled and compared to alternative HDR pixel architectures. The concept is proven with the fabrication of a 3.2 μm pixel in a back-side illuminated (BSI) process including capacitive deep trench isolation (CDTI). The electron-based image uses a standard 4T architecture with a pinned diode and provides state-of-the-art low-light performance, which is not altered by the pixel modifications introduced for hole collection. The hole-based image reaches a 750 kh+ linear storage capability thanks to a 73 fF CDTI capacitor. Both images are taken from the same integration window, so the HDR reconstruction is immune not only to the flicker issue but also to motion artifacts. Full article
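Conceptually, merging the two simultaneous images amounts to using the low-noise electron signal until it approaches full well, then switching to the hole signal rescaled into the same units. The toy per-pixel sketch below makes that switch explicit; the saturation threshold and gain ratio are invented for illustration and are not the paper's reconstruction.

```python
def hdr_merge(e_signal, h_signal, e_sat=30000.0, gain_ratio=25.0):
    """Per-pixel HDR reconstruction from the electron image (low
    light) and the hole image (high light), both taken over the
    same integration window."""
    if e_signal < e_sat:
        return e_signal          # low light: keep the low-noise channel
    return h_signal * gain_ratio  # high light: rescaled hole channel

low_light = hdr_merge(1200.0, 50.0)
high_light = hdr_merge(30000.0, 30000.0)
```

Because both channels integrate over the identical window, the switch point introduces no temporal seam, which is what makes the scheme flicker- and motion-robust.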
(This article belongs to the Special Issue Special Issue on the 2017 International Image Sensor Workshop (IISW))
Figure 1
<p>Pixel schematic. TG: transfer gate; RST: reset transistor in high CG mode; CDTI: capacitive deep trench isolation; VRT: positive supply voltage; SWRST: reset transistor in low CG mode; VCDTI: CDTI biasing voltage.</p>
Figure 2
<p>Schematic cross-section of the pixel.</p>
Figure 3
<p>Pixel timing. (<b>a</b>) Full timing for low conversion gain (LCG) mode; (<b>b</b>) readout timing with double conversion gain mode for electrons. HCG: high conversion gain.</p>
Figure 4
<p>Pixel simulation. Signal: 30 k (<b>a</b>); 400 k (<b>b</b>). SN: sense node.</p>
Figure 5
<p>Architecture benchmark for a 1 Me− equivalent full well pixel, simplified model. SNR: signal-to-noise ratio.</p>
Figure 6
<p>SNR versus charge.</p>
Figure 7
<p>Hole photonic noise.</p>
Figure 8
<p>Linearity versus readout mode.</p>
Figure 9
<p>Linearity versus exposure.</p>
Figure 10
<p>PRNU versus signal for holes.</p>
Figure 11
<p>QE of electrons and holes.</p>
Figure 12
<p>QE benchmark versus a non-HDR pixel.</p>
21 pages, 8051 KiB  
Article
An IMU-Aided Body-Shadowing Error Compensation Method for Indoor Bluetooth Positioning
by Zhongliang Deng, Xiao Fu and Hanhua Wang
Sensors 2018, 18(1), 304; https://doi.org/10.3390/s18010304 - 20 Jan 2018
Cited by 18 | Viewed by 5675
Abstract
Research on indoor positioning technologies has recently become a hotspot because of the huge social and economic potential of indoor location-based services (ILBS). Wireless positioning signals have a considerable attenuation in received signal strength (RSS) when transmitting through human bodies, which would cause significant ranging and positioning errors in RSS-based systems. This paper mainly focuses on the body-shadowing impairment of RSS-based ranging and positioning, and derives a mathematical expression of the relation between the body-shadowing effect and the positioning error. In addition, an inertial measurement unit-aided (IMU-aided) body-shadowing detection strategy is designed, and an error compensation model is established to mitigate the effect of body-shadowing. A Bluetooth positioning algorithm with body-shadowing error compensation (BP-BEC) is then proposed to improve both the positioning accuracy and the robustness in indoor body-shadowing environments. Experiments are conducted in two indoor test beds, and the performance of both the BP-BEC algorithm and the algorithms without body-shadowing error compensation (named no-BEC) is evaluated. The results show that the BP-BEC outperforms the no-BEC by about 60.1% and 73.6% in terms of positioning accuracy and robustness, respectively. Moreover, the execution time of the BP-BEC algorithm is also evaluated, and results show that the convergence speed of the proposed algorithm has an insignificant effect on real-time localization. Full article
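The compensation idea can be illustrated with the log-distance path-loss model RSS = A − 10·n·log10(d) − BSIE: once the body-shadowing influence error (BSIE) is detected, adding it back onto the measured RSS before inverting the model recovers the true range. The sketch below is a generic illustration; the reference power, path-loss exponent, and 6 dB attenuation are assumed values, not the paper's measured parameters.

```python
def rss_to_distance(rss_dbm, a_1m_dbm=-50.0, n=2.0, bsie_db=0.0):
    """Invert the log-distance path-loss model, compensating the
    measured RSS by the body-shadowing influence error first."""
    compensated = rss_dbm + bsie_db
    return 10.0 ** ((a_1m_dbm - compensated) / (10.0 * n))

# 10 m true range; the body attenuates the signal by an assumed 6 dB.
d_uncorrected = rss_to_distance(-76.0)             # ignores shadowing
d_corrected = rss_to_distance(-76.0, bsie_db=6.0)  # BSIE compensated
```

Without compensation the 6 dB body loss roughly doubles the range estimate, which is the ranging error the BP-BEC algorithm is designed to remove before trilateration.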
Show Figures

Figure 1: System model of range-based Bluetooth Low Energy (BLE) positioning.
Figure 2: Relationship between ranging error and Body-Shadowing Influence Error (BSIE). UP: unknown point.
Figure 3: Relationship between the localization error and the BSIE using the LS method.
Figure 4: Relationship between the localization error and the BSIE using Weighted K-Nearest Neighbor (WKNN).
Figure 5: Description of the three shadowing states considered in this work.
Figure 6: Body-shadowing detection strategy diagram: (a) the front condition; (b) the side condition; (c) the back condition.
Figure 7: Flow chart of the body-shadowing error compensation (BP-BEC) algorithm.
Figure 8: The BLE beacons and terminal used in this work.
Figure 9: Experimental environment for Test bed 1.
Figure 10: Experimental environment for Test bed 2.
Figure 11: RSS data for the AP9–UP pair at a distance of 3 m for three shadowing angles: (a) the original measured RSS data and the Kalman-filtered data for the three angle states; (b) the cumulative distribution function (CDF) of the measured RSS data for the three angle states.
Figure 12: Relation between the BSIE and the AP–UP distance for different shadowing angles: (a) the "back" state with curve-fitting results; (b) the "side" state with curve-fitting results.
Figure 13: Comparison of the curve-fitting results for different shadowing angles.
Figure 14: Positioning accuracy of the BP-BEC algorithm versus the algorithm without body-shadowing error compensation (no-BEC): (a) positioning error comparison in a scatter diagram; (b) positioning error comparison in CDF.
Figure 15: Positioning robustness of the BP-BEC algorithm versus the traditional algorithm: (a) robustness error comparison in a scatter diagram; (b) robustness error comparison in CDF.
Figure 16: Execution time comparison between the BP-BEC and no-BEC algorithms.
Figure 17: Real trajectories in Test bed 2 and positioning results of the BP-BEC and no-BEC algorithms: (a) real trajectory 1 in Test bed 2; (b) real trajectory 2 in Test bed 2; (c) positioning results comparison along trajectory 1; (d) positioning results comparison along trajectory 2.
18 pages, 8266 KiB  
Article
Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology
by Shuo Chen, Chenggao Luo, Bin Deng, Hongqiang Wang, Yongqiang Cheng and Zhaowen Zhuang
Sensors 2018, 18(1), 303; https://doi.org/10.3390/s18010303 - 19 Jan 2018
Cited by 16 | Viewed by 5684
Abstract
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, [...] Read more.
As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can sharply reduce the number of coding and sampling operations. The coded aperture applied in the proposed TCAI architecture loads either a purposive or a random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of the 3D imaging cells are reconstructed one by one to decompose the global computational complexity, and are then synthesized into the complete high-resolution image. Within each imaging cell, a multi-resolution imaging method helps to reduce the computational burden of the large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging of 3D targets in much less time, and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc. Full article
(This article belongs to the Section Remote Sensors)
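The reconstruction behind TCAI is, at its core, a linear inverse problem: solve y = S x, where S is the reference-signal matrix built from the (random-phase) coded incident fields and x holds the scattering coefficients of one imaging cell. A toy sketch under simplifying assumptions (noiseless data, ordinary least squares instead of the paper's multi-resolution scheme, illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_meas = 16, 64   # grid cells in one imaging cell; coded measurements

# Random phase modulation makes the rows of the reference-signal matrix
# spatiotemporally independent, i.e., the system is well conditioned and
# can be inverted computationally without scanning optics.
S = np.exp(1j * rng.uniform(-np.pi, np.pi, size=(n_meas, n_cells)))

x_true = np.zeros(n_cells)
x_true[[3, 7]] = 1.0            # two point scatterers in the cell
y = S @ x_true                  # noiseless coded measurements

x_hat, *_ = np.linalg.lstsq(S, y, rcond=None)   # least-squares reconstruction
```

With more independent measurements than unknowns (64 vs. 16) the noiseless least-squares solve recovers the two scatterers; real systems add noise and typically use regularized or compressive solvers.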
Show Figures

Figure 1: Schematic diagram of the terahertz coded-aperture imaging (TCAI) system.
Figure 2: The TCAI architecture for close objects based on a transmission coded aperture.
Figure 3: The imaging process of the proposed single input multiple output (SIMO) TCAI architecture.
Figure 4: Schematic diagram of the preliminary solution for the purposive phase modulation factor in the y-axis direction (abbreviated as y-PPMF).
Figure 5: Schematic diagram of the modified solution for the y-PPMF.
Figure 6: Basic process of three-level imaging.
Figure 7: (a–c) Radiation field distributions for SIMO TCAI in the xoy, xoz, and yoz planes, respectively; (d–f) space independence functions for SIMO TCAI; (g–i) radiation field distributions for SISO TCAI; (j–l) space independence functions for SISO TCAI.
Figure 8: Distributions of incident fields on the left face of the imaging area for different 3D imaging cells with different PPMFs (f_focal@(x0, y0)): (a) 0.2069@(−0.0603, 0); (b) 0.2069@(0.0603, 0); (c) 0.2046@(0, 0); (d) 0.2069@(0, −0.0603); and (e) 0.2069@(0, 0.0603).
Figure 9: Distributions of incident fields on the right face of the imaging area for the same set of 3D imaging cells and PPMFs as in Figure 8.
Figure 10: Distributions of the random-modulation incident fields on the left face of the imaging area with the same PPMF and different RPMF phase ranges: (a) [−0.25π, 0.25π]; (b) [−0.5π, 0.5π]; (c) [−0.75π, 0.75π]; and (d) [−π, π].
Figure 11: (a) The 3D human target, and reconstruction results at the low-resolution level from different perspectives: (b) the left side; (c) the left front side; (d) the front; (e) the right front side; and (f) the right side.
Figure 12: Reconstruction results at the medium-resolution level from different perspectives: (a) the left side; (b) the left front side; (c) the front; (d) the right front side; and (e) the right side.
Figure 13: (a–e) Imaging results at the high-resolution level for the SIMO architecture from different perspectives; (f–j) imaging results at the high-resolution level for the SISO architecture.
35 pages, 3200 KiB  
Article
IMU-to-Segment Assignment and Orientation Alignment for the Lower Body Using Deep Learning
by Tobias Zimmermann, Bertram Taetz and Gabriele Bleser
Sensors 2018, 18(1), 302; https://doi.org/10.3390/s18010302 - 19 Jan 2018
Cited by 71 | Viewed by 11820
Abstract
Human body motion analysis based on wearable inertial measurement units (IMUs) receives a lot of attention from both the research and industrial communities. This is due to its significant role in, for instance, mobile health systems, sports and human computer [...] Read more.
Human body motion analysis based on wearable inertial measurement units (IMUs) receives a lot of attention from both the research and industrial communities. This is due to its significant role in, for instance, mobile health systems, sports and human computer interaction. In sensor-based activity recognition, one of the major issues for obtaining reliable results is the sensor placement/assignment on the body. For inertial motion capture (joint kinematics estimation) and analysis, the IMU-to-segment (I2S) assignment and alignment are central issues in obtaining biomechanical joint angles. Existing approaches to I2S assignment usually rely on hand-crafted features and shallow classification approaches (e.g., support vector machines), with no agreement regarding the most suitable features for the assignment task. Moreover, estimating the complete orientation alignment of an IMU relative to the segment it is attached to using a machine learning approach has not yet been shown in the literature. This is likely due to the large amount of training data that has to be recorded to suitably represent possible IMU alignment variations. In this work, we propose online approaches for solving the assignment and alignment tasks for an arbitrary number of IMUs with respect to a biomechanical lower body model, using a deep learning architecture and windows of 128 gyroscope and accelerometer data samples. For this, we combine convolutional neural networks (CNNs) for local filter learning with long short-term memory (LSTM) recurrent networks as well as gated recurrent units (GRUs) for learning time-dynamic features. The assignment task is cast as a classification problem, while the alignment task is cast as a regression problem. In this framework, we demonstrate the feasibility of augmenting a limited amount of real IMU training data with simulated alignment variations and IMU data for improving the recognition/estimation accuracies.
With the proposed approaches and final models, we achieved 98.57% average accuracy over all segments for the I2S assignment task (100% when excluding left/right switches) and an average median angle error over all segments and axes of 2.91° for the I2S alignment task. Full article
(This article belongs to the Section Physical Sensors)
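Both tasks consume fixed-length windows of 128 raw samples from each IMU's three gyroscope and three accelerometer axes. A minimal sketch of that windowing step; the 50% hop length and the synthetic stream are illustrative choices, not taken from the paper.

```python
import numpy as np

def make_windows(imu, win=128, hop=64):
    """Slice a (T, 6) stream of gyroscope + accelerometer samples into
    overlapping (win, 6) windows, the input unit fed to the CNN/RNN models
    for I2S assignment (classification) and alignment (regression)."""
    starts = range(0, imu.shape[0] - win + 1, hop)
    return np.stack([imu[s:s + win] for s in starts])

stream = np.random.default_rng(1).normal(size=(1000, 6))  # 3 gyro + 3 accel axes
windows = make_windows(stream)                            # shape (14, 128, 6)
```

Each window is then classified (which segment wears this IMU?) or regressed (what is the IMU-to-segment rotation?) independently, which is what makes the approach usable online.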
Show Figures

Figure 1: Illustration of the two addressed problems for seven lower body segments and seven IMUs: (i) I2S assignment; and (ii) I2S alignment determination.
Figure 2: Overview of the proposed network configurations.
Figure 3: The CNN layer configuration used in the proposed networks. Note that the bias, batch normalization (BN) and rectified linear unit (ReLU) operations are applied per activation (see Equation (3)).
Figure 4: Exemplary illustration of the IMU (I), segment (S, dashed) and global (G) coordinate frames for one IMU and one segment. The I2S alignment variations in terms of axes and angles (θ1, θ2), the basis for IMU data simulation, are also illustrated.
Figure 5: Comparison between real and re-simulated IMU data for one example from a capturing setup where IMUs were rigidly mounted on a rigid body and additionally tracked with a marker-based optical system. The frequency was 60 Hz, i.e., 60 samples correspond to one second.
Figure 6: Comparison between real and re-simulated IMU data from the right foot, from a capturing setup where IMUs were mounted on a walking person and additionally tracked with a marker-based optical system. The frequency was 60 Hz, i.e., 60 samples correspond to one second.
Figure 7: Figure 6 augmented with 100 noisy re-simulated signals per timestep and the respective root squared errors (RSE) between the real signal and the closest re-simulated signal at each timestep (RSE Opt.).
Figure 8: (Left) IMU configurations used for the recording of dataset C (dashed lines mark IMUs placed on the back); (right) skeleton with exemplary segment coordinate systems of the pelvis and right leg (analogous for the left leg). The global coordinate system (G) is a fixed reference coordinate system.
Figure 9: Network configuration cross validation on dataset A: (left) accuracy for the I2S assignment problem; (right) mean angle errors over all axes and windows for the I2S alignment problem, per body segment.
Figure 10: Evaluation results on simulated dataset A: (a) confusion matrix for the I2S assignment problem; (b) boxplots of the angle errors around the three body segment axes for the I2S alignment problem (exemplary for the left foot).
Figure 11: Evaluation results on dataset B, test case 1: (a) confusion matrix for the I2S assignment problem; (b) angle errors over all windows, axes and considered (nine) IMU configurations for the I2S alignment problem.
Figure 12: Evaluation results in terms of confusion matrices for the I2S assignment problem on dataset B.
Figure 13: Evaluation results for the I2S alignment problem on dataset B. The bar plots show the maximum, mean and median angle errors for all segments over all windows, axes and considered (nine) IMU configurations, obtained through cross validation.
Figure 14: Evaluation results in terms of a confusion matrix for the I2S assignment problem using the final model (based on all datasets).
Figure 15: Evaluation results for the I2S alignment problem using the final model (based on all datasets). The bar plots show the median angle errors around the three body segment axes over all windows, considered test persons and IMU configurations.
Figure A1: IMU alignments used during the recording of dataset B.
Figures A2–A10: I2S assignment problem, final model: test person from dataset B, IMU configurations 1–9 (cf. Table A7). Average accuracies: 99.57%, 99.43%, 99.43%, 99.43%, 97.29%, 96.14%, 100%, 98.14%, and 99.71%, respectively.
Figure A11: I2S assignment problem, final model: test person from dataset C, IMU configuration according to Figure 8. Average accuracy: 99.14%.
10 pages, 2328 KiB  
Article
A Polymer Optical Fiber Temperature Sensor Based on Material Features
by Arnaldo Leal-Junior, Anselmo Frizera-Neto, Carlos Marques and Maria José Pontes
Sensors 2018, 18(1), 301; https://doi.org/10.3390/s18010301 - 19 Jan 2018
Cited by 82 | Viewed by 7474
Abstract
This paper presents a polymer optical fiber (POF)-based temperature sensor. The operation principle of the sensor is the variation in the POF mechanical properties with the temperature variation. Such mechanical property variation leads to a variation in the POF output power when a [...] Read more.
This paper presents a polymer optical fiber (POF)-based temperature sensor. The operation principle of the sensor is the variation of the POF's mechanical properties with temperature. This mechanical property variation leads to a variation in the POF output power when a constant stress is applied to the fiber, due to the stress-optical effect. The fiber's mechanical properties are characterized through a dynamic mechanical analysis, and the output power variation at different temperatures is measured. The stress is applied to the fiber by means of a 180° curvature, and supports are positioned on the fiber to inhibit any change in its curvature as the temperature varies. Results show that the proposed sensor has a sensitivity of 1.04 × 10−3 °C−1, a linearity of 0.994, and a root mean squared error of 1.48 °C, which corresponds to a relative error below 2%, lower than those obtained for intensity-variation-based temperature sensors. Furthermore, the sensor is able to operate at temperatures up to 110 °C, higher than those reported for similar POF sensors in the literature. Full article
(This article belongs to the Section Physical Sensors)
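Since the calibration curve is linear, converting a measured (normalized) output power back to temperature is a one-line inversion. The slope below is the sensitivity reported in the abstract (1.04 × 10−3 °C−1); the reference temperature and reference power are hypothetical placeholders, not the paper's fitted calibration constants.

```python
SENS = 1.04e-3   # normalized-power change per deg C (sensitivity from the paper)
T_REF = 40.0     # hypothetical reference temperature of the calibration (deg C)
P_REF = 1.0      # hypothetical normalized output power at T_REF

def temperature_from_power(p_norm):
    """Invert the linear calibration P(T) = P_REF + SENS * (T - T_REF)."""
    return T_REF + (p_norm - P_REF) / SENS
```

With these placeholders, a normalized power 0.0728 above P_REF reads as the 110 °C upper end of the reported operating range.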
Show Figures

Figure 1: Dynamic mechanical analyzer employed in the polymer optical fiber (POF) characterization.
Figure 2: Variation of the POF static Young's modulus with temperature.
Figure 3: Experimental setup for evaluating the variation in output power during the temperature sweep, and the lateral view of the POF with the lateral section.
Figure 4: An analytical model of POF power variation under curvature with increasing temperature.
Figure 5: Calibration curve for the POF temperature sensor.
Figure 6: Repeatability tests for the POF temperature sensor: (a) cycles ranging from 40 to 110 °C, showing the sensor behavior with increasing and decreasing temperatures; (b) sensor response in a test with temperatures higher than the PMMA Tg; (c) repeatability of the temperature sensor over the cycles presented in (a).
9 pages, 15850 KiB  
Article
A Comprehensive Study of a Micro-Channel Heat Sink Using Integrated Thin-Film Temperature Sensors
by Tao Wang, Jiejun Wang, Jian He, Chuangui Wu, Wenbo Luo, Yao Shuai, Wanli Zhang, Xiancai Chen, Jian Zhang and Jia Lin
Sensors 2018, 18(1), 299; https://doi.org/10.3390/s18010299 - 19 Jan 2018
Cited by 11 | Viewed by 7619
Abstract
A micro-channel heat sink is a promising cooling method for high power integrated circuits (IC). However, the understanding of such a micro-channel device is not sufficient, because the tools for studying it are very limited. The details inside the micro-channels are not readily [...] Read more.
A micro-channel heat sink is a promising cooling method for high-power integrated circuits (ICs). However, the understanding of such micro-channel devices is still insufficient, because the tools for studying them are very limited and the details inside the micro-channels are not readily available. In this paper, a micro-channel heat sink is comprehensively studied using integrated temperature sensors. The highly sensitive thin-film temperature sensors can accurately monitor the temperature change in the micro-channel in real time. The outstanding heat dissipation performance of the micro-channel heat sink is proven in terms of maximum temperature, cooling speed and heat resistance. The temperature profile along the micro-channel is extracted, and even small temperature perturbations can be detected. The temperature peak formed by the heat source shifts in the flow direction as the flow rate increases. The temperature non-uniformity, however, is independent of the flow rate and depends solely on the heating power, so specific designs for minimizing it are necessary. In addition, the experimental results from the integrated temperature sensors match the simulation results well; this can be used to directly verify the modeling results, helping to build a convincing simulation model. The integrated sensors could be a powerful tool for studying micro-channel-based heat sinks. Full article
(This article belongs to the Special Issue Integrated Sensors)
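One of the performance measures named above, heat (thermal) resistance, follows directly from the integrated sensor readings: the steady-state temperature rise of the heat source over the coolant inlet, divided by the dissipated power. A sketch with made-up numbers, not measurements from the paper:

```python
def thermal_resistance(t_source_c, t_inlet_c, power_w):
    """Steady-state heat-sink figure of merit in deg C per watt:
    temperature rise of the heat source above the coolant inlet,
    normalized by the dissipated electrical power."""
    return (t_source_c - t_inlet_c) / power_w

# Hypothetical readings: source sensor at 45 degC, inlet sensor at 25 degC,
# 2 W dissipated on the simulated heat source.
r_th = thermal_resistance(45.0, 25.0, 2.0)   # 10.0 degC/W
```

A lower r_th at the same flow rate means the micro-channel removes heat more effectively, which is how the sensor array allows heat-sink designs to be compared quantitatively.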
Show Figures

Figure 1: (a) 3D schematic of the micro-channel-based heat sink with integrated thin-film temperature sensors and a simulated heat source; (b) exploded 3D view of the heat sink structure; (c) sectional view along the inlet-to-outlet direction of the heat sink, showing the locations of the thin-film temperature sensors (S-In to S-Out).
Figure 2: (a) The microfluidic channel chip (Chip A); (b) Chip B, integrated with temperature sensors; (c) the micro-channel heat sink after bonding; (d) microscopic picture of the temperature sensor seen through the outlet.
Figure 3: Photograph of the thermal testing system.
Figure 4: (a) Temperature sensor response under different flow rates (25 mL/h, 50 mL/h, 100 mL/h, 150 mL/h, 200 mL/h, 250 mL/h) with the flow off or on; (b) the temperature profile in the microfluidic channel under different flow rates.
Figure 5: (a) Temperature sensor response under different powers applied to the heat source (0.01 W, 0.1 W, 0.5 W, 1.0 W, 2.0 W) with the flow off or on; (b) the temperature profile in the microfluidic channel from inlet to outlet under different heating powers.
Figure 6: Temperature profile in the microfluidic channel from inlet to outlet (sensors S-In to S-Out) under different flow rates.
Figure 7: Temperature profile in the microfluidic channel from inlet to outlet (sensors S-In to S-Out) under different heating powers.
Figure 8: Measured and simulated spatial temperature distribution of the heat sink with the flow off.
Figure 9: Measured and simulated spatial temperature distribution of the heat sink with a flow rate of 100 mL/h.
Figure 10: Variance of the temperature fluctuation versus the temperature change of the micro-fluid, obtained by changing the power applied to the power IC or the flow rate.
17 pages, 4195 KiB  
Article
Bridge Structure Deformation Prediction Based on GNSS Data Using Kalman-ARIMA-GARCH Model
by Jingzhou Xin, Jianting Zhou, Simon X. Yang, Xiaoqing Li and Yu Wang
Sensors 2018, 18(1), 298; https://doi.org/10.3390/s18010298 - 19 Jan 2018
Cited by 72 | Viewed by 8173
Abstract
Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become [...] Read more.
Bridges are an essential part of the ground transportation system. Health monitoring is fundamentally important for the safety and service life of bridges. A large amount of structural information is obtained from various sensors using sensing technology, and the data processing has become a challenging issue. To improve the prediction accuracy of bridge structure deformation based on data mining and to accurately evaluate the time-varying characteristics of bridge structure performance evolution, this paper proposes a new method for bridge structure deformation prediction, which integrates the Kalman filter, the autoregressive integrated moving average (ARIMA) model, and generalized autoregressive conditional heteroskedasticity (GARCH). Firstly, the raw deformation data is pre-processed directly with the Kalman filter to reduce the noise. After that, the linear recursive ARIMA model is established to analyze and predict the structure deformation. Finally, the nonlinear recursive GARCH model is introduced to further improve the accuracy of the prediction. Simulation results based on measured sensor data from the Global Navigation Satellite System (GNSS) deformation monitoring system demonstrate that: (1) the Kalman filter is capable of denoising the bridge deformation monitoring data; (2) the prediction accuracy of the proposed Kalman-ARIMA-GARCH model is satisfactory, with the mean absolute error increasing only from 3.402 mm to 5.847 mm as the prediction step grows; and (3) in comparison with the Kalman-ARIMA model, the Kalman-ARIMA-GARCH model achieves superior prediction accuracy as it includes partial nonlinear characteristics (heteroscedasticity); the mean absolute error of the five-step prediction using the proposed model is improved by 10.12%.
This paper provides a new way for structural behavior prediction based on data processing, which can lay a foundation for early warning in sensor-based bridge health monitoring systems. Full article
(This article belongs to the Special Issue Sensors for Transportation)
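The first stage of the pipeline, Kalman denoising of the raw GNSS deflection series, can be sketched with a scalar random-walk filter. The process and measurement variances and the synthetic series below are illustrative choices (the measurement variance matches the injected noise of standard deviation 0.3); the ARIMA and GARCH stages that follow in the paper are omitted.

```python
import numpy as np

def kalman_denoise(z, q=1e-2, r=9e-2):
    """Scalar Kalman filter with a random-walk state model: the
    pre-processing stage that smooths the raw deflection series
    before the ARIMA model is fitted.
    q: process-noise variance, r: measurement-noise variance."""
    x, p, out = z[0], 1.0, []
    for zi in z:
        p += q                   # predict: random-walk state, variance grows
        k = p / (p + r)          # Kalman gain
        x += k * (zi - x)        # update with the new measurement
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(2)
truth = np.sin(np.linspace(0.0, 4.0 * np.pi, 400))   # slow "deformation" signal
noisy = truth + 0.3 * rng.normal(size=400)           # raw sensor-like series
smooth = kalman_denoise(noisy)
```

On this synthetic series the filtered output tracks the underlying signal with a substantially lower mean squared error than the raw measurements, which is exactly the property the paper exploits before fitting the ARIMA model.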
Show Figures

Figure 1: The Caijia Jialing River Bridge.
Figure 2: The overall layout of the sensor measuring points: A, the stress sensor (including temperature measurement as well); B, the acceleration sensor; C, the static level gauge; D, Global Navigation Satellite System (GNSS); E, the linear variable displacement transducer; F, the tiltmeter; G, the temperature and humidity sensor; H, the pluviometer; I, the dogvane and anemoscope; and J, the anchor cable meter through the cable. The numbers in parentheses indicate the quantity of each sensor.
Figure 3: Layout diagram of the strain sensors on the concrete surface.
Figure 4: GNSS deformation monitoring system.
Figure 5: Predictor-corrector algorithm of the Kalman filter.
Figure 6: The deflection sample series after the Kalman filter.
Figure 7: Deformation time series {X2t}.
Figure 8: Time series {X3t}.
Figure 9: The autocorrelation function (ACF) and partial ACF (PACF) for the {X3t} series.
Figure 10: The residual error of the autoregressive integrated moving average (ARIMA) model.
Figure 11: Predictions for the original deformation series {X2t} by the ARIMA and GARCH models: (a) one-step prediction; (b) three-step prediction.
Figure 12: Predictions for the original deformation series {X2t} by the ARIMA and GARCH models: (a) five-step prediction; (b) five-step maximum deformation prediction.
15 pages, 1350 KiB  
Article
A Novel Walking Detection and Step Counting Algorithm Using Unconstrained Smartphones
by Xiaomin Kang, Baoqi Huang and Guodong Qi
Sensors 2018, 18(1), 297; https://doi.org/10.3390/s18010297 - 19 Jan 2018
Cited by 86 | Viewed by 9603
Abstract
Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, energy saving, behavior recognition, etc. In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones, in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and the sensitivity of gyroscopes, the proposed algorithm extracts frequency-domain features from the three-dimensional (3D) angular velocities of a smartphone through the fast Fourier transform (FFT) and identifies whether its holder is walking or not, irrespective of the smartphone placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments involving eight subjects and different walking scenarios were conducted in a realistic environment. The proposed method achieves a precision of 93.76% and a recall of 93.65% for walking detection, and its overall performance is significantly better than that of other well-known methods. Moreover, the step-counting accuracy of the proposed method is 95.74%, better than both several well-known counterparts and commercial products. Full article
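The frequency-domain idea in the abstract can be sketched as follows: take the FFT of a gyroscope-derived signal, pick the dominant peak in a plausible human step-rate band, and multiply the step frequency by the walking duration. The band limits (0.5-3 Hz) and the synthetic test signal are assumptions, not the paper's parameters.

```python
import numpy as np

def step_count_fft(gyro_mag, fs, duration_s):
    """Estimate the walking frequency from the dominant FFT peak of a
    gyroscope magnitude signal, then derive a step count.
    The 0.5-3 Hz band for human step rates is an assumption."""
    n = len(gyro_mag)
    spec = np.abs(np.fft.rfft(gyro_mag - gyro_mag.mean()))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= 0.5) & (freqs <= 3.0)
    step_freq = freqs[band][np.argmax(spec[band])]
    return step_freq, step_freq * duration_s

fs, T = 50.0, 10.0                     # 50 Hz sampling over a 10 s window
t = np.arange(0, T, 1.0 / fs)
sig = np.sin(2 * np.pi * 1.8 * t)      # synthetic 1.8 Hz "walking" signal
f, steps = step_count_fft(sig, fs, T)
```

Because the peak location is placement-independent for a rotating device, this is the property that lets the detector tolerate arbitrary smartphone placement.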
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

<p>The 3D data acquired by the gyroscope of a smartphone placed in three different positions. The horizontal axes denote time.</p>
Full article ">Figure 2
<p>The flow diagram of the walking detection and step counting algorithm.</p>
Full article ">Figure 3
<p>The spectra of the angular velocities with respect to six different activities.</p>
Full article ">Figure 4
<p>The spectra of the accelerations with respect to six different activities.</p>
Full article ">Figure 5
<p>The spectra of the angular velocities produced by switching among different smartphone placements.</p>
Full article ">Figure 6
<p>The screenshots of two Android apps. (<b>a</b>) sensory data collection; (<b>b</b>) step counter using the proposed method.</p>
Full article ">Figure 7
<p>The walking detection results of one subject by using different methods in the first scenario.</p>
Full article ">
23 pages, 1784 KiB  
Article
A GPS Phase-Locked Loop Performance Metric Based on the Phase Discriminator Output
by Stefan Stevanovic and Boris Pervan
Sensors 2018, 18(1), 296; https://doi.org/10.3390/s18010296 - 19 Jan 2018
Cited by 13 | Viewed by 4894
Abstract
We propose a novel GPS phase-locked loop (PLL) performance metric based on the standard deviation of the tracking error (defined as the discriminator’s estimate of the true phase error), and explain its advantages over the popular phase jitter metric using theory, numerical simulation, and experimental results. We derive an augmented GPS PLL linear model, which includes the effect of coherent averaging, to be used in conjunction with this proposed metric. The augmented linear model allows more accurate calculation of the tracking error standard deviation in the presence of additive white Gaussian noise (AWGN) than traditional linear models. The standard deviation of the tracking error, with a threshold corresponding to half of the arctangent discriminator pull-in region, is shown to be a more reliable and robust measure of PLL performance under interference conditions than the phase jitter metric. In addition, the augmented linear model is shown to be valid up until this threshold, which facilitates efficient performance prediction, so that time-consuming direct simulations and costly experimental testing can be reserved for PLL designs that are much more likely to be successful. The effect of varying receiver reference oscillator quality on the tracking error metric is also considered. Full article
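The proposed metric can be sketched by Monte Carlo: form noisy I/Q samples, apply the arctangent discriminator, and take the standard deviation of its output. The unit-amplitude I/Q model with noise variance 1/(2·(C/N0)·Tco), and the π/4 lock threshold (read here as half of the atan discriminator's ±π/2 pull-in region), are textbook-style assumptions rather than the paper's augmented linear model.

```python
import numpy as np

rng = np.random.default_rng(1)

def tracking_error_sigma(cn0_dbhz, t_co, n=20000):
    """Monte-Carlo sketch of the metric: std of the arctangent discriminator
    output atan(Q/I) under AWGN, with the true phase error set to zero.
    Unit-amplitude I/Q with noise variance 1/(2*(C/N0)*Tco) is a textbook
    approximation (assumption)."""
    cn0 = 10.0 ** (cn0_dbhz / 10.0)          # dB-Hz -> linear
    sigma_n = 1.0 / np.sqrt(2.0 * cn0 * t_co)
    i = 1.0 + sigma_n * rng.standard_normal(n)
    q = sigma_n * rng.standard_normal(n)
    return np.std(np.arctan(q / i))

THRESHOLD = np.pi / 4  # half of the atan pull-in region: assumed reading
sigma = tracking_error_sigma(cn0_dbhz=35.0, t_co=0.02)  # 35 dB-Hz, 20 ms
locked = sigma < THRESHOLD
```

Sweeping `cn0_dbhz` downward until `sigma` crosses `THRESHOLD` gives a simple performance prediction of the kind the metric is meant to enable.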
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1

<p>Phase lock loop block diagram.</p>
Full article ">Figure 2
<p>Augmented PLL linear model.</p>
Full article ">Figure 3
<p>Receiver clock power spectral density (PSD) comparison.</p>
Full article ">Figure 4
<p>Effect of <math display="inline"> <semantics> <msub> <mi>T</mi> <mrow> <mi>c</mi> <mi>o</mi> </mrow> </msub> </semantics> </math> and PLL bandwidth on input–output transfer function, 1 ms.</p>
Full article ">Figure 5
<p>Effect of <math display="inline"> <semantics> <msub> <mi>T</mi> <mrow> <mi>c</mi> <mi>o</mi> </mrow> </msub> </semantics> </math> and PLL bandwidth on input–output transfer function, 20 ms.</p>
Full article ">Figure 6
<p>Effect of <math display="inline"> <semantics> <msub> <mi>T</mi> <mrow> <mi>c</mi> <mi>o</mi> </mrow> </msub> </semantics> </math> and PLL bandwidth on tracking error transfer function, 1 ms.</p>
Full article ">Figure 7
<p>Effect of <math display="inline"> <semantics> <msub> <mi>T</mi> <mrow> <mi>c</mi> <mi>o</mi> </mrow> </msub> </semantics> </math> and PLL bandwidth on tracking error transfer function, 20 ms.</p>
Full article ">Figure 8
<p>Sigma tracking error vs. PLL bandwidth.</p>
Full article ">Figure 9
<p>Sigma phase error vs. PLL bandwidth.</p>
Full article ">Figure 10
<p>Sigma phase error vs. sigma tracking error.</p>
Full article ">Figure 11
<p>Sigma phase error vs. sigma tracking error, for <math display="inline"> <semantics> <msub> <mi>T</mi> <mrow> <mi>c</mi> <mi>o</mi> </mrow> </msub> </semantics> </math> = 20 ms.</p>
Full article ">Figure 12
<p>PDF of tracking error.</p>
Full article ">Figure 13
<p>Experimental results with theory using Rb PSD model.</p>
Full article ">Figure 14
<p>Experimental results with theory using OCXO PSD model.</p>
Full article ">Figure A1
<p>LPFRS measurements and specs with fitted power-law model.</p>
Full article ">
34 pages, 16648 KiB  
Article
Motorcycles that See: Multifocal Stereo Vision Sensor for Advanced Safety Systems in Tilting Vehicles
by Gustavo Gil, Giovanni Savino, Simone Piantini and Marco Pierini
Sensors 2018, 18(1), 295; https://doi.org/10.3390/s18010295 - 19 Jan 2018
Cited by 10 | Viewed by 10469
Abstract
Advanced driver assistance systems (ADAS) have shown the possibility to anticipate crash accidents and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications. Full article
(This article belongs to the Special Issue Sensors for Transportation)
Show Figures

Figure 1

<p>Top view of a conventional stereo vision system. Detail of the Range Field including the boundary of depth triangulation range adopted at the Horopter of 10 disparities.</p>
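The depth triangulation range in Figure 1 follows the standard pinhole stereo relation Z = f·B/d, so the horopter at 10 disparities sets the maximum reliable depth. The focal length and baseline below are illustrative values, not the paper's calibration.

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Pinhole stereo triangulation: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

f_px, b_m = 800.0, 0.20      # hypothetical focal length (px) and baseline (m)
z_max = stereo_depth_m(f_px, b_m, 10.0)     # depth at the 10-disparity horopter
z_at_20px = stereo_depth_m(f_px, b_m, 20.0) # closer object: larger disparity
```

Since Z is inversely proportional to disparity, depth resolution degrades quadratically with range, which is why the paper pairs a wide short-range rig with a narrow long-range rig.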
Full article ">Figure 2
<p>Overview of the imaging system. (<b>a</b>) Multi-focal stereo rigs installed in the frontal part of the vehicle and fixed to the scooter frame; (<b>b</b>) Top view of the 3D space to measure in front of the scooter (the outer stereo cameras are used for development purposes and future extension of the long range of measurement).</p>
Full article ">Figure 3
<p>Wiring detail for the synchronization of the six cameras. (<b>a</b>) Circuital scheme; (<b>b</b>) A disassembled camera showing the location of the electrical connections labeled “1” and “2”.</p>
Full article ">Figure 4
<p>Overview of the graphical user interface of the Matlab Application for the stereo camera calibration.</p>
Full article ">Figure 5
<p>Depth accuracy quantification of the calibrated stereo camera (long-range camera rig). (<b>a</b>) Rectified left picture acquired by the long range stereo camera sensor; (<b>b</b>) Disparity Map of the scene (range from 0 to 128d); (<b>c</b>) Three-dimensional reconstruction or 3D point cloud of the scene imaged; (<b>d</b>) Top view of the 3D point cloud highlighting the location of the traffic cones originally placed at 10 m, 15 m, and 20 m. This 3D point cloud can be downloaded as described in <a href="#app3-sensors-18-00295" class="html-app">Appendix B</a> so that the reader can examine it in detail.</p>
Full article ">Figure 6
<p>Analysis of small narrow objects in the farthest half of the Range Field (long-range camera sensor). 3D control points were located at similar places for each pair of cones for the analysis. (<b>a</b>) Detail of the 3D representation of the targets; (<b>b</b>) Frontal view of the targets (grid sized 10 cm); (<b>c</b>) Top view of the 3D point clouds (grid sized 50 cm). For the cones at 20 m the fattening effect (a depth artifact) becomes evident.</p>
Full article ">Figure 7
<p>Illustration of the automatic camera extrinsic parameters re-calibration. (<b>a</b>) An initial rectification of the stereo frame according to the static calibration values; (<b>b</b>) SURF feature extraction in both images of the stereo pair (circle’s diameter represents the scale of the feature); (<b>c</b>) The salient features matched (correct pixel assignments indicated by yellow connections) are overlaid on a 3D anaglyph.</p>
Full article ">Figure 8
<p>Analysis of a turning maneuver: measurement of the space in front of the scooter. Short-range and long-range measurements are depicted simultaneously for comparison. (<b>a</b>) Rectified left picture of the short range stereo camera sensor; (<b>b</b>) Rectified left picture of the long range stereo camera sensor; (<b>c</b>) Short-range Disparity Map (0 to 32d); (<b>d</b>) Long-range Disparity Map (0 to 128d). The 3D point cloud is available for download (<a href="#app3-sensors-18-00295" class="html-app">Appendix B</a>).</p>
Full article ">Figure 9
<p>3D point clouds corresponding to the turning maneuver scene calculated from the information provided by the two stereo camera sensors. (<b>a</b>) Reconstruction for the short-range stereo camera (wide common Field of View); (<b>b</b>) Reconstruction for the long-range stereo camera (narrow common Field of View).</p>
Full article ">Figure 10
<p>Top view of the 3D point clouds corresponding to the turning maneuver. (<b>a</b>) Depth measurement delivered by the short range sensor (accurate depth measures are inside the Range Field); (<b>b</b>) Depth measures delivered by the long range sensor (Car 4 is not in the common Field of View of this stereo camera sensor).</p>
Full article ">Figure 11
<p>Cleaned measurements corresponding to the turning maneuver. (<b>a</b>) The 3D point cloud inclined 13° to compensate for the leaning of the scooter; (<b>b</b>) Top view of the measures.</p>
Full article ">Figure 12
<p>Analysis of the pre-crash scene (id90–InSAFE). (<b>a</b>) Rectified left picture of the long range stereo camera sensor; (<b>b</b>) The 3D anaglyph composed by the stereo frame; (<b>c</b>) Disparity Map (0 to 64d); (<b>d</b>) 3D reconstruction available for download (<a href="#app3-sensors-18-00295" class="html-app">Appendix B</a>).</p>
Full article ">Figure 13
<p>Detail of 3D reconstruction of the pre-crash scene (id90–InSAFE). (<b>a</b>) Cleaned 3D point cloud seen from the scooter point-of-view; (<b>b</b>) Cleaned top view representation of the pre-crash scene.</p>
Full article ">Figure 14
<p>Roll angle fluctuations during 5 trials of the emulation of the motorcycle crash (id90–InSAFE).</p>
Full article ">Figure 15
<p>Chart showing the percentage of effective stereo frame used to calculate the Disparity Map (DM) during the first six neighboring frames (consecutive frames) of the 1 s pre-crash sequence. Below 45%, the number of reliable pixels is insufficient to compute the DM. The corresponding DMs for these six frames are presented in the <a href="#app4-sensors-18-00295" class="html-app">Appendix C</a>.</p>
Full article ">Figure A1
<p>Radar Cross-Section of a motorcycle. Adapted from (Köhler et al. 2013) [<a href="#B119-sensors-18-00295" class="html-bibr">119</a>].</p>
Full article ">Figure A2
<p>Example of the artificial depth interpretation of a real traffic scene thanks to the stereo camera sensor technology. (<b>a</b>) Left image acquired from a stereo camera pair onboard the instrumented scooter; (<b>b</b>) Anaglyph of the traffic scene, which can be seen in 3D by wearing color-coded (red-blue) glasses; (<b>c</b>) 3D point cloud representation of the scene, in which the locations in space of the cyclist and of the garbage bins placed on the right side of the lane can be detected.</p>
Full article ">Figure A3
<p>Sensing of a cyclist from a distance of 12 m. (<b>a</b>) Left image acquired from a stereo camera pair onboard the instrumented scooter; (<b>b</b>) Disparity map of the road scene (0 to 64d); (<b>c</b>) 3D point cloud representation of the scene, in which the locations in space of the cyclist and of the light pole about 5 m behind can be detected.</p>
Full article ">
31 pages, 12179 KiB  
Article
Game-Theoretical Design of an Adaptive Distributed Dissemination Protocol for VANETs
by Cristhian Iza-Paredes, Ahmad Mohamad Mezher, Mónica Aguilar Igartua and Jordi Forné
Sensors 2018, 18(1), 294; https://doi.org/10.3390/s18010294 - 19 Jan 2018
Cited by 19 | Viewed by 5549
Abstract
Road safety applications envisaged for Vehicular Ad Hoc Networks (VANETs) depend largely on the dissemination of warning messages to deliver information to concerned vehicles. The intended applications, as well as some inherent VANET characteristics, make data dissemination an essential service and a challenging task in this kind of network. This work lays out a decentralized stochastic solution for the data dissemination problem through two game-theoretical mechanisms. Given the non-stationarity induced by a highly dynamic topology, diverse network densities, and intermittent connectivity, a solution for the formulated game requires an adaptive procedure able to exploit the environment changes. Extensive simulations reveal that our proposal excels in terms of fewer transmissions, lower end-to-end delay and reduced overhead, while maintaining a high delivery ratio, compared to other proposals. Full article
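As a toy illustration of a stochastic, distance-aware forwarding decision (the general flavor of the mechanisms above, and of the distance factor in Figure 1), a vehicle could rebroadcast with a probability that grows with its distance from the sender, since farther vehicles cover more new area per transmission. The linear distance factor below is purely illustrative and is not the paper's payoff or distance-factor definition.

```python
def forwarding_probability(dist_to_sender_m, tx_range_m):
    """Toy mixed-strategy sketch: forward with probability equal to a
    linear distance factor in [0, 1]. Illustrative assumption only; the
    paper's game-theoretical mechanism is more elaborate."""
    df = max(0.0, min(dist_to_sender_m / tx_range_m, 1.0))
    return df

# Vehicles at 50 m, 125 m and 240 m from the sender (assumed 250 m range)
probs = [forwarding_probability(d, 250.0) for d in (50.0, 125.0, 240.0)]
```

Because each vehicle decides independently from local information only, such a rule needs no coordination messages, which is the property a decentralized dissemination protocol exploits.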
(This article belongs to the Special Issue Smart Vehicular Mobile Sensing)
Show Figures

Figure 1

<p>Distance factor <math display="inline"> <semantics> <mrow> <mi>D</mi> <msub> <mi>f</mi> <mi>i</mi> </msub> </mrow> </semantics> </math>.</p>
Full article ">Figure 2
<p>Forwarding Game in an urban scenario.</p>
Full article ">Figure 3
<p>Screenshots of OMNet++ and SUMO simulators’ graphical user interfaces running network and road traffic simulations, respectively. Vehicular network scenario in OMNeT<math display="inline"> <semantics> <mrow> <mo>+</mo> <mo>+</mo> </mrow> </semantics> </math>: 2.5 km × 2.5 km urban region in Berlin, Germany (red rectangles = buildings; red circle = crashed vehicle; green circles = warned vehicles; purple circles = RSUs).</p>
Full article ">Figure 4
<p>Results with 95% confidence intervals for 10 repetitions per point with independent seeds. Text dissemination case. Different vehicles’ densities in a 2.5 km × 2.5 km urban region in Berlin, Germany.</p>
Full article ">Figure 5
<p>Beacon Overhead.</p>
Full article ">Figure 6
<p>Frame Delivery Ratio (FDR) with 95% confidence intervals for 10 repetitions per point with independent seeds. Video dissemination case. Different vehicles’ densities in a 2.5 km × 2.5 km urban region in Berlin, Germany.</p>
Full article ">Figure 7
<p>PSNR for video dissemination with 95% confidence intervals for 10 repetitions per point with independent seeds. Different network densities in a 2.5 km × 2.5 km urban region in Berlin, Germany.</p>
Full article ">Figure 8
<p>Comparison sample for the different simulated protocols at frame 72 in <math display="inline"> <semantics> <mrow> <mi>R</mi> <mi>S</mi> <msub> <mi>U</mi> <mn>4</mn> </msub> </mrow> </semantics> </math> located at 1200 m with 100 vehicles/km<math display="inline"> <semantics> <msup> <mrow/> <mn>2</mn> </msup> </semantics> </math>.</p>
Full article ">
10 pages, 11117 KiB  
Article
Transparent Fingerprint Sensor System for Large Flat Panel Display
by Wonkuk Seo, Jae-Eun Pi, Sung Haeung Cho, Seung-Youl Kang, Seong-Deok Ahn, Chi-Sun Hwang, Ho-Sik Jeon, Jong-Uk Kim and Myunghee Lee
Sensors 2018, 18(1), 293; https://doi.org/10.3390/s18010293 - 19 Jan 2018
Cited by 19 | Viewed by 8878
Abstract
In this paper, we introduce a transparent fingerprint sensing system using a thin film transistor (TFT) sensor panel, based on a self-capacitive sensing scheme. An amorphous indium gallium zinc oxide (a-IGZO) TFT sensor array and an associated custom Read-Out IC (ROIC) are implemented for the system. The sensor panel has a 200 × 200 pixel array and each pixel is as small as 50 μm × 50 μm. The ROIC uses only eight analog front-end (AFE) amplifier stages along with a successive approximation analog-to-digital converter (SAR ADC). To get the fingerprint image data from the sensor array, the ROIC senses the capacitance formed across the cover glass between a human finger and the electrode of each pixel of the sensor array. Three methods for estimating the self-capacitance are reviewed. The measurement results demonstrate that the transparent fingerprint sensor system is able to differentiate a human finger’s ridges and valleys through the fingerprint sensor array. Full article
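The simplest of the capacitance-estimation approaches mentioned above is the ideal parallel-plate model, which the paper compares against a boundary element method (BEM, Figure 4). Below is a sketch that treats a finger valley as an extra air layer in series with the cover glass; the glass thickness and relative permittivities are illustrative assumptions (only the 50 μm pixel size comes from the abstract).

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def series_capacitance(area_m2, layers):
    """Layered parallel-plate model: per-layer capacitances in series.
    layers = [(thickness_m, eps_r), ...]"""
    inv_c = sum(t / (EPS0 * er * area_m2) for t, er in layers)
    return 1.0 / inv_c

pixel = 50e-6                  # 50 um pixel pitch (from the abstract)
area = pixel * pixel
glass = 300e-6                 # hypothetical cover-glass thickness
eps_glass = 7.0                # assumed relative permittivity of the glass

# Ridge: skin directly on the glass.  Valley: ~50 um air gap in series.
c_ridge = series_capacitance(area, [(glass, eps_glass)])
c_valley = series_capacitance(area, [(glass, eps_glass), (50e-6, 1.0)])
delta_c = c_ridge - c_valley   # the sub-femtofarad signal the ROIC resolves
```

Even this crude model shows why the problem is hard: the per-pixel capacitance is well below a picofarad, and the ridge-valley difference ΔC is a fraction of that, consistent with the ΔC simulation study of Figure 6.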
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

<p>Block diagram of the proposed fingerprint sensor system.</p>
Full article ">Figure 2
<p>(<b>a</b>) Top view of the self-capacitance type sensor pixel structure; (<b>b</b>) equivalent parasitic model of the sensor array; and (<b>c</b>) cross-sectional view of the proposed sensor pixel.</p>
Full article ">Figure 3
<p>(<b>a</b>) Simple capacitor structure for an infinitesimal gap; and (<b>b</b>) the capacitor structure for a fingerprint sensor case.</p>
Full article ">Figure 4
<p>Capacitance value: Ideal parallel-plate calculation vs. BEM.</p>
Full article ">Figure 5
<p>The contour plot of the electric field in the sensor structure in the horizontal direction.</p>
Full article ">Figure 6
<p>The simulation results on the capacitance difference (∆C) of the ridge to valley region (valley depth: 50 μm and 100 μm, ridge pitch: 300 μm, and the pixel pitch: 50/60/70/90 μm).</p>
Full article ">Figure 7
<p>A pixel equivalent circuit model and a single-channel analog front end (AFE) of the fingerprint sensor system.</p>
Full article ">Figure 8
<p>Structure of readout integrated circuit (ROIC)’s charge to current converter.</p>
Full article ">Figure 9
<p>Timing diagram for data processing sequence.</p>
Full article ">Figure 10
<p>Simulation result of the AFE stage.</p>
Full article ">Figure 11
<p>(<b>a</b>) The assembled module of the transparent fingerprint sensing system. ROIC is packaged on COF (chip-on-film) and attached to the panel; (<b>b</b>) measured fingerprint image with pores.</p>
Full article ">Figure 12
<p>(<b>a</b>) ROIC layout; (<b>b</b>) ADC sub-block, (<b>c</b>) AFE sub-block.</p>
Full article ">
15 pages, 5634 KiB  
Article
Multi-Sensor Data Integration Using Deep Learning for Characterization of Defects in Steel Elements
by Grzegorz Psuj
Sensors 2018, 18(1), 292; https://doi.org/10.3390/s18010292 - 19 Jan 2018
Cited by 94 | Viewed by 9309
Abstract
Nowadays, there is a strong demand for inspection systems integrating both high sensitivity under various testing conditions and advanced processing allowing automatic identification of the examined object’s state and detection of threats. This paper presents the possibility of utilizing a magnetic multi-sensor matrix transducer for characterization of defective areas in steel elements, together with a deep learning based algorithm for integration of data and final identification of the object’s state. The transducer allows sensing of the magnetic vector in a single location in different directions. Thus, it enables detecting and characterizing any material changes that affect magnetic properties, regardless of their orientation relative to the scanning direction. To assess the general application capability of the system, steel elements with rectangular-shaped artificial defects were used. First, a database was constructed considering numerical and measurement results. A finite element method was used to run the simulation process and provide transducer signal patterns for different defect arrangements. Next, the algorithm integrating responses of the transducer collected in a single position was applied, and a convolutional neural network was used for implementation of the material state evaluation model. Then, validation of the obtained model was carried out. In this paper, the procedure for updating the evaluated local state by referring to the neighboring area results is presented. Finally, the results and future perspectives are discussed. Full article
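The neighborhood-referring update mentioned above (visualized in Figure 11) can be sketched as a smoothing of the per-point class probabilities before the final decision. Uniform 3×3 averaging is an assumption standing in for the paper's actual update rule.

```python
import numpy as np

def neighborhood_update(prob_map, radius=1):
    """Sketch of a neighborhood-based class-probability update: each point's
    class probabilities are averaged with those of its neighbors before the
    final argmax decision. Uniform box averaging is an assumption."""
    h, w, c = prob_map.shape
    out = np.empty_like(prob_map)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = prob_map[y0:y1, x0:x1].mean(axis=(0, 1))
    return out

# A lone misclassified point (class 1) inside a confident class-0 region
p = np.zeros((5, 5, 2))
p[..., 0], p[..., 1] = 0.9, 0.1
p[2, 2] = [0.2, 0.8]                 # isolated outlier
smoothed = neighborhood_update(p)
label = int(smoothed[2, 2].argmax()) # outlier suppressed by its neighbors
```

The effect matches the before/after comparison in Figure 12: isolated spurious detections are voted down by the surrounding, consistently classified scan positions.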
(This article belongs to the Special Issue Small Devices and the High-Tech Society)
Show Figures

Figure 1

<p>Model and photo of the multi-sensor transducer: (<b>a</b>) cross-section; (<b>b</b>) 3D view; (<b>c</b>) photo of the bottom-side, all dimensions are in [mm].</p>
Full article ">Figure 2
<p>The utilized measuring system configuration diagram (<b>a</b>) and photo (<b>b</b>). EXC—excitation section; SEN—sensors; AMP—amplifier; MUX—multiplexer; CH—channel; XYZ Scanner—Cartesian coordinate robot; D/A—digital-to-analog converter; µC—microcontroller; PC—personal computer.</p>
Full article ">Figure 3
<p>The schematic diagram of the definition of the defect characterization model; FEM—finite element method.</p>
Full article ">Figure 4
<p>Utilized FEM model of the transducer: IE—infinite element domain, TD—transducer’s domain, StP—steel plate, MSM—multi-sensor matrix, FeC—ferrite core, EC—excitation coil, D—defect; (<b>a</b>) model view, (<b>b</b>) computation mesh view.</p>
Full article ">Figure 5
<p>Selected results of flux distributions obtained during FEM simulations: (<b>a</b>) without defect; (<b>b</b>) with defect; (<b>c</b>) polar plot of the <span class="html-italic">V</span><sub>z</sub> component of the flux sensed by the successive sensors normalized to maximum value.</p>
Full article ">Figure 6
<p>Comparison of selected results of magnetic field vector components acquired by all sensors for 1D scan along the 100% defect aligned at 0° to scanning direction obtained during FEM numerical simulations (<b>a</b>) and measurements (<b>b</b>).</p>
Full article ">Figure 7
<p>Selected results of reconstruction procedure obtained for: (<b>a</b>) different depth of the defects aligned at 0° (<span class="html-italic">r</span><sub>0</sub>); (<b>b</b>) different orientation angle and depth of 2 mm (<span class="html-italic">d</span><sub>2.0</sub>); the size of each reconstruction is 43 × 43.</p>
Full article ">Figure 8
<p>The block diagram of multi-class defect evaluation procedure from single point measurements.</p>
Full article ">Figure 9
<p>Schematic view of the deep convolutional neural network (<span class="html-italic">DCNN</span><sub>DND</sub>) architecture for evaluation of defect occurrence; layers: <span class="html-italic">IN</span>—input, <span class="html-italic">C</span>—convolutional, <span class="html-italic">MP</span>—max-pooling, <span class="html-italic">FC</span> &amp; <span class="html-italic">SM</span>—fully connected and softmax, <span class="html-italic">CL</span>—classification; <span class="html-italic">FM</span>—feature maps.</p>
Full article ">Figure 10
<p>Visualization of the <span class="html-italic">DCNN</span><sub>DND</sub> network’s third convolutional layer response for random inputs.</p>
Full article ">Figure 11
<p>Visualization of the neighborhood based class probability update algorithm for the <span class="html-italic">DCNN</span><sub>DND</sub> case.</p>
Full article ">Figure 12
<p>Visualization of the <span class="html-italic">DCNN</span><sub>DND</sub> class evaluation results: (<b>a</b>) before and (<b>b</b>) after utilization of class probability update algorithm; class 0—defect not sensed by the transducer, class 1—defect indicated by the transducer; white dashed line depicts the circumference of the transducer.</p>
Full article ">
14 pages, 3244 KiB  
Article
Joint Bearing and Range Estimation of Multiple Objects from Time-Frequency Analysis
by Jeng-Cheng Liu, Yuang-Tung Cheng and Hsien-Sen Hung
Sensors 2018, 18(1), 291; https://doi.org/10.3390/s18010291 - 19 Jan 2018
Cited by 6 | Viewed by 4664
Abstract
Direction-of-arrival (DOA) and range estimation is an important issue in sonar signal processing. In this paper, a novel approach using the Hilbert-Huang transform (HHT) is proposed for joint bearing and range estimation of multiple targets based on a uniform linear array (ULA) of hydrophones. The ULA is built with micro-electro-mechanical systems (MEMS) technology and thus has the attractive features of small size, high sensitivity and low cost, making it suitable for Autonomous Underwater Vehicle (AUV) operations. The proposed target localization method has the following advantages: only a single snapshot of data is needed and real-time processing is feasible. The proposed algorithm transforms a very complicated nonlinear estimation problem into a nearly linear one via time-frequency distribution (TFD) theory and is verified with the HHT. Theoretical discussion of the resolution issue is also provided to facilitate the design of a MEMS sensor with high sensitivity. Simulation results verify the effectiveness of the proposed method. Full article
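The step from a "complicated nonlinear estimation problem" to a "nearly linear one" can be sketched as follows: form the analytic signal of a single array snapshot, unwrap its phase across the sensors, differentiate to get a spatial instantaneous frequency (IF), and fit a line to it, as in the regression panels of Figure 2. The FFT-based analytic signal below is a stand-in for the full HHT (no empirical mode decomposition here), and the far-field test signal with d/λ = 0.5 is an assumption.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (numpy-only stand-in for a Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def spatial_if_line(snapshot):
    """Spatial IF across the array from one snapshot, plus a linear fit.
    For a near-field source the IF varies ~linearly with sensor index and
    slope/intercept map to range/bearing; that mapping is omitted here."""
    phase = np.unwrap(np.angle(analytic_signal(snapshot)))
    inst_freq = np.diff(phase) / (2 * np.pi)   # cycles per sensor spacing
    slope, intercept = np.polyfit(np.arange(len(inst_freq)), inst_freq, 1)
    return slope, intercept

# Far-field plane wave at 30 deg: constant spatial IF = (d/lambda)*sin(theta)
m = np.arange(64)
x = np.cos(2 * np.pi * 0.5 * np.sin(np.deg2rad(30.0)) * m)
slope, intercept = spatial_if_line(x)
```

For this far-field case the fitted line is flat at 0.25 cycles per sensor; a near-field source would add the nonzero slope from which range is recovered.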
(This article belongs to the Special Issue Smart Sensors for Mechatronic and Robotic Systems)
Show Figures

Figure 1

<p>The <span class="html-italic">k</span>th near-field target observed by an <span class="html-italic">M</span>-element uniform linear array.</p>
Full article ">Figure 2
<p>(<b>a</b>) The snapshot data; (<b>b</b>) The first IMF at the direction angle of 56°; (<b>c</b>) The second IMF at the direction angle of 10°; (<b>d</b>) Spatial IF (black) with linear regression line (red) at the angle of 56°; (<b>e</b>) Spatial IF (black) with linear regression line (red) at the angle of 10°.</p>
Full article ">Figure 2 Cont.
<p>(<b>a</b>) The snapshot data; (<b>b</b>) The first IMF at the direction angle of 56°; (<b>c</b>) The second IMF at the direction angle of 10°; (<b>d</b>) Spatial IF (black) with linear regression line (red) at the angle of 56°; (<b>e</b>) Spatial IF (black) with linear regression line (red) at the angle of 10°.</p>
Full article ">Figure 3
<p>The resulting IMFs (<b>a</b>,<b>b</b>) as well as spatial IFs (<b>c</b>,<b>d</b>) using HHT.</p>
Full article ">Figure 4
<p>Spatial spectrum obtained from FFT analysis.</p>
Full article ">Figure 5
<p>Fabrication process of the hydrophone linear array. (<b>a</b>) Schematic cross section illustration of a hydrophone chip. (<b>b</b>) Top view of this proposed hydrophone linear array.</p>
Full article ">Figure 6
<p>Experimental setup for underwater test of the fabricated hydrophone.</p>
Full article ">Figure 7
<p>Displacement at the sensing membrane in the 1–220 kHz range, with a peak-to-peak output amplitude of 200 mV for a single sensor in the hydrophone linear array.</p>
Full article ">Figure 8
<p>Frequency dependence of the output sound pressure of a device in the acoustic array at a bias of 300 mV.</p>
Full article ">
10 pages, 1485 KiB  
Article
Wide-Field Fluorescence Microscopy of Real-Time Bioconjugation Sensing
by Marcin Szalkowski, Karolina Sulowska, Justyna Grzelak, Joanna Niedziółka-Jönsson, Ewa Roźniecka, Dorota Kowalska and Sebastian Maćkowski
Sensors 2018, 18(1), 290; https://doi.org/10.3390/s18010290 - 19 Jan 2018
Cited by 9 | Viewed by 4476
Abstract
We apply wide-field fluorescence microscopy to measure the real-time attachment of photosynthetic proteins to plasmonically active silver nanowires. The observation of this effect is enabled, on the one hand, by sensitive detection of fluorescence and, on the other hand, by plasmonic enhancement of protein fluorescence. We examined two sample configurations, with the substrate being either a bare glass coverslip or a coverslip functionalized with a monolayer of streptavidin. The different preparation of the substrate changes the observed behavior regarding both the attachment of the protein and its subsequent photobleaching. For the latter substrate the conjugation process is measurably slower. The described method can be universally applied to studying protein-nanostructure interactions for real-time fluorescence-based sensing. Full article
(This article belongs to the Special Issue Novel Approaches to Biosensing with Nanoparticles)
Show Figures

Figure 1
<p>Absorption and emission spectra of Peridinin–Chlorophyll–Protein (PCP) (black and red lines, respectively) and extinction spectrum of AgNWs (blue line).</p>
Figure 2">
Figure 2
<p>(<b>A</b>) Transmission image of silver nanowires (AgNWs) prior to deposition of the PCP solution, (<b>B</b>) fluorescence image of AgNWs prior to deposition of the PCP solution, (<b>C</b>–<b>F</b>) fluorescence maps of PCP emission acquired at indicated times. The intensity scale of all the images is identical, and the solution of PCP complexes was added at <span class="html-italic">t</span> = 10 s.</p>
Figure 3">
Figure 3
<p>Schematic illustration of the structural differences between the two used substrates: bare glass (<b>A</b>) and streptavidin-covered glass (<b>B</b>). The fluorescence maps collected at 12.5 s (moment of maximum fluorescence intensity during the experiment) and at 300 s from the start of the measurement ((<b>C</b>,<b>D</b>), respectively) for bare glass substrate, and analogous fluorescence maps recorded at 30 s and 300 s from the start of the measurement for the streptavidin-covered glass ((<b>E</b>,<b>F</b>), respectively).</p>
Figure 4">
Figure 4
<p>Time-traces of fluorescence intensity measured at the ends (blue lines) and along the AgNWs (red and green lines for averaged and non-averaged data, respectively), as well as background signal level (black lines) measured for the bare glass (<b>A</b>) and streptavidin-functionalized glass (<b>B</b>) substrates.</p>
Figure 5">
Figure 5
<p>Histogram of fluorescence intensity of PCP complexes conjugated to the ends and centers of AgNWs (black and red bars, respectively) for the bare glass (<b>A</b>) and streptavidin-covered glass (<b>B</b>) substrates.</p>
">
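">
The time-traces reported in this paper (background-subtracted fluorescence intensity at the nanowire ends versus along the wire, recorded frame by frame after protein injection) amount to region-of-interest analysis of a wide-field image stack. The sketch below shows a minimal version of that analysis; the array shapes, ROI coordinates, and the synthetic injection-plus-photobleaching model are illustrative assumptions, not the authors' actual acquisition pipeline.

```python
import numpy as np

def roi_time_trace(stack, roi, background_roi):
    """Per-frame mean fluorescence in `roi`, minus the mean in `background_roi`.

    stack: (T, H, W) array of wide-field frames.
    roi / background_roi: (row_slice, col_slice) tuples.
    """
    signal = stack[:, roi[0], roi[1]].mean(axis=(1, 2))
    background = stack[:, background_roi[0], background_roi[1]].mean(axis=(1, 2))
    return signal - background

# Synthetic example: intensity at a nanowire site jumps when the protein
# solution is "added" at frame 10, then decays slowly (photobleaching).
rng = np.random.default_rng(0)
stack = rng.normal(100.0, 1.0, size=(50, 32, 32))  # camera background + noise
t = np.arange(50)
rise = np.where(t >= 10, 50.0 * np.exp(-(t - 10) / 40.0), 0.0)
stack[:, 10:14, 10:14] += rise[:, None, None]      # hypothetical nanowire ROI

trace = roi_time_trace(stack,
                       (slice(10, 14), slice(10, 14)),   # nanowire ROI
                       (slice(0, 4), slice(0, 4)))       # empty-glass ROI
```

The same function applied to an ROI at a wire end versus one along its body reproduces the comparison made in Figure 4 of the paper.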
16 pages, 3184 KiB  
Article
Mechanical Structural Design of a MEMS-Based Piezoresistive Accelerometer for Head Injuries Monitoring: A Computational Analysis by Increments of the Sensor Mass Moment of Inertia
by Marco Messina, James Njuguna and Chrysovalantis Palas
Sensors 2018, 18(1), 289; https://doi.org/10.3390/s18010289 - 19 Jan 2018
Cited by 16 | Viewed by 6757
Abstract
This work focuses on the proof-mass mechanical structural design improvement of a tri-axial piezoresistive accelerometer specifically designed for head injuries monitoring where medium-G impacts are common; for example, in sports such as racing cars or American Football. The device requires the highest sensitivity [...] Read more.
This work focuses on improving the proof-mass mechanical structural design of a tri-axial piezoresistive accelerometer specifically designed for head injuries monitoring, where medium-G impacts are common; for example, in sports such as motor racing or American football. The device requires the highest sensitivity achievable with a single proof-mass approach and a very low error (&lt;1%), as accuracy is paramount for these types of applications. The optimization method differs from previous work in that it is based on the progressive increment of the proof-mass mass moment of inertia (MMI) in all three axes. Three designs are presented, where at each step of design evolution the MMI of the proof-mass gradually increases in all axes. The work numerically demonstrates that an increment of MMI produces an increment of device sensitivity with a simultaneous reduction of cross-axis sensitivity in the particular axis under study. This is due to the link between the externally applied stress and the distribution of mass of the proof-mass, and therefore its mass moment of inertia. Progressively concentrating the mass on the axes where the piezoresistors are located (i.e., the x- and y-axes), by increasing the MMI in those axes, increases the longitudinal stresses in those areas for a given external acceleration, thereby increasing the piezoresistors' fractional resistance change and ultimately the sensor sensitivity. The final device shows a sensitivity increase of about 80% in the z-axis and a cross-axis sensitivity reduction of 18% with respect to state-of-the-art sensors from a previous work of the authors. Sensor design, modelling, and optimization are presented, followed by results, discussion, and conclusions. Full article
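The abstract's chain of reasoning (applied stress, then fractional resistance change, then bridge output) follows the standard piezoresistance and Wheatstone bridge relations. The symbols below (longitudinal and transverse piezoresistive coefficients and stresses) are textbook notation, not formulas quoted from the paper:

```latex
% Fractional resistance change of a piezoresistor under stress
\frac{\Delta R}{R} \approx \pi_{l}\,\sigma_{l} + \pi_{t}\,\sigma_{t}

% Output of a four-resistor Wheatstone bridge for small \Delta R_i
V_{\mathrm{out}} \approx \frac{V_{\mathrm{in}}}{4}
\left( \frac{\Delta R_{1}}{R_{1}} - \frac{\Delta R_{2}}{R_{2}}
     + \frac{\Delta R_{3}}{R_{3}} - \frac{\Delta R_{4}}{R_{4}} \right)
```

Concentrating the proof-mass along the x- and y-axes (raising the MMI) raises the longitudinal stress at the piezoresistor sites for a given acceleration, which is why sensitivity scales with MMI across the designs.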
(This article belongs to the Special Issue I3S 2017 Selected Papers)
Show Figures

Figure 1
<p>Mechanical structure of a triaxial accelerometer from the first author's Ph.D. thesis and patent [<a href="#B20-sensors-18-00289" class="html-bibr">20</a>,<a href="#B23-sensors-18-00289" class="html-bibr">23</a>]. (<b>a</b>) Isometric view with annotation of parts; (<b>b</b>) top view with locations of the piezoresistors.</p>
Figure 2">
Figure 2
<p>Top views of the mechanical structures. The optimization process increases the MMI at each step of evolution, which is expected to increase the sensitivity and reduce the cross-axis sensitivity.</p>
Figure 3">
Figure 3
<p>Percentage increment of MMI with respect to the original circular proof-mass device. Design Cross 3 offers the highest percentage increment of MMI.</p>
Figure 4">
Figure 4
<p>Measurement circuit design: (<b>a</b>) Piezoresistor locations on the top surface of the device. A total of 16 piezoresistors are used: 4 for the <span class="html-italic">x</span>-axis, 4 for the <span class="html-italic">y</span>-axis, and 8 for the <span class="html-italic">z</span>-axis; (<b>b</b>) <span class="html-italic">Ax</span>-, <span class="html-italic">Ay</span>-, and <span class="html-italic">Az</span>-Wheatstone bridge measurement circuits; (<b>c</b>) Stress distribution under <span class="html-italic">x</span>-axis acceleration. Piezoresistors are placed where the finite element stress distribution analysis locates the highest stress, in order to maximize sensor sensitivity (see axial coordinates). Notice that the piezoresistors cannot be placed along the oblique directions on the beams, despite the clearly high stress distribution there (in red), because the piezoresistance effect is weak in those directions.</p>
Figure 5">
Figure 5
<p>The mesh uses about 250,000 nodes. As can be seen, a higher number of nodes is concentrated on the beams, where a more accurate stress distribution is required. The structure is fixed at the bottom frame, and the applied load is an acceleration of 500 G along each axis. Notice the axial coordinates indicating the piezoresistor locations.</p>
Figure 6">
Figure 6
<p>Sensitivity increment of the new designs, in percentage. The highest increment is in the <span class="html-italic">z</span>-axis sensitivity of design Cross 3 (≈80%); overall, the sensitivity increases progressively from design Cross 1 to Cross 3, demonstrating the effect of MMI.</p>
Figure 7">
Figure 7
<p>Cross-axis sensitivity reduction comparison of each new design.</p>
">