Sensors, Volume 19, Issue 19 (October-1 2019) – 296 articles

Cover Story: A mobile system that can detect viruses in real time is urgently needed. The PAMONO (plasmon-assisted microscopy of nanosized objects) biosensor represents a viable technology for mobile real-time detection of viruses. It could be used for fast and reliable diagnoses in hospitals, airports, the open air, or other settings. For analysis of the images provided by the sensor, state-of-the-art methods based on convolutional neural networks (CNNs) can achieve high accuracy. Such computationally intensive methods, however, may not be suitable for most resource-constrained mobile systems. In this work, we propose nanoparticle classification approaches based on frequency domain analysis, which are significantly less resource-intensive. With our results, we identify the trade-off between resource efficiency and classification performance for nanoparticle classification.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
10 pages, 7883 KiB  
Article
Performance Analysis of Positioning Solution Using Low-Cost Single-Frequency U-Blox Receiver Based on Baseline Length Constraint
by Liguo Lu, Liye Ma, Tangting Wu and Xiaoyong Chen
Sensors 2019, 19(19), 4352; https://doi.org/10.3390/s19194352 - 8 Oct 2019
Cited by 21 | Viewed by 3574
Abstract
With the rapid development of the satellite navigation industry, low-cost and high-precision Global Navigation Satellite System (GNSS) positioning has recently become a research hotspot. The traditional application of GNSS may be further extended thanks to the low cost of measuring instruments, but effective methods are also desperately needed due to the low quality of the data obtained using these instruments. Thus, in this paper, we analyze and evaluate the ambiguity fixing rate and positioning accuracy of single-frequency Global Positioning System (GPS) and BeiDou Navigation Satellite System (BDS) data, collected from a low-cost u-blox receiver, based on the Constrained LAMBDA (CLAMBDA) method with a baseline length constraint, instead of the classical LAMBDA method. Three sets of experiments in different observation environments, including two sets of static short-baseline experiments and a set of dynamic vehicle experiments, are adopted in this paper. The experimental results show that, compared to the classical LAMBDA method, the CLAMBDA method can significantly improve the success rate of GNSS ambiguity resolution. When the ambiguity is fixed correctly, the baseline solution accuracy reaches 0.5 and 1 cm in a static scenario, and 1 and 2 cm on a dynamic platform. Full article
(This article belongs to the Section Remote Sensors)
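As a rough illustration of how a known baseline length can screen integer ambiguity candidates (the core idea behind CLAMBDA, shown here in a heavily simplified form), the sketch below assumes a float baseline b_float and float ambiguities a_float with covariance blocks Q_ba and Q_aa, plus a pre-computed candidate list; all of these names, the tolerance and the search strategy are placeholders, not the authors' implementation.

    import numpy as np

    def fix_with_length_constraint(b_float, a_float, Q_ba, Q_aa, candidates, L_known, tol=0.02):
        # For each integer ambiguity candidate z, compute the conditional ("fixed")
        # baseline and keep the candidate whose fixed baseline length best matches
        # the known antenna separation L_known (all quantities in metres).
        Q_aa_inv = np.linalg.inv(Q_aa)
        best, best_err = None, np.inf
        for z in candidates:
            b_fixed = b_float - Q_ba @ Q_aa_inv @ (a_float - z)
            err = abs(np.linalg.norm(b_fixed) - L_known)
            if err < tol and err < best_err:
                best, best_err = b_fixed, err
        return best   # None if no candidate satisfies the length constraint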
Figures:
Figure 1: Experimental scene of the static data.
Figure 2: Number of satellites (NSATs) and the position dilution of precision (PDOP) value. GB denotes GPS + BDS.
Figure 3: East (E)/north (N)/up (U) deviation sequence diagram (left: LAMBDA, right: CLAMBDA).
Figure 4: Data acquisition scenario (left: reference station, right: rover station).
Figure 5: Number of satellites (NSATs) and the PDOP value. GB denotes GPS + BDS.
Figure 6: E/N/U deviation sequence diagram (left: LAMBDA, right: CLAMBDA).
Figure 7: Experimental scene of the dynamic vehicle data.
Figure 8: Number of satellites (NSATs) and the PDOP value. GB denotes GPS + BDS.
Figure 9: E/N/U deviation sequence diagram (left: LAMBDA, right: CLAMBDA).
27 pages, 19080 KiB  
Article
Indoor Positioning on Disparate Commercial Smartphones Using Wi-Fi Access Points Coverage Area
by Imran Ashraf, Soojung Hur and Yongwan Park
Sensors 2019, 19(19), 4351; https://doi.org/10.3390/s19194351 - 8 Oct 2019
Cited by 38 | Viewed by 4604
Abstract
The applications of location-based services require precise location information of a user both indoors and outdoors. The global positioning system's reduced accuracy in indoor environments necessitated the development of Indoor Positioning Systems (IPSs). However, the development of an IPS which can determine the user's position with heterogeneous smartphones in the same fashion is a challenging problem. The performance of Wi-Fi fingerprinting-based IPSs is degraded by many factors including shadowing, absorption, and interference caused by obstacles, human mobility, and body loss. Moreover, the use of various smartphones and different orientations of the very same smartphone can limit its positioning accuracy as well. As Wi-Fi fingerprinting is based on the Received Signal Strength (RSS) vector, it is prone to dynamic intrinsic limitations of radio propagation, including changes over time and far-away locations having similar RSS vectors. This article presents a Wi-Fi fingerprinting approach that exploits Wi-Fi Access Point (AP) coverage areas and does not utilize the RSS vector. Using the concepts of AP coverage area uniqueness and coverage area overlap, the proposed approach calculates the user's current position with the help of the APs' intersection area. The experimental results demonstrate that the device dependency can be mitigated by making the fingerprinting database with the proposed approach. The experiments performed at a public place prove that positioning accuracy can also be increased because the proposed approach performs well in dynamic environments with human mobility. The impact of human body loss is studied as well. Full article
(This article belongs to the Section Physical Sensors)
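A minimal sketch of the coverage-area idea described in the abstract, assuming each AP's coverage can be approximated by a circle stored in a database; the AP positions, radii and grid below are made-up values for illustration, not the paper's data or exact algorithm.

    import numpy as np

    def estimate_position(heard_aps, ap_db, grid):
        # A grid point is a candidate if it lies inside the coverage circle of every
        # AP heard in the scan; the position estimate is the centroid of the
        # resulting intersection area.
        inside = np.ones(len(grid), dtype=bool)
        for ap in heard_aps:
            x, y, r = ap_db[ap]
            inside &= np.linalg.norm(grid - np.array([x, y]), axis=1) <= r
        return grid[inside].mean(axis=0) if inside.any() else None

    # usage with illustrative numbers (metres)
    grid = np.stack(np.meshgrid(np.linspace(0, 50, 101),
                                np.linspace(0, 30, 61)), axis=-1).reshape(-1, 2)
    ap_db = {"ap1": (10, 10, 15), "ap2": (25, 12, 15), "ap3": (18, 25, 15)}
    print(estimate_position(["ap1", "ap2", "ap3"], ap_db, grid))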
Figures:
Figure 1: Access Points (APs) and Received Signal Strength (RSS) values using four devices scanned from the same position: (a) LG G6, (b) LG G7, (c) S8 device 1, and (d) S8 device 2. Graphs are made using WiFi analyzer [22].
Figure 2: Received signal strength (RSS) values for different Wi-Fi APs in different directions scanned from the same position.
Figure 3: Received signal strength (RSS) values for different Wi-Fi APs with different phone orientations scanned from the same position.
Figure 4: The coverage area of individual APs in the experiment area. The RSS values change due to indoor environment structure.
Figure 5: AP locations in the experiment area.
Figure 6: (a) AP coverage in the experiment area. (b) AP coverage in the experiment area, top view.
Figure 7: The APs' coverage in the experiment area with a higher number of APs.
Figure 8: Intersection area based on Wi-Fi APs' coverage. The coverage area is based on the filtered RSS using the α threshold: (a) two APs, (b) three APs, (c) four APs, and (d) five APs.
Figure 9: Spatial proximity calculation using the distance between candidate positions.
Figure 10: The experiment setup for a small room (office environment).
Figure 11: Floor plan of test sites to evaluate the proposed approach: (a) CRC building and (b) RIC building.
Figure 12: Floor plan of test sites to evaluate the proposed approach: (a) IT building and (b) TE building.
Figure 13: The error graph for room-level positioning.
Figure 14: Distance error cumulative distribution function (CDF) for all buildings with all devices using the proposed approach: (a) IT building, (b) RIC building, (c) TE building, and (d) CRC building.
Figure 15: Prediction error against the AP intersection found for the error.
Figure 16: Distance error CDF for all buildings with different techniques using different smartphones: (a) S8 smartphone, (b) G6 smartphone, and (c) G7 smartphone.
Figure 17: Experiment path used in Starfield COnvention and EXhibition (COEX) center.
Figure 18: The error CDF for COEX center using different smartphones: (a) Galaxy S8 smartphone, (b) LG G6 smartphone, and (c) LG G7 smartphone.
Figure 19: The scenario where APs are geographically close.
Figure 20: The scenario with geographically scattered APs.
18 pages, 6586 KiB  
Article
INSPEX: Optimize Range Sensors for Environment Perception as a Portable System
by Julie Foucault, Suzanne Lesecq, Gabriela Dudnik, Marc Correvon, Rosemary O’Keeffe, Vincenza Di Palma, Marco Passoni, Fabio Quaglia, Laurent Ouvry, Steven Buckley, Jean Herveg, Andrea di Matteo, Tiana Rakotovao, Olivier Debicki, Nicolas Mareau, John Barrett, Susan Rea, Alan McGibney, François Birot, Hugues de Chaumont, Richard Banach, Joseph Razavi and Cian Ó’Murchú
Sensors 2019, 19(19), 4350; https://doi.org/10.3390/s19194350 - 8 Oct 2019
Cited by 7 | Viewed by 4892
Abstract
Environment perception is crucial for the safe navigation of vehicles and robots to detect obstacles in their surroundings. It is also of paramount interest for navigation of human beings in reduced visibility conditions. Obstacle avoidance systems typically combine multiple sensing technologies (i.e., LiDAR, radar, ultrasound and visual) to detect various types of obstacles under different lighting and weather conditions, with the drawbacks of a given technology being offset by others. These systems require powerful computational capability to fuse the mass of data, which limits their use to high-end vehicles and robots. INSPEX delivers a low-power, small-size and lightweight environment perception system that is compatible with portable and/or wearable applications. This requires miniaturizing and optimizing existing range sensors of different technologies to meet the user’s requirements in terms of obstacle detection capabilities. These sensors consist of a LiDAR, a time-of-flight sensor, an ultrasound and an ultra-wideband radar with measurement ranges respectively of 10 m, 4 m, 2 m and 10 m. Integration of a data fusion technique is also required to build a model of the user’s surroundings and provide feedback about the localization of harmful obstacles. As primary demonstrator, the INSPEX device will be fixed on a white cane. Full article
(This article belongs to the Special Issue Wearable Sensors and Devices for Healthcare Applications)
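The occupancy-grid fusion mentioned in the abstract can be illustrated with a one-dimensional log-odds grid along the cane axis; the cell size, sensor ranges and hit/miss probabilities below are illustrative values, not INSPEX parameters or the SigmaFusion implementation.

    import numpy as np

    CELL = 0.05                                  # 5 cm cells over a 10 m corridor
    L_OCC, L_FREE = np.log(0.7 / 0.3), np.log(0.3 / 0.7)

    def integrate_range(grid, measured_range, max_range):
        # Cells in front of the detected obstacle are observed "empty";
        # the cell at the measured range is observed "occupied".
        hit = int(measured_range / CELL)
        grid[:hit] += L_FREE
        if measured_range < max_range:
            grid[hit] += L_OCC
        return grid

    grid = np.zeros(int(10 / CELL))              # log-odds, 0 means "unknown" (p = 0.5)
    grid = integrate_range(grid, 1.80, 2.0)      # ultrasound reading (2 m max range)
    grid = integrate_range(grid, 1.85, 4.0)      # depth-camera reading (4 m max range)
    prob = 1.0 - 1.0 / (1.0 + np.exp(grid))      # back to occupancy probabilities
    print(round(np.argmax(prob) * CELL, 2), round(prob.max(), 2))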
Figures:
Figure 1: Left: primary demonstrator. Right: safety cocoon offered by the INSPEX system.
Figure 2: INSPEX methodology.
Figure 3: Arrangement of the sensors to cover the whole person's height.
Figure 4: Ultrasound prototype module.
Figure 5: Ultrasound module components: (a) transducer board; (b) main board; (c) ultrasound module; (d) ultrasound module with cone-shaped mask.
Figure 6: Measured distance vs. reference and its error.
Figure 7: Long-range LiDAR prototype brought to the project.
Figure 8: Optimized long-range LiDAR components: (a) motherboard; (b) optics board; (c) module; (d) module with optics.
Figure 9: Depth camera (time-of-flight) module.
Figure 10: The depth camera (time-of-flight) module can detect objects located at a distance of up to 4 m under controlled conditions.
Figure 11: Ultra-wideband radar prototype (left, middle) and its response over time (3 snapshots) (right).
Figure 12: Ultra-wideband radar module: (a) antenna board; (b) RF board; (c) digital board.
Figure 13: Example of performance verification with a human walking towards the radar.
Figure 14: Experiment setup: the preliminary INSPEX prototype is fixed on a white cane; the device tries to detect obstacles (walls, objects, people) in a corridor.
Figure 15: Ultrasound (left) and depth camera (right) measurements together with their respective occupancy grids computed with SigmaFusion™. The color of each cell encodes the occupancy probability (black: "occupied"; light grey: "empty"; grey: "unknown").
Figure 16: Occupancy grid computed from both the ultrasound and the depth camera measurements.
Figure 17: Mockup of the integrated INSPEX device.
20 pages, 4835 KiB  
Article
Beehive-Inspired Information Gathering with a Swarm of Autonomous Drones
by Alberto Viseras, Thomas Wiedemann, Christoph Manss, Valentina Karolj, Dmitriy Shutin and Juan Marchal
Sensors 2019, 19(19), 4349; https://doi.org/10.3390/s19194349 - 8 Oct 2019
Cited by 14 | Viewed by 5463
Abstract
This paper presents a beehive-inspired multi-agent drone system for autonomous information collection to support the needs of first responders and emergency teams. The proposed system is designed to be simple, cost-efficient, yet robust and scalable at the same time. It includes several unmanned aerial vehicles (UAVs) that can be tasked with data collection, and a single control station that acts as a data accumulation and visualization unit. The system also provides a local communication access point for the UAVs to exchange information and coordinate the data collection routes. By avoiding peer-to-peer communication and using proactive collision avoidance and path-planning, the payload weight and per-drone costs can be significantly reduced; the whole concept can be implemented using inexpensive off-the-shelf components. Moreover, the proposed concept can be used with different sensors and types of UAVs. As such, it is suited for local-area operations, but also for large-scale information-gathering scenarios. The paper outlines the details of the system hardware and software design, and discusses experimental results for collecting image information with a set of 4 multirotor UAVs in a small experimental area. The obtained results validate the concept and demonstrate the robustness and scalability of the system. Full article
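The coordination scheme described above (no peer-to-peer links; the base station hands out regions and re-assigns the route of a failed drone) can be sketched as a small bookkeeping routine; the region names, timeout and record fields are illustrative, not the authors' database schema.

    import time

    regions = {r: {"state": "open", "uav": None, "t": 0.0} for r in ("R1", "R2", "R3")}

    def request_region(uav_id, timeout=300.0):
        # Called by a UAV when it connects to the base station's access point.
        now = time.time()
        for info in regions.values():            # recycle regions of silent (crashed) drones
            if info["state"] == "taken" and now - info["t"] > timeout:
                info.update(state="open", uav=None)
        for name, info in regions.items():
            if info["state"] == "open":
                info.update(state="taken", uav=uav_id, t=now)
                return name
        return None                              # nothing left to survey

    def report_done(uav_id, region):
        # Called after the UAV has uploaded its images for the region.
        regions[region].update(state="done", uav=uav_id)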
Figures:
Figure 1: System block diagram.
Figure 2: Base station workflow.
Figure 3: Three snapshots of the map discretization algorithm for partitioning the ROI into three regions (red, blue, green). The green circumference delimits the drones' starting position. Obstacles are marked in gray.
Figure 4: GUI for swarm control and data visualization.
Figure 5: Algorithm for autonomous information gathering. Orange boxes indicate algorithm steps in which the UAV is gathering information and is, therefore, not connected to the base station. White boxes indicate steps in which the UAV shall be connected to the base station.
Figure 6: Simulation results that evaluate the total mission time in terms of the number of UAVs in the system and of the system's setup time. For all simulations we assume a ROI of constant size. (a) We divide the ROI into 8 regions. (b) We divide the ROI into 3 regions.
Figure 7: Experimental setup to evaluate our system design. (a) Aerial image of the area in which we carried out the experiments. (b) Three drones flying while they take images of a ROI. (c) One of the Astec Hummingbird quadcopters that we used for the experiments.
Figure 8: Nominal and actual trajectories of a system composed of 3 drones and 3 regions. (a) Trajectories as stored in the DB. (b) Trajectories recorded by each drone's GPS receiver. In (a) we plot trajectories in a local coordinate frame, while in (b) we plot them in a global coordinate frame.
Figure 9: Algorithm scalability with respect to the number of drones. We evaluate a system of 2, 3 and 4 drones. (a) Time needed to observe a certain number of square meters. (b) Time required for each UAV to travel a certain distance.
Figure 10: Nominal and actual trajectories of a system composed of 4 drones. (a) Trajectories as stored in the DB. (b) Trajectories recorded by each drone's GPS receiver. In (a) we plot trajectories in a local coordinate frame, while in (b) we plot them in a global coordinate frame. Here we can observe that Otto crashed, as it was not able to complete the route. Charles, followed by Hans, took over Otto's route, which demonstrates the system's robustness against a drone crash.
Figure 11: Assignment of regions for three different experiments. For each of the drones we represent the instant of time at which they were assigned to a region (R1, R2, or R3). We depict results for one experiment that was successfully completed (a), and for one experiment in which a drone crashed (b).
19 pages, 1571 KiB  
Article
Machine Learning for LTE Energy Detection Performance Improvement
by Małgorzata Wasilewska and Hanna Bogucka
Sensors 2019, 19(19), 4348; https://doi.org/10.3390/s19194348 - 8 Oct 2019
Cited by 18 | Viewed by 4085
Abstract
The growing number of radio communication devices and limited spectrum resources are drivers for the development of new techniques of dynamic spectrum access and spectrum sharing. In order to make use of the spectrum opportunistically, the concept of cognitive radio was proposed, where intelligent decisions on transmission opportunities are based on spectrum sensing. In this paper, two Machine Learning (ML) algorithms, namely k-Nearest Neighbours and Random Forest, have been proposed to increase spectrum sensing performance. These algorithms have been applied to Energy Detection (ED) and Energy Vector-based data (EV) to detect the presence of a Fourth Generation (4G) Long-Term Evolution (LTE) signal for the purpose of utilizing the available resource blocks by a 5G new radio system. The algorithms capitalize on time, frequency and spatial dependencies in daily communication traffic. Research results show that the ML methods used can significantly improve the spectrum sensing performance if the input training data set is carefully chosen. The input data sets with ED decisions and energy values have been examined, and advantages and disadvantages of their real-life application have been analyzed. Full article
(This article belongs to the Special Issue Intelligent Sensor Signal in Machine Learning)
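A toy version of the two inputs compared in the abstract (hard energy-detection decisions versus raw energy values fed to a classifier) can be sketched with scikit-learn; the number of blocks, the SNR, the chi-square energy model and the feature layout are illustrative assumptions, and the time/frequency/spatial correlations the paper exploits are omitted.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_blocks = 4000, 12
    occupied = rng.integers(0, 2, size=n_samples)            # is the monitored block in use?
    # Per-block energies: noise-only energy follows a chi-square(2) law,
    # an occupied monitored block adds signal energy (assumed ~0 dB SNR).
    energy = rng.chisquare(df=2, size=(n_samples, n_blocks))
    energy[:, 0] += occupied * 2.0

    # Plain energy detection on the monitored block, threshold set for ~10 % false alarms
    threshold = np.quantile(rng.chisquare(df=2, size=200000), 0.90)
    ed_decision = energy[:, 0] > threshold

    # Random Forest trained on the full energy vector (the "EV"-style input)
    X_tr, X_te, y_tr, y_te = train_test_split(energy, occupied, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("ED P_d:", round(ed_decision[occupied == 1].mean(), 3))
    print("RF accuracy:", round(clf.score(X_te, y_te), 3))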
Figures:
Figure 1: k-Nearest Neighbors—visualization of the closest data points for different k values.
Figure 2: Decision tree—tree with depth 3.
Figure 3: System model.
Figure 4: LTE Resource Block features.
Figure 5: Transmitted LTE Resource Blocks.
Figure 6: Probability of detection P_d for the Energy Detection stage for target P_fa = 10%, 2% and 0.5%.
Figure 7: Resulting probability of detection P_d and probability of false alarm P_fa of the Energy Detection-based k-Nearest Neighbors method for target P_fa = 10%.
Figure 8: Resulting probability of detection P_d and probability of false alarm P_fa of the Energy Detection-based Random Forest method for target P_fa = 10%.
Figure 9: Resulting probability of detection P_d and probability of false alarm P_fa of the Energy Vector-based k-Nearest Neighbors method for target P_fa = 10%.
Figure 10: Resulting probability of detection P_d and probability of false alarm P_fa of the Energy Vector-based Random Forest method for target P_fa = 10%.
Figure 11: Probability of detection P_d comparison of the Energy Detection-based and Energy Vector-based k-Nearest Neighbors and Random Forest methods for target P_fa = 10%.
Figure 12: Probability of detection P_d comparison of the Energy Detection-based and Energy Vector-based k-Nearest Neighbors and Random Forest methods for different assumed target P_fa.
Figure 13: SNR values resulting from the shadowing effect in the considered area.
Figure 14: P_d and P_fa for different locations.
Figure 15: P_d and P_fa for the k-Nearest Neighbors method applied in different locations. (a) P_d and P_fa surfaces for Energy Detection-based k-Nearest Neighbors; (b) P_d and P_fa surfaces for Energy Vector-based k-Nearest Neighbors.
Figure 16: Resulting probabilities P_d and P_fa of Energy Detection-based k-Nearest Neighbors compared with Energy Vector-based k-Nearest Neighbors for target P_fa = 10% for different SNR values with a shadowing channel.
Figure 17: Probabilities P_d and P_fa in different locations for the applied Random Forest method. (a) P_d and P_fa surfaces for Energy Detection-based Random Forest; (b) P_d and P_fa surfaces for Energy Vector-based Random Forest.
Figure 18: Resulting probabilities P_d and P_fa of applied Energy Detection-based Random Forest compared with Energy Vector-based Random Forest for target P_fa = 10% for different SNR values with a shadowing channel.
Figure 19: Probabilities P_d and P_fa in different locations for the applied k-Nearest Neighbors, Random Forest, Gaussian Naive Bayes and Support Vector Machine classifier methods. (a) P_d and P_fa surfaces for Energy Detection-based Machine Learning algorithms; (b) P_d and P_fa surfaces for Energy Vector-based Machine Learning algorithms.
14 pages, 9063 KiB  
Article
Improving the GRACE Kinematic Precise Orbit Determination Through Modified Clock Estimating
by Xingyu Zhou, Weiping Jiang, Hua Chen, Zhao Li and Xuexi Liu
Sensors 2019, 19(19), 4347; https://doi.org/10.3390/s19194347 - 8 Oct 2019
Cited by 13 | Viewed by 3594
Abstract
Utilizing the global positioning system (GPS) to determine the precise kinematic orbits for the twin satellites of the Gravity Recovery and Climate Experiment (GRACE) plays a very important role in the earth's gravitational and other scientific fields. However, the orbit quality is highly dependent on the geometry of the observed GPS satellites. In this study, we propose a kinematic orbit determination method for improving the GRACE orbit quality, especially when the geometry of observed GPS satellites is weak, where an appropriate random walk clock constraint between adjacent epochs is recommended according to the stability of the on-board GPS receiver clocks. GRACE data over one month were adopted in the experimental validation. Results show that the proposed method could improve the root mean square (RMS) by 20–40% in the radial component and 5–20% in the along and cross components. For those epochs with position dilution of precision (PDOP) larger than 4, the orbits were improved by 50–70% in the radial component and 17–50% in the along and cross components. Meanwhile, the Allan deviation of clock estimates in the proposed method was much closer to the reported Allan deviation of the GRACE on-board oscillator. All the results confirmed the improvement of the proposed method. Full article
(This article belongs to the Special Issue GNSS Data Processing and Navigation)
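A random-walk clock constraint amounts to treating the receiver clock offset as a state that may drift only slowly between adjacent epochs. The scalar Kalman-style sketch below illustrates this; the process-noise level, epoch spacing and observation variance are illustrative assumptions, not the values derived in the paper.

    import numpy as np

    Q_RW = 1.0e-4      # assumed random-walk process noise of the clock, m^2 per second
    TAU = 10.0         # epoch spacing in seconds

    def propagate_clock(dt_prev, var_prev):
        # Prediction: the clock offset is carried over, its uncertainty grows by Q_RW * TAU.
        return dt_prev, var_prev + Q_RW * TAU

    def update_clock(dt_pred, var_pred, dt_obs, var_obs):
        # Fuse the prediction with the clock offset implied by the current epoch's
        # GPS observations (dt_obs, var_obs), both expressed in metres.
        gain = var_pred / (var_pred + var_obs)
        return dt_pred + gain * (dt_obs - dt_pred), (1.0 - gain) * var_pred

    dt, var = 0.0, 1.0e6                          # start with an essentially unconstrained clock
    for dt_obs in (12.3, 12.5, 30.1):             # a jump at a weak-geometry epoch is damped
        dt, var = propagate_clock(dt, var)
        dt, var = update_clock(dt, var, dt_obs, var_obs=4.0)
        print(round(dt, 2), round(var, 3))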
Figures:
Figure 1: PDOP, GDOP, number of satellites, clock offsets and radial-along-cross residuals, float solution and float + random walk (RW) solution, Gravity Recovery and Climate Experiment (GRACE)-B, 1 February 2017.
Figure 2: Daily RMS of GRACE-A and GRACE-B, float solution with and without RW constraints, 1–28 February 2017.
Figure 3: Allan deviation of clock offset estimates, float solution with and without RW constraints, GRACE-B, 1 February 2007.
Figure 4: RMS for epochs with PDOP greater than 4, float solution with and without RW constraints, GRACE-B, 1–28 February 2017.
Figure 5: Ambiguities, PDOP, GDOP, number of satellites, clock offsets and radial-along-cross residuals, float, precise point positioning ambiguity resolution (PPP-AR) and PPP-AR + RW, GRACE-B, 1 February 2007.
Figure 6: Daily RMS of GRACE-A and GRACE-B, PPP-AR solution with and without RW constraints, 1–28 February 2007.
Figure 7: Allan deviation of clock offset estimates, PPP-AR solution with and without RW constraints, GRACE-B, 1 February 2007.
Figure 8: PDOP, GDOP, number of GPS satellites, radial-along-cross residuals, float, float + RW and float + RW (post-processing), GRACE-A, 1 February 2007.
Figure 9: PDOP, GDOP, number of GPS satellites, radial-along-cross residuals, float, float + RW and float + RW (post-processing), GRACE-B, 1 February 2007.
Figure 10: DOPs, number of GPS satellites, radial-along-cross residuals, float, float + RW and float + RW (post-processing), GRACE-B, 00:00–00:40, 1 February 2007.
7 pages, 1677 KiB  
Article
Improved Optical Waveguide Microcantilever for Integrated Nanomechanical Sensor
by Yachao Jing, Guofang Fan, Rongwei Wang, Zeping Zhang, Xiaoyu Cai, Jiasi Wei, Xin Chen, Hongyu Li and Yuan Li
Sensors 2019, 19(19), 4346; https://doi.org/10.3390/s19194346 - 8 Oct 2019
Cited by 5 | Viewed by 2880
Abstract
This paper reports on an improved optical waveguide microcantilever sensor with high sensitivity. To improve the sensitivity, a buffer was introduced into the connection of the input waveguide and optical waveguide cantilever by extending the input waveguide to reduce the coupling loss of the junction. The buffer-associated optical losses were examined for different cantilever thicknesses. The optimum length of the buffer was found to be 0.97 μm for a cantilever thickness of 300 nm. With this configuration, the optical loss was reduced to about 40%, and the maximum sensitivity was more than twice that of the conventional structure. Full article
(This article belongs to the Special Issue Nanomechanical Sensors)
Figures:
Figure 1: Schematic of the optical waveguide sensors: (a) conventional structure and (b) improved structure.
Figure 2: Effective index with the thickness of the optical cantilever.
Figure 3: Electric field distribution for the input waveguide and cantilever of (a) the conventional structure and (b) the improved structure with a buffer.
Figure 4: The optical loss in the air with the buffer length for different OWC thicknesses.
Figure 5: The optical coupling efficiency with the fabrication tolerances of the 0.97 μm length buffer.
Figure 6: (a) The coupling efficiency. (b) The sensitivity with the displacement of the cantilever for the conventional structure and the improved structure with a buffer.
10 pages, 2820 KiB  
Article
Gold-Film-Thickness Dependent SPR Refractive Index and Temperature Sensing with Hetero-Core Optical Fiber Structure
by Rui Zhang, Shengli Pu and Xinjie Li
Sensors 2019, 19(19), 4345; https://doi.org/10.3390/s19194345 - 8 Oct 2019
Cited by 58 | Viewed by 6113
Abstract
A simple hetero-core optical fiber (MMF-NCF-MMF) surface plasmon resonance (SPR) sensing structure was proposed. The SPR spectral sensitivity, full width of half peak (FWHM), valley depth (VD), and figure of merit (FOM) were defined to evaluate the sensing performance comprehensively. The effect of gold film thickness on the refractive index and temperature sensing performance was studied experimentally. The optimum gold film thickness was found. The maximum sensitivities for refractive index and temperature measurement were obtained to be 2933.25 nm/RIU and −0.91973 nm/°C, respectively. The experimental results are helpful to design the SPR structure with improved sensing performance. The proposed SPR sensing structure has the advantages of simple structure, easy implementation, and good robustness, which implies a broad application prospect. Full article
(This article belongs to the Section Physical Sensors)
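The performance metrics named in the abstract can be computed directly from transmission spectra. The sketch below assumes the SPR dip is a single minimum and that the spectra (wavelengths in nm, refractive indices in RIU) are provided as inputs; it is a generic illustration, not the paper's processing code.

    import numpy as np

    def resonance_wavelength(wl, transmission):
        # Wavelength of the SPR dip (the transmission minimum).
        return wl[np.argmin(transmission)]

    def fwhm(wl, transmission):
        # Width of the dip at half of its depth relative to the baseline.
        t_min, t_base = transmission.min(), np.median(transmission)
        half = t_min + 0.5 * (t_base - t_min)
        in_dip = wl[transmission <= half]
        return in_dip.max() - in_dip.min()

    def sensitivity_and_fom(wl, spec_n1, spec_n2, n1, n2):
        # Spectral sensitivity in nm/RIU from two spectra at known indices n1 < n2,
        # and figure of merit FOM = sensitivity / FWHM.
        sens = (resonance_wavelength(wl, spec_n2) - resonance_wavelength(wl, spec_n1)) / (n2 - n1)
        return sens, sens / fwhm(wl, spec_n2)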
Figures:
Figure 1: (a) Schematic of the proposed sensor; (b) picture of the as-fabricated structure coated with gold film.
Figure 2: (a) Typical screenshot of the step profiler for measuring the gold film thickness with a sputtering time of 90 s; (b) gold film thickness as a function of sputtering time.
Figure 3: Experimental setup for investigating the sensing properties of the simple hetero-core optical fiber (MMF-NCF-MMF) structure.
Figure 4: Transmission spectra of the sensing structure at different surrounding refractive indices (a) and ambient temperatures (b). The thickness of the gold film is 25.753 nm.
Figure 5: Resonance wavelength as a function of surrounding refractive index (a) and ambient temperature (b) at different gold film thicknesses.
Figure 6: Refractive index and temperature sensitivities as functions of gold film thickness.
Figure 7: Gold film thickness dependence of (a) full width of half peak (FWHM), (b) valley depth (VD), and (c) figure of merit (FOM) during refractive index measurement.
Figure 8: Gold film thickness dependence of FOM during temperature measurement.
17 pages, 5350 KiB  
Article
Unsupervised Moving Object Segmentation from Stationary or Moving Camera Based on Multi-frame Homography Constraints
by Zhigao Cui, Ke Jiang and Tao Wang
Sensors 2019, 19(19), 4344; https://doi.org/10.3390/s19194344 - 8 Oct 2019
Cited by 4 | Viewed by 4067
Abstract
Moving object segmentation is the most fundamental task for many vision-based applications. In the past decade, it has been performed for stationary cameras or moving cameras separately. In this paper, we show that moving object segmentation can be addressed in a unified framework for both types of cameras. The proposed method consists of two stages: (1) In the first stage, a novel multi-frame homography model is generated to describe the background motion. Then, the inliers and outliers of that model are classified as background trajectories and moving object trajectories by the designed cumulative acknowledgment strategy. (2) In the second stage, a super-pixel-based Markov Random Fields model is used to refine the spatial accuracy of the initial segmentation and obtain the final pixel-level labeling, which integrates trajectory classification information, a dynamic appearance model, and spatial-temporal cues. The proposed method overcomes the limitations of existing object segmentation algorithms and resolves the difference between stationary and moving cameras. The algorithm is tested on several challenging open datasets. Experiments show that the proposed method presents significant performance improvement over state-of-the-art techniques quantitatively and qualitatively. Full article
(This article belongs to the Section Intelligent Sensors)
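The first stage can be illustrated with OpenCV for a single frame pair: a homography fitted with RANSAC explains the dominant background (camera) motion, so its inliers behave as background trajectories and its outliers as moving-object candidates. The multi-frame model and the cumulative acknowledgment strategy of the paper are omitted; the synthetic points below are only a usage example.

    import numpy as np
    import cv2

    def classify_tracks(pts_prev, pts_cur, reproj_thresh=3.0):
        # pts_prev, pts_cur: N x 2 float32 arrays of matched track positions
        # in two consecutive frames.
        H, mask = cv2.findHomography(pts_prev, pts_cur, cv2.RANSAC, reproj_thresh)
        is_background = mask.ravel().astype(bool)
        return H, is_background            # ~is_background marks moving-object points

    # usage: background points follow one global translation, two extra points
    # move independently and should come out as outliers
    rng = np.random.default_rng(1)
    bg_prev = rng.uniform(0, 640, size=(40, 2)).astype(np.float32)
    bg_cur = bg_prev + np.float32([5.0, 2.0])
    fg_prev = np.float32([[100, 100], [300, 200]])
    fg_cur = fg_prev + np.float32([40.0, -25.0])
    H, is_bg = classify_tracks(np.vstack([bg_prev, fg_prev]), np.vstack([bg_cur, fg_cur]))
    print(is_bg.sum(), "background points,", (~is_bg).sum(), "moving-object points")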
Figures:
Graphical abstract.
Figure 1: The framework. Our method takes a raw video as input, and produces a binary labeling as the output. Two major steps are the initial multi-frame homography model for trajectory classification (left) and the final Markov random fields model for foreground/background labeling (right).
Figure 2: Illustration of the temporal partition of the input video.
Figure 3: Example of initial trajectory classification and final pixel labeling. The selected image is from the cars 2 video sequence of the Hopkins 155 dataset [40]. (a) Example of initial trajectory classification. (b) Example of final pixel-level labeling.
Figure 4: Trajectory classification results on the cars 1 (column 1), cars 5 (column 2), people 2 (column 3), vperson (column 4), backyard (column 5), and highway (column 6) sequences using our method (row 2) and Sheikh et al. [22] (row 3). For visualization purposes, the foreground and background trajectories are shown in purple and blue, respectively.
Figure 5: Experimental results of final pixel-level labeling on the cars 3 (row 1), cars 4 (row 2), people 1 (row 3), people 2 (row 4), and pets 2006 (row 5) sequences. (Column 1) Input image. (Column 2) Ground truth. (Column 3) Sheikh et al. [22]. (Column 4) Zhu et al. [26]. (Column 5) Chiranjoy et al. [31]. (Column 6) Our algorithm.
17 pages, 1507 KiB  
Article
An Amplifier-Less Acquisition Chain for Power Measurements in Series Resonant Inverters
by Jorge Villa, José I. Artigas, Luis A. Barragán and Denis Navarro
Sensors 2019, 19(19), 4343; https://doi.org/10.3390/s19194343 - 8 Oct 2019
Cited by 2 | Viewed by 2856
Abstract
Successive approximation register (SAR) analog-to-digital converter (ADC) manufacturers recommend the use of a driver amplifier to achieve the best performance. When a driver amplifier is not used, the conversion speed is severely penalized because of the need to meet the settling time constraint. This paper proposes a simple digital correction method to raise the performance (conversion speed and/or accuracy) when the acquisition chain lacks a driver amplifier. It is intended to reduce the cost, size and power consumption of the conditioning circuit while maintaining acceptable performance. The method is applied to the measurement of the output power delivered by a series resonant inverter for domestic induction heating. Full article
(This article belongs to the Special Issue Electronic Interfaces for Sensors)
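The kind of correction involved can be illustrated with a first-order settling model: without a driver amplifier, the sampled voltage is a mix of the current input and the voltage left on the sample-and-hold from the previous conversion, and that mix can be inverted digitally. The formula and the value of k below are a generic illustration of incomplete settling, not necessarily the paper's exact correction.

    import numpy as np

    # First-order (RC) incomplete-settling model:
    #   v_raw = v_in + (v_prev - v_in) * k,   with k = exp(-t_acq / tau)
    # so the true input can be recovered from the raw sample and the previous value.
    def correct_sample(v_raw, v_prev, k):
        return (v_raw - k * v_prev) / (1.0 - k)

    k = np.exp(-5.0)                 # assume the acquisition time equals 5 RC time constants
    v_prev, v_in = 2.0, 0.5
    v_raw = v_in + (v_prev - v_in) * k                    # what the SAR ADC would capture
    print(round(correct_sample(v_raw, v_prev, k), 6))     # recovers 0.5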
Figures:
Figure 1: Successive approximation register (SAR) analog-to-digital converter (ADC) input driving circuit recommended by manufacturers.
Figure 2: Domestic induction heating power converter: (a) schematic; (b) output waveforms.
Figure 3: Circuit analysis: (a) front-end and sample-and-hold (SH) schematic, and (b) waveforms.
Figure 4: Acquisition chain for the inverter output voltage.
Figure 5: Octave code for the voltage calibration method.
Figure 6: Acquisition chain for the inverter load current.
Figure 7: Experimental setup of the system.
Figure 8: Acquired voltage fitting for zero output voltage.
Figure 9: Acquired current fitting for zero load current.
Figure 10: Experimental waveforms captured through the oscilloscope: Channel 1 (blue), v_O; Channel 4 (red), i_L, for switching frequencies of (a) 35 kHz, (b) 50 kHz, and (c) 70 kHz.
Figure 11: Voltage, current and power errors with raw data and with the correction method using nominal values, for switching frequencies of (a) 35 kHz, (b) 50 kHz, and (c) 70 kHz.
Figure 12: Nominal versus optimized voltage, current and power errors for switching frequencies of (a) 35 kHz, (b) 50 kHz, and (c) 70 kHz.
Figure 13: Voltage error for several V_SH0.
Figure 14: Spectral quality parameters of the ADC for different input frequencies and acquisition times: (a) signal-to-noise and distortion ratio (SINAD), (b) spurious-free dynamic range (SFDR).
17 pages, 4811 KiB  
Article
Prediction of Motor Failure Time Using An Artificial Neural Network
by Gustavo Scalabrini Sampaio, Arnaldo Rabello de Aguiar Vallim Filho, Leilton Santos da Silva and Leandro Augusto da Silva
Sensors 2019, 19(19), 4342; https://doi.org/10.3390/s19194342 - 8 Oct 2019
Cited by 54 | Viewed by 14275
Abstract
Industry is constantly seeking ways to avoid corrective maintenance so as to reduce costs. Performing regular scheduled maintenance can help to mitigate this problem, but not necessarily in the most efficient way. In the context of condition-based maintenance, the main contributions of this work were to propose a methodology to treat and transform the collected data from a vibration system that simulated a motor and to build a dataset to train and test an Artificial Neural Network capable of predicting the future condition of the equipment, pointing out when a failure can happen. To achieve this goal, a device model was built to simulate typical motor vibrations, consisting of a computer cooler fan and several magnets. Measurements were made using an accelerometer, and the data were collected and processed to produce a structured dataset. The neural network training with this dataset converged quickly and stably, while the tests performed, k-fold cross-validation and model generalization, presented excellent performance. The same tests were performed with other machine learning techniques, to demonstrate the effectiveness of neural networks mainly in their generalizability. The results of the work confirm that it is possible to use neural networks to perform predictive tasks in relation to the conditions of industrial equipment. This is an important area of study that helps to support the growth of smart industries. Full article
(This article belongs to the Special Issue Sensor Technologies for Smart Industry and Smart Infrastructure)
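The pre-processing summarized in the abstract (one amplitude/frequency pair per measuring window, obtained from the FFT of the vibration signal) can be sketched as follows; the windowing, the amplitude weighting and the exact RMS definitions are a plausible reading of the description, not the authors' code.

    import numpy as np

    def window_features(signal, fs):
        # FFT of one measuring window, then a single (RMS amplitude, RMS frequency)
        # pair summarizing it; the frequency is weighted by the normalized amplitudes.
        spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
        amps = 2.0 * np.abs(spectrum) / len(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        rms_amp = np.sqrt(np.mean(amps ** 2))
        rms_freq = np.sqrt(np.sum((amps / amps.sum()) * freqs ** 2))
        return rms_amp, rms_freq

    # usage: a 1-second window of a 30 Hz "vibration" with some noise, sampled at 1 kHz
    fs = 1000.0
    t = np.arange(0, 1.0, 1.0 / fs)
    window = 0.8 * np.sin(2 * np.pi * 30 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
    print(window_features(window, fs))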
Figures:
Figure 1: Data collection and pre-processing flow chart.
Figure 2: Device model developed to simulate vibrations in motors.
Figure 3: Weight distribution configurations between the cooler's blades, performed to collect different vibration behaviors.
Figure 4: Vibration signals from two different and sequential measuring windows.
Figure 5: Process of simplification of the measured vibration signal. (a) Vibration signal collected from a measuring window; (b) application of the Fourier transform to the collected signal, generating all pairs of amplitude and frequency present in the signal; (c) calculation of the RMS value of the amplitudes and frequencies, generating only one pair for each measuring window.
Figure 6: Schematic of the vibration signal transformation to generate the amplitude dataset (AY).
Figure 7: Iterative error evolution of the ANN training process (k = 1).
Figure 8: Results of the tests of the first fold (k = 1) carried out with the machine learning techniques.
Figure 9: Generalization test (a).
Figure 10: Generalization test (b).
20 pages, 2795 KiB  
Article
Structural Damage Identification Based on AR Model with Additive Noises Using an Improved TLS Solution
by Cai Wu, Shujin Li and Yuanjin Zhang
Sensors 2019, 19(19), 4341; https://doi.org/10.3390/s19194341 - 8 Oct 2019
Cited by 7 | Viewed by 2945
Abstract
Structural damage is inevitable due to the structural aging and disastrous external excitation. The auto-regressive (AR) based method is one of the most widely used methods for structural damage identification. In this regard, the classical least-squares algorithm is often utilized to solve the AR model. However, this algorithm generally could not take all the observed noises into account. In this study, a partial errors-in-variables (EIV) model is used so that both the current and prior observation errors are considered. Accordingly, a total least-squares (TLSE) solution is introduced to solve the partial EIV model. The solution estimates and accounts for the correlations between the current observed data and the design matrix. An effective damage indicator is chosen to count for damage levels of the structures. Both mathematical and finite element simulation results show that the proposed TLSE method yields better accuracy than the classical LS method and the AR model. Finally, the response data of a high-rise building shaking table test is used for demonstrating the effectiveness of the proposed method in identifying the location and damage degree of a model structure. Full article
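For reference, the classical total least-squares idea applied to an AR model (treating both the current samples and the regressor matrix of past samples as noisy) looks like the sketch below. This is the textbook TLS via SVD, shown only to illustrate the errors-in-variables viewpoint; it is not the partial-EIV TLSE estimator developed in the paper, and the AR(2) example data are synthetic.

    import numpy as np

    def ar_tls(x, p):
        # Build the AR(p) regression x[t] ~ a1*x[t-1] + ... + ap*x[t-p] and solve it
        # by total least squares: the solution is read off the right singular vector
        # of the augmented matrix [A | y] associated with the smallest singular value.
        N = len(x)
        A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
        y = x[p:]
        _, _, Vt = np.linalg.svd(np.column_stack([A, y]), full_matrices=False)
        v = Vt[-1]
        return -v[:p] / v[p]

    # usage: recover the coefficients of a known AR(2) process observed in noise
    rng = np.random.default_rng(0)
    x = np.zeros(5000)
    for t in range(2, len(x)):
        x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + rng.normal()
    print(ar_tls(x + 0.1 * rng.normal(size=x.size), 2))   # close to [1.5, -0.7]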
Figures:
Figure 1: Size of the beam: (a) cross-sectional view of the beam; (b) distribution of the sensors.
Figure 2: Acceleration power spectral density (PSD) of testing point 5: (a) before damage; (b) 50% damage.
Figure 3: Damage identification results: (a) Point 4; (b) Condition 4.
Figure 4: Observed signal with 30 dB noise: (a) before damage; (b) Condition 3.
Figure 5: Damage identification results: (a) Point 4; (b) Point 6.
Figure 6: Identification results along the beam in Condition 4.
Figure 7: Picture of the model.
Figure 8: Damage after the test (52nd floor).
Figure 9: PSD figures for acceleration outputs of the top floor: (a) before the earthquake excitations; (b) after the earthquake excitations.
Figure 10: IFs of some floors after different earthquake intensities: (a) 8th floor; (b) 14th floor; (c) 41st floor; (d) top floor.
Figure 11: Identification factors (IFs) along the stories: (a) Frequent 6; (b) Rare 7.
Figure 12: Comparison between the least squares (LS) solution and the total least-squares (TLS_E) solution: (a) 8th story; (b) 50th story.
Figure 13: Comparison along the stories after Moderate 6.
22 pages, 5916 KiB  
Article
Design and Fabrication of CMOS Microstructures to Locally Synthesize Carbon Nanotubes for Gas Sensing
by Avisek Roy, Mehdi Azadmehr, Bao Q. Ta, Philipp Häfliger and Knut E. Aasmundtveit
Sensors 2019, 19(19), 4340; https://doi.org/10.3390/s19194340 - 8 Oct 2019
Cited by 7 | Viewed by 3762
Abstract
Carbon nanotubes (CNTs) can be grown locally on custom-designed CMOS microstructures to use them as a sensing material for manufacturing low-cost gas sensors, where CMOS readout circuits are directly integrated. Such a local CNT synthesis process using thermal chemical vapor deposition (CVD) requires temperatures near 900 °C, which is destructive for CMOS circuits. Therefore, it is necessary to ensure a high thermal gradient around the CNT growth structures to maintain CMOS-compatible temperature (below 300 °C) on the bulk part of the chip, where readout circuits are placed. This paper presents several promising designs of CNT growth microstructures and their thermomechanical analyses (by ANSYS Multiphysics software) to check the feasibility of local CNT synthesis in CMOS. Standard CMOS processes have several conductive interconnecting metal and polysilicon layers, both being suitable to serve as microheaters for local resistive heating to achieve the CNT growth temperature. Most of these microheaters need to be partially or fully suspended to produce the required thermal isolation for CMOS compatibility. Necessary CMOS post-processing steps to realize CNT growth structures are discussed. Layout designs of the microstructures, along with some of the microstructures fabricated in a standard AMS 350 nm CMOS process, are also presented in this paper. Full article
(This article belongs to the Special Issue Advanced Nanomaterials based Gas Sensors)
Figures:
Figure 1: Concept illustration of local carbon nanotube (CNT) synthesis on microstructures.
Figure 2: Influential ratio (ρ/t²k) for maximum microheater temperature plotted against total microheater thickness associated with the material.
Figure 3: Partially suspended polysilicon microheater design illustrations: (a) microheater surface area; (b) cross-sectional view of the non-suspended region; and (c) cross-sectional view of the suspended region.
Figure 4: Thermal-electric simulation results of poly-1 partially suspended microheater (PSM) designs: (a) design-1 and (b) design-2.
Figure 5: Thermal analysis of partially suspended poly-2 designs: (a) microheater temperature along its length; (b) temperature distribution from the center of the heater to the surrounding surface.
Figure 6: Thermal analysis of non-suspended and fully suspended poly-2 designs: (a) microheater temperature along its length; (b) temperature distribution from the center of the heater to the surrounding surface.
Figure 7: Thermal-electric simulation results of Ni3Al designs: (a) non-suspended and (b) partially suspended.
Figure 8: Temperature distribution from the center of different cupronickel microheaters to the surrounding surface.
Figure 9: Aluminum CNT growth structures in the AMS 350 nm process: (a) layout design and (b) optical micrograph.
Figure 10: Thermal micrograph of an aluminum microheater on a CMOS chip.
Figure 11: Poly-1 CNT growth structures in the AMS 350 nm process: (a) layout design and (b) optical micrograph.
Figure 12: Poly-2 CNT growth structures in the AMS 350 nm process: (a) layout design and (b) optical micrograph.
Figure 13: Surface and cross-sectional illustration of a selectively under-etched PSM design.
23 pages, 3432 KiB  
Article
Distributed Hybrid Two-Stage Multi-Sensor Fusion for Cooperative Modulation Classification in Large-Scale Wireless Sensor Networks
by Goran B. Markovic, Vlada S. Sokolovic and Miroslav L. Dukic
Sensors 2019, 19(19), 4339; https://doi.org/10.3390/s19194339 - 8 Oct 2019
Cited by 6 | Viewed by 2934
Abstract
Recent studies showed that the performance of the modulation classification (MC) is considerably improved by using multiple sensors deployed in a cooperative manner. Such cooperative MC solutions are based on the centralized fusion of independent features or decisions made at sensors. Essentially, the [...] Read more.
Recent studies showed that the performance of the modulation classification (MC) is considerably improved by using multiple sensors deployed in a cooperative manner. Such cooperative MC solutions are based on the centralized fusion of independent features or decisions made at sensors. Essentially, the cooperative MC employs multiple uncorrelated observations of the unknown signal to gather more complete information, compared to single-sensor reception, which is used in the fusion process to refine the MC decision. However, the non-cooperative nature of MC inherently induces a large loss in cooperative MC performance due to the unreliable measure of quality for the MC results obtained at individual sensors (which causes partial information loss during centralized fusion). In this paper, the distributed two-stage fusion concept for the cooperative MC using multiple sensors is proposed. It is shown that the proposed distributed fusion, which combines feature (cumulant) fusion and decision fusion, facilitates the preservation of information during the fusion process and thus considerably improves the MC performance. The clustered architecture is employed, with the influence of mismatched references restricted to the intra-cluster data fusion in the first stage. The adopted distributed concept represents a flexible and scalable solution that is suitable for implementation in large-scale networks. Full article
(This article belongs to the Section Sensor Networks)
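As a minimal illustration of the two-stage fusion idea summarized in this abstract, the sketch below estimates the fourth-order cumulant C42 at each sensor, averages it within a cluster, maps the fused value to the nearest reference, and takes a majority vote over clusters. The reference values, cluster sizes, and decision rules are simplified assumptions and do not reproduce the paper's actual classifier.

```python
import numpy as np
from collections import Counter

def c42(y):
    """Sample estimate of the fourth-order cumulant C42 of a zero-mean
    complex baseband signal y (a classical modulation-classification feature)."""
    m21 = np.mean(np.abs(y) ** 2)
    m20 = np.mean(y ** 2)
    m42 = np.mean(np.abs(y) ** 4)
    return m42 - np.abs(m20) ** 2 - 2.0 * m21 ** 2

def cluster_decision(features, references):
    """First stage: fuse the cumulant features inside one cluster (plain average)
    and pick the modulation whose reference value is closest."""
    fused = float(np.mean(features))
    return min(references, key=lambda mod: abs(fused - references[mod]))

def global_decision(cluster_labels):
    """Second stage: majority vote over the per-cluster decisions."""
    return Counter(cluster_labels).most_common(1)[0][0]

# Assumed |C42| reference values for unit-power constellations.
refs = {"QPSK": 1.0, "16QAM": 0.68, "64QAM": 0.62}
rng = np.random.default_rng(0)
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

cluster_labels = []
for _ in range(3):                       # three clusters of three sensors each
    features = []
    for _ in range(3):
        symbols = constellation[rng.integers(0, 4, size=2000)]
        noise = 0.1 * (rng.standard_normal(2000) + 1j * rng.standard_normal(2000))
        features.append(abs(c42(symbols + noise)))
    cluster_labels.append(cluster_decision(features, refs))
print("per-cluster decisions:", cluster_labels, "-> final:", global_decision(cluster_labels))
```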
Show Figures

Figure 1
<p>The general scheme of cooperative modulation classification (MC) with centralized fusion [<a href="#B25-sensors-19-04339" class="html-bibr">25</a>].</p>
Full article ">Figure 2
<p>The actual cumulant (estimate) means for the 16QAM signal averaged over <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>∈</mo> <mrow> <mo>{</mo> <mrow> <mn>2</mn> <mo>,</mo> <mo>⋯</mo> <mo>,</mo> <mtext> </mtext> <mn>5</mn> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>∈</mo> <mrow> <mo>{</mo> <mrow> <mn>2</mn> <mo>,</mo> <mo>⋯</mo> <mo>,</mo> <mtext> </mtext> <mn>10</mn> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> as a function of signal-to-noise ratio (SNR) for the MC with the blind channel estimation method (BCEM).</p>
Full article ">Figure 3
<p>The actual cumulant (estimate) means for the 16QAM signal, with and without the joint cumulant estimate correction (JCEC) used for the MC with the blind channel estimation method (BCEM), averaged over different number of sensors with <math display="inline"><semantics> <mrow> <mi>S</mi> <mi>N</mi> <mi>R</mi> <mo>=</mo> <mrow> <mo>(</mo> <mrow> <mn>10</mn> <mo>±</mo> <mn>2</mn> </mrow> <mo>)</mo> </mrow> <mtext> </mtext> <mi>dB</mi> </mrow> </semantics></math> and random <math display="inline"><semantics> <mrow> <mi>L</mi> <mo>∈</mo> <mrow> <mo>{</mo> <mrow> <mn>2</mn> <mo>,</mo> <mo>⋯</mo> <mo>,</mo> <mtext> </mtext> <mn>5</mn> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>The general scheme of the cooperative MC with distributed two-stage fusion.</p>
Full article ">Figure 5
<p>Different sensor network scenarios depending on the cluster spacing around a transmitter, (<b>a</b>) SNS1—Clusters are spaced in different directions and on different distances from the transmitter, (<b>b</b>) SNS2—Clusters are spaced in different directions but have similar distance from the transmitter, (<b>c</b>) SNS3—Clusters have different decreasing distances from the transmitter.</p>
Full article ">Figure 6
<p>The estimated <math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>C</mi> <mi>C</mi> <mo>,</mo> <mi>a</mi> <mi>v</mi> <mi>g</mi> </mrow> </msub> </mrow> </semantics></math> value for the centralized and distributed cooperative MC in SNS1, for <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>S</mi> </msub> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> and different dispersive environments (<math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math> ).</p>
Full article ">Figure 7
<p>The estimated <math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>C</mi> <mi>C</mi> <mo>,</mo> <mi>a</mi> <mi>v</mi> <mi>g</mi> </mrow> </msub> </mrow> </semantics></math> value for the centralized and distributed cooperative MC in SNS1, for <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>S</mi> </msub> <mo>=</mo> <mn>2000</mn> </mrow> </semantics></math> and different dispersive environments (<math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math> ).</p>
Full article ">Figure 8
<p>The estimated <math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>C</mi> <mi>C</mi> <mo>,</mo> <mi>a</mi> <mi>v</mi> <mi>g</mi> </mrow> </msub> </mrow> </semantics></math> value for the centralized and distributed cooperative MC in SNS1, for different number of cluster <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mrow> <mi>C</mi> <mi>L</mi> </mrow> </msub> <mo>∈</mo> <mrow> <mo>{</mo> <mrow> <mn>3</mn> <mo>,</mo> <mn>5</mn> <mo>,</mo> <mn>7</mn> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math>, when <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>S</mi> </msub> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 9
<p>The estimated <math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>C</mi> <mi>C</mi> <mo>,</mo> <mi>a</mi> <mi>v</mi> <mi>g</mi> </mrow> </msub> </mrow> </semantics></math> value for the centralized and distributed cooperative MC in SNS2, for <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>S</mi> </msub> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 10
<p>The estimated <math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>C</mi> <mi>C</mi> <mo>,</mo> <mi>a</mi> <mi>v</mi> <mi>g</mi> </mrow> </msub> </mrow> </semantics></math> value for the centralized and distributed cooperative MC in SNS2, for <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>S</mi> </msub> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math>.</p>
Full article ">Figure 11
<p>The estimated <math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>C</mi> <mi>C</mi> <mo>,</mo> <mi>a</mi> <mi>v</mi> <mi>g</mi> </mrow> </msub> </mrow> </semantics></math> value for the centralized and distributed cooperative MC in SNS3, for <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>S</mi> </msub> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>S</mi> </msub> <mo>=</mo> <mn>2000</mn> </mrow> </semantics></math> and less dispersive environment (<math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mn>5</mn> </mrow> </semantics></math> ).</p>
Full article ">Figure 12
<p>The estimated <math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>C</mi> <mi>C</mi> <mo>,</mo> <mi>a</mi> <mi>v</mi> <mi>g</mi> </mrow> </msub> </mrow> </semantics></math> value for the centralized and distributed cooperative MC in SNS3, for <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>S</mi> </msub> <mo>=</mo> <mn>500</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>N</mi> <mi>S</mi> </msub> <mo>=</mo> <mn>2000</mn> </mrow> </semantics></math> and highly dispersive environment (<math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mrow> <mi>m</mi> <mi>a</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mn>10</mn> </mrow> </semantics></math> ).</p>
Full article ">
19 pages, 753 KiB  
Article
Long-Term Glucose Forecasting Using a Physiological Model and Deconvolution of the Continuous Glucose Monitoring Signal
by Chengyuan Liu, Josep Vehí, Parizad Avari, Monika Reddy, Nick Oliver, Pantelis Georgiou and Pau Herrero
Sensors 2019, 19(19), 4338; https://doi.org/10.3390/s19194338 - 8 Oct 2019
Cited by 28 | Viewed by 5990
Abstract
(1) Objective: Blood glucose forecasting in type 1 diabetes (T1D) management is a maturing field with numerous algorithms being published and a few of them having reached the commercialisation stage. However, accurate long-term glucose predictions (e.g., >60 min), which are usually needed in [...] Read more.
(1) Objective: Blood glucose forecasting in type 1 diabetes (T1D) management is a maturing field with numerous algorithms being published and a few of them having reached the commercialisation stage. However, accurate long-term glucose predictions (e.g., >60 min), which are usually needed in applications such as precision insulin dosing (e.g., an artificial pancreas), still remain a challenge. In this paper, we present a novel glucose forecasting algorithm that is well-suited for long-term prediction horizons. The proposed algorithm is currently being used as the core component of a modular safety system for an insulin dose recommender developed within the EU-funded PEPPER (Patient Empowerment through Predictive PERsonalised decision support) project. (2) Methods: The proposed blood glucose forecasting algorithm is based on a compartmental composite model of glucose–insulin dynamics, which uses a deconvolution technique applied to the continuous glucose monitoring (CGM) signal for state estimation. In addition to the inputs commonly employed by glucose forecasting methods (i.e., CGM data, insulin, carbohydrates), the proposed algorithm allows the optional input of meal absorption information to enhance prediction accuracy. Clinical data corresponding to 10 adult subjects with T1D were used for evaluation purposes. In addition, in silico data obtained with a modified version of the UVa-Padova simulator were used to further evaluate the impact of accounting for meal absorption information on prediction accuracy. Finally, a comparison with two well-established glucose forecasting algorithms, the autoregressive exogenous (ARX) model and the latent variable-based statistical (LVX) model, was carried out. (3) Results: For prediction horizons beyond 60 min, the performance of the proposed physiological model-based (PM) algorithm is superior to that of the LVX and ARX algorithms. When comparing the performance of PM against the second-ranked method (ARX) on a 120 min prediction horizon, the percentage improvement in prediction accuracy measured with the root mean square error, the A-region of error grid analysis (EGA), and hypoglycaemia prediction calculated by the Matthews correlation coefficient was 18.8%, 17.9%, and 80.9%, respectively. Although showing a trend towards improvement, the addition of meal absorption information did not provide clinically significant improvements. (4) Conclusion: The proposed glucose forecasting algorithm is potentially well-suited for T1D management applications which require long-term glucose predictions. Full article
(This article belongs to the Section Biomedical Sensors)
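The abstract reports accuracy with the root mean square error (RMSE) and hypoglycaemia prediction with the Matthews correlation coefficient (MCC). The snippet below shows how these two figures of merit can be computed from paired measured and forecast glucose values; the 70 mg/dL hypoglycaemia threshold and the toy glucose traces are assumptions for illustration, not data from the study.

```python
import numpy as np

def hypo_mcc(g_true, g_pred, threshold=70.0):
    """Matthews correlation coefficient for hypoglycaemia prediction.

    A sample counts as hypoglycaemic when glucose < threshold (mg/dL);
    the 70 mg/dL cut-off is an assumption, not taken from the paper.
    """
    true_pos = np.sum((g_true < threshold) & (g_pred < threshold))
    true_neg = np.sum((g_true >= threshold) & (g_pred >= threshold))
    false_pos = np.sum((g_true >= threshold) & (g_pred < threshold))
    false_neg = np.sum((g_true < threshold) & (g_pred >= threshold))
    num = true_pos * true_neg - false_pos * false_neg
    den = np.sqrt(float((true_pos + false_pos) * (true_pos + false_neg) *
                        (true_neg + false_pos) * (true_neg + false_neg)))
    return num / den if den > 0 else 0.0

def rmse(g_true, g_pred):
    """Root mean square error between measured and forecast glucose."""
    return np.sqrt(np.mean((np.asarray(g_true) - np.asarray(g_pred)) ** 2))

# Toy 120-min-ahead forecast evaluation on made-up CGM values [mg/dL].
g_true = np.array([180.0, 150.0, 110.0, 85.0, 65.0, 60.0, 72.0, 90.0])
g_pred = np.array([175.0, 145.0, 118.0, 90.0, 68.0, 66.0, 69.0, 95.0])
print(f"RMSE = {rmse(g_true, g_pred):.1f} mg/dL, MCC = {hypo_mcc(g_true, g_pred):.2f}")
```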
Show Figures

Figure 1
<p>Average <math display="inline"><semantics> <msub> <mi>R</mi> <mi>a</mi> </msub> </semantics></math> profiles corresponding to the fast, slow and medium meals from the UVa-Padova simulator for a 60 g intake of carbohydrates.</p>
Full article ">Figure 2
<p>Block diagram corresponding to the proposed glucose forecasting algorithm. The whole diagram is executed every time a glucose value (<math display="inline"><semantics> <msub> <mi>G</mi> <mrow> <mi>C</mi> <mi>G</mi> <mi>M</mi> </mrow> </msub> </semantics></math>) (continuous glucose monitoring (CGM)) is received. Then, the physiological model represented by the green blocks is evaluated over the prediction horizon (PH) to obtain the forecasted glucose (<math display="inline"><semantics> <mrow> <mi>G</mi> <mo>(</mo> <mi>k</mi> <mo>+</mo> <mi>P</mi> <mi>H</mi> <mo>)</mo> </mrow> </semantics></math>).</p>
Full article ">Figure 3
<p>Average percentage of improvement against prediction horizon for three of the evaluated metrics (RMSE, A-region of EGA, and MCC) when comparing the proposed method <math display="inline"><semantics> <mrow> <mi>P</mi> <msub> <mi>M</mi> <mrow> <mi>M</mi> <mi>A</mi> </mrow> </msub> </mrow> </semantics></math> versus the <math display="inline"><semantics> <mrow> <mi>A</mi> <mi>R</mi> <mi>X</mi> </mrow> </semantics></math> model on the 10-adult real cohort.</p>
Full article ">Figure 4
<p>Example of a 24 h close-up for a representative real individual showing the prediction results for the three evaluated forecasting methods with a prediction horizon of 120 min. The continuous glucose measurements are represented by the dashed black line, the prediction by the proposed PM method is displayed as a solid red line, and the results for the LVX and ARX methods are shown as a dotted green line and a dash-dotted blue line, respectively. Vertical pink bars indicate carbohydrate intakes (grams) and vertical light blue bars indicate insulin boluses (units).</p>
Full article ">Figure A1
<p>Gastrointestinal model fitting of the 16 estimated <math display="inline"><semantics> <msub> <mi>R</mi> <mi>a</mi> </msub> </semantics></math> profiles. Green solid line represents the reference <math display="inline"><semantics> <msub> <mi>R</mi> <mi>a</mi> </msub> </semantics></math> and the red dashed line the model fitting.</p>
Full article ">
14 pages, 5409 KiB  
Article
A Drag Model-LIDAR-IMU Fault-Tolerance Fusion Method for Quadrotors
by Pin Lyu, Bingqing Wang, Jizhou Lai, Shichao Liu and Zhimin Li
Sensors 2019, 19(19), 4337; https://doi.org/10.3390/s19194337 - 8 Oct 2019
Cited by 2 | Viewed by 3622
Abstract
In this paper, a drag model-aided fault-tolerant state estimation method is presented for quadrotors. Firstly, the drag model accuracy was improved by modeling an angular rate-related term and an angular acceleration-related term, which are associated with flight maneuvers. Then the drag [...] Read more.
In this paper, a drag model-aided fault-tolerant state estimation method is presented for quadrotors. Firstly, the drag model accuracy was improved by modeling an angular rate-related term and an angular acceleration-related term, which are associated with flight maneuvers. Then the drag model, light detection and ranging (LIDAR), and inertial measurement unit (IMU) were fused based on the federated Kalman filter framework. In the filter, the LIDAR estimation fault was detected and isolated, and the disturbance to the drag model was estimated and compensated for. Some experiments were carried out, showing that the velocity and position estimation were improved compared with the traditional LIDAR/IMU fusion scheme. Full article
(This article belongs to the Section Intelligent Sensors)
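One common way to detect and isolate a faulty LIDAR measurement inside a Kalman filter, in the spirit of the fault detection mentioned in this abstract, is a chi-square test on the innovation. The sketch below illustrates that test on a scalar velocity channel; the threshold and numerical values are assumptions, not the paper's settings.

```python
def innovation_fault_check(z, z_pred, innov_var, threshold=6.63):
    """Chi-square test on a scalar Kalman-filter innovation.

    Returns True when the measurement should be isolated as faulty.
    threshold = 6.63 is the 1% false-alarm point for one degree of freedom
    (an assumed setting, not a value taken from the paper).
    """
    innovation = z - z_pred
    test_statistic = innovation ** 2 / innov_var
    return test_statistic > threshold

# Toy example: LIDAR-SLAM velocity measurements against the filter prediction.
z_pred, innov_var = 1.20, 0.04        # predicted velocity [m/s], innovation variance
for z in (1.28, 1.35, 2.80):          # the last value mimics a SLAM failure
    print(f"z = {z:4.2f} m/s -> isolate: {innovation_fault_check(z, z_pred, innov_var)}")
```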
Show Figures

Figure 1
<p>The architecture of the proposed fault-tolerant filter.</p>
Full article ">Figure 2
<p>Quadrotor structure diagram.</p>
Full article ">Figure 3
<p>The comparison between the velocities estimated by different drag models.</p>
Full article ">Figure 4
<p>The test scheme.</p>
Full article ">Figure 5
<p>The velocity estimation result in the light detection and ranging (LIDAR) simultaneous localization and mapping (SLAM) failure case.</p>
Full article ">Figure 6
<p>The position estimation result in the LIDAR SLAM failure case.</p>
Full article ">Figure 7
<p>The fault detection results in the LIDAR SLAM failure case.</p>
Full article ">Figure 8
<p>The velocity estimation result in the quadrotor attitude maneuver case.</p>
Full article ">Figure 9
<p>The position estimation result in the quadrotor attitude maneuver case.</p>
Full article ">Figure 10
<p>The fault detection results in the quadrotor attitude maneuver case.</p>
Full article ">Figure 11
<p>The velocity estimation result in the wind interference case.</p>
Full article ">Figure 12
<p>The wind estimation results.</p>
Full article ">
15 pages, 5298 KiB  
Article
Design and Fabrication of a High-Frequency Single-Directional Planar Underwater Ultrasound Transducer
by Qiguo Huang, Hongwei Wang, Shaohua Hao, Chao Zhong and Likun Wang
Sensors 2019, 19(19), 4336; https://doi.org/10.3390/s19194336 - 8 Oct 2019
Cited by 18 | Viewed by 5615
Abstract
This paper describes the fabrication of 1-3 piezoelectric composites by using PZT5-A pure piezoelectric ceramics and the preparation of a high-frequency single-directional planar underwater ultrasound transducer by using the developed composites. First, three material models of the same size were designed and simulated [...] Read more.
This paper describes the fabrication of 1-3 piezoelectric composites by using PZT5-A pure piezoelectric ceramics and the preparation of a high-frequency single-directional planar underwater ultrasound transducer by using the developed composites. First, three material models of the same size were designed and simulated by ANSYS finite element simulation software. Next, based on the simulation results, the 1-3 piezoelectric composites were developed. Finally, a high-frequency single-directional planar underwater ultrasound transducer was fabricated by encapsulating and gluing the 1-3 piezoelectric composites. The performance of the transducer was tested, and results showed that the device was characterized by single-mode operation in the working frequency band, a high transmitting voltage response, and single directivity. Full article
(This article belongs to the Section Physical Sensors)
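The relationship between ceramic thickness and resonant frequency that the paper sweeps in simulation (Figure 3) follows, to first order, the half-wavelength thickness-mode rule f = N_t / t. The snippet below evaluates this rule with an assumed frequency constant of about 2 MHz·mm for a PZT-5A-type ceramic; the exact constant of the fabricated composites will differ.

```python
def thickness_resonance_mhz(thickness_mm, freq_constant_mhz_mm=2.0):
    """Half-wavelength thickness-mode resonance estimate f = N_t / t.

    freq_constant_mhz_mm is a nominal, assumed value for a PZT-5A-type ceramic
    (roughly half the longitudinal sound speed expressed in MHz*mm).
    """
    return freq_constant_mhz_mm / thickness_mm

# Sweep the ceramic thickness, as done in the paper's simulation study.
for t_mm in (0.5, 1.0, 2.0, 4.0):
    print(f"t = {t_mm:3.1f} mm -> f_r ~ {thickness_resonance_mhz(t_mm):.2f} MHz")
```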
Show Figures

Figure 1
<p>Two connection modes of 1-3 piezoelectric composites.</p>
Full article ">Figure 2
<p>Simulation flow chart of ANSYS finite element software.</p>
Full article ">Figure 3
<p>Curve of resonant frequency as a function of ceramic thickness.</p>
Full article ">Figure 4
<p>Model structure of a 1-3 piezoelectric composite and sizes of the PZT and polymer phases.</p>
Full article ">Figure 5
<p>Simulated structural diagram of a 1-3 piezoelectric composite.</p>
Full article ">Figure 6
<p>Simulation results of pure piezoelectric ceramics and two polymer-added 1-3 piezoelectric composites.</p>
Full article ">Figure 7
<p>Fabrication process of the piezoelectric composites.</p>
Full article ">Figure 8
<p>Photograph of the 1-3 piezoelectric composites.</p>
Full article ">Figure 9
<p>Conductivity G curve of the piezoelectric composite sensor.</p>
Full article ">Figure 10
<p>Structural diagram of the planar transducer.</p>
Full article ">Figure 11
<p>Exploded schematic of the planar transducer.</p>
Full article ">Figure 12
<p>Physical diagram of the underwater acoustic transducer.</p>
Full article ">Figure 13
<p>Conductivity G curves of the piezoelectric composite and transducer.</p>
Full article ">Figure 14
<p>Test system for the transducer.</p>
Full article ">Figure 15
<p>Performance parameters of the transducer.</p>
Full article ">
24 pages, 16646 KiB  
Article
Looking Through Paintings by Combining Hyper-Spectral Imaging and Pulse-Compression Thermography
by Stefano Laureti, Hamed Malekmohammadi, Muhammad Khalid Rizwan, Pietro Burrascano, Stefano Sfarra, Miranda Mostacci and Marco Ricci
Sensors 2019, 19(19), 4335; https://doi.org/10.3390/s19194335 - 8 Oct 2019
Cited by 19 | Viewed by 4336
Abstract
The use of different spectral bands in the inspection of artworks is highly recommended to identify the maximum number of defects/anomalies (i.e., the targets), whose presence ought to be known before any possible restoration action. Although an artwork cannot be considered as a [...] Read more.
The use of different spectral bands in the inspection of artworks is highly recommended to identify the maximum number of defects/anomalies (i.e., the targets), whose presence ought to be known before any possible restoration action. Although an artwork cannot be considered a composite material in which the zero-defect theory is usually followed by scientists, it is possible to state that the preservation of a multi-layered structure fabricated by the artist’s hands is based on a methodological analysis, where the use of non-destructive testing methods is highly desirable. In this paper, the infrared thermography and hyperspectral imaging methods were applied to identify both fabricated and non-fabricated targets in a canvas painting mocking up the famous character “Venus” by Botticelli. The pulse-compression thermography technique was used to retrieve information about the inner structure of the sample, and low-power light-emitting diode (LED) chips, whose emission was modulated via a pseudo-noise sequence, were exploited as the heat source for minimizing the heat radiated on the sample surface. Hyper-spectral imaging was employed to detect surface and subsurface features such as pentimenti and facial contours. The results demonstrate how the application of statistical algorithms (i.e., principal component and independent component analyses) maximized the number of targets retrieved during the post-acquisition steps for both the employed techniques. Finally, the best results obtained by both techniques and post-processing methods were fused together, resulting in a clear target map, in which surface, subsurface, and deeper information are all shown at a glance. Full article
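The pulse-compression step described in this abstract amounts to cross-correlating each (de-trended) pixel thermogram with the known pseudo-noise excitation to estimate the pixel's thermal impulse response h(t). The sketch below reproduces that core operation on synthetic data; the sequence length, noise level, and exponential response model are illustrative assumptions only.

```python
import numpy as np

def pulse_compress(thermogram, excitation):
    """Cross-correlate a (de-trended) pixel thermogram with the pseudo-noise
    excitation to estimate the pixel's thermal impulse response h(t)."""
    corr = np.correlate(thermogram, excitation, mode="full")
    # keep the causal part, starting at zero lag, and normalise by the sequence energy
    return corr[len(excitation) - 1:] / np.sum(excitation ** 2)

rng = np.random.default_rng(1)
# Assumed +/-1 pseudo-noise excitation and a toy exponential thermal response.
pn = rng.choice([-1.0, 1.0], size=512)
t = np.arange(128)
h_true = np.exp(-t / 20.0)
raw = np.convolve(pn, h_true)[: len(pn)] + 0.5 * rng.standard_normal(len(pn))
h_est = pulse_compress(raw, pn)[: len(t)]
print("correlation(h_true, h_est) =",
      round(float(np.corrcoef(h_true, h_est)[0, 1]), 3))
```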
Show Figures

Figure 1
<p>Birth of Venus by Sandro Botticelli. The face of Venus—the red-colored rectangle—is the detail of the painting chosen for the representation in the sample.</p>
Full article ">Figure 2
<p>A picture of the frame used for tensioning the canvas.</p>
Full article ">Figure 3
<p>(<b>a</b>) Defect A, and (<b>b</b>) canvas stretched on the wooden frame.</p>
Full article ">Figure 4
<p>Dressing of the canvas.</p>
Full article ">Figure 5
<p>Defect B: (<b>a</b>) The defect was obtained by inserting a Teflon insert to simulate a splitting between the preparatory layers; (<b>b</b>) a zoom of Defect B.</p>
Full article ">Figure 6
<p>(<b>a</b>) Initial step of the application of the first preparatory layer, and (<b>b</b>) final step of the application of the first preparatory layer.</p>
Full article ">Figure 7
<p>Defects C and D: two dry-crackings were simulated on the still fresh surface.</p>
Full article ">Figure 8
<p>Injection of rabbit glue in the craquelure (Defects C and D).</p>
Full article ">Figure 9
<p>Defect E: (<b>a</b>) The defect was obtained by inserting a Teflon sheet to simulate a splitting; (<b>b</b>) a zoom of Defect E.</p>
Full article ">Figure 10
<p>(<b>a</b>) Initial step of the application of the second preparatory layer, and (<b>b</b>) final step of the application of the second preparatory layer.</p>
Full article ">Figure 11
<p>Sanding the surface by means of a fine-grained abrasive paper.</p>
Full article ">Figure 12
<p>Defect F: crack formed near to Defect E.</p>
Full article ">Figure 13
<p>(<b>a</b>) Underdrawings and pentimenti, and (<b>b</b>) magnification of the covered signature.</p>
Full article ">Figure 14
<p>Coating of the primer.</p>
Full article ">Figure 15
<p>First stage of the application of the painting layer.</p>
Full article ">Figure 16
<p>Final sample including the finishing layer.</p>
Full article ">Figure 17
<p>Sample: (<b>a</b>) map of the defects/covered targets projected on the horizontal plane, and (<b>b</b>) cross-section of the sample describing: (1) the thin layers, and (2) the thick layer.</p>
Full article ">Figure 18
<p>Hyper-spectral imaging experimental setup. SUT = sample under test.</p>
Full article ">Figure 19
<p>Comparison between (<b>a</b>) pulsed thermography (PT), and (<b>b</b>) pulse-compression thermography (PuCT).</p>
Full article ">Figure 20
<p>Pseudo-noise pulse-compression thermography (PuCT). Top: hyper-raw thermograms for both a defected and sound pixel. Middle: the same signal as for Top, but after the de-trend procedure, thus ready for the pulse-compression step. Bottom: signals obtained after PuCT. From the series of thermograms showed as time lapses, it is possible to note how the signal-to-noise ratio (SNR) is enhanced from the raw acquired signal to the PuC output.</p>
Full article ">Figure 21
<p>Pulse-compression thermography setup. LED = light-emitting diode.</p>
Full article ">Figure 22
<p>Raw hyper-spectral images acquired at: (<b>a</b>) 1100 nm, (<b>b</b>) 1200 nm, (<b>c</b>) 1400 nm, and (<b>d</b>) 1650 nm.</p>
Full article ">Figure 23
<p>First, second and third PCA applied to hyper-spectral images, and the first ICA applied to the same set (between 1400 nm and 1650 nm).</p>
Full article ">Figure 24
<p>Hypercube showing the <span class="html-italic">h</span>(<span class="html-italic">t</span>) and the time-phase results as time elapses.</p>
Full article ">Figure 25
<p>Results extracted from the impulse response <math display="inline"><semantics> <mrow> <mi mathvariant="script">H</mi> <mrow> <mo>{</mo> <mrow> <mi>h</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> </mrow> <mo>}</mo> </mrow> </mrow> </semantics></math> at different times: (<b>a</b>) 2 s, (<b>b</b>) 22 s, (<b>c</b>) 18 s, and (<b>d</b>) 26 s.</p>
Full article ">Figure 26
<p>Integration between PC and IC analyses on PuCT images, plus hyper-spectral imaging (HSI).</p>
Full article ">
13 pages, 3712 KiB  
Article
2D Ultrasonic Antenna System for Imaging in Liquid Sodium
by Léonard Le Jeune, Raphaële Raillon, Gwénaël Toullelan, François Baqué and Laura Taupin
Sensors 2019, 19(19), 4334; https://doi.org/10.3390/s19194334 - 8 Oct 2019
Cited by 4 | Viewed by 3363
Abstract
Ultrasonic techniques are developed at CEA (French Alternative Energies and Nuclear Energy Commission) for in-service inspection of sodium-cooled reactors (SFRs). Among them, an ultrasound imaging system made up of two orthogonal antennas and originally based on an underwater imaging system is studied for [...] Read more.
Ultrasonic techniques are developed at CEA (French Alternative Energies and Nuclear Energy Commission) for in-service inspection of sodium-cooled reactors (SFRs). Among them, an ultrasound imaging system made up of two orthogonal antennas and originally based on an underwater imaging system is studied for long-distance vision in the liquid sodium of the reactor’s primary circuit. After a description of the imaging principle of this system, some results of a simulation study performed with the software CIVA in order to optimize the antenna parameters are presented. Then, experimental measurements carried out in a water tank illustrate the system capabilities. Finally, the limitations of the imaging performance and the ongoing search for solutions to address them are discussed. Full article
(This article belongs to the Special Issue Sensors for Ultrasonic NDT in Harsh Environments)
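The Full Matrix Capture (FMC)/Total Focusing Method (TFM) images discussed in this paper are obtained by delay-and-sum focusing of every transmit-receive pair at each image pixel. A minimal single-pixel TFM sketch on a tiny synthetic FMC data set is given below; the array geometry, sampling rate, and sound speed are placeholder values rather than the configuration used in the article.

```python
import numpy as np

def tfm_pixel(fmc, tx_pos, rx_pos, pixel, c, fs):
    """Delay-and-sum TFM amplitude at one pixel from a Full Matrix Capture.

    fmc[i, j, :] is the A-scan recorded with element i transmitting and
    element j receiving; tx_pos/rx_pos are element coordinates (x, z) in metres.
    """
    value = 0.0
    for i, pi in enumerate(tx_pos):
        d_tx = np.hypot(*(pixel - pi))          # transmit path length
        for j, pj in enumerate(rx_pos):
            d_rx = np.hypot(*(pixel - pj))      # receive path length
            sample = int(round((d_tx + d_rx) / c * fs))
            if sample < fmc.shape[2]:
                value += fmc[i, j, sample]
    return value

# Tiny synthetic example: 4-element array, one point scatterer, water-like sound speed.
c, fs, n_t = 1480.0, 5e6, 4000
elems = np.array([[x, 0.0] for x in (-0.015, -0.005, 0.005, 0.015)])
scatterer = np.array([0.0, 0.05])
fmc = np.zeros((4, 4, n_t))
for i in range(4):
    for j in range(4):
        tof = (np.hypot(*(scatterer - elems[i])) + np.hypot(*(scatterer - elems[j]))) / c
        fmc[i, j, int(round(tof * fs))] = 1.0   # idealised echo
print("amplitude at the scatterer:", tfm_pixel(fmc, elems, elems, scatterer, c, fs))
print("amplitude 10 mm away:     ",
      tfm_pixel(fmc, elems, elems, scatterer + np.array([0.0, 0.01]), c, fs))
```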
Show Figures

Figure 1
<p>Diagram of the antennas’ arrangement and definition of the main terms used in the article: (<b>a</b>) Diagram of the two antennas’ arrangement and of the orthogonal section of an element and (<b>b</b>) definition of the key terms for one antenna. The central axis is perpendicular to the probe surface at its centre. The delay law axis represents the direction of the deflected beam when a delay law is applied. When a delay law is applied, the beam is focused at the focusing point placed at a distance <span class="html-italic">d</span> as defined in the figure and/or deflected by an angle θ in the focusing plane of the antenna. The orthogonal plane is perpendicular to the focusing plane.</p>
Full article ">Figure 2
<p>Simulated beam results: Illustration of the electronic scanning principle by visualization of the antenna beams computed with CIVA in a 3D zone of which the dimensions are given on the figure. (<b>a</b>) Fan-beam radiated by the emission antenna. (<b>b</b>) Fan-beam radiated by the reception antenna. (<b>c</b>) Emission/reception (E/R) cigar-beam. (<b>d</b>) Illustration of the twelve cigar-beams obtained for twelve delay laws (the beams are displayed together in the same image, but in reality, the twelve delay laws are applied successively to scan the space).</p>
Full article ">Figure 3
<p>Simulated Full Matrix Capture (FMC)/Total Focusing Method (TFM) results: (<b>a</b>) Position of the spherical targets in front of the two antennas. The four spheres are in the R focusing plane (left); three of them are at a 1005-mm depth and one is at a 1030-mm depth (right). (<b>b</b>) Three-dimensional and 2D TFM images of the four spherical targets computed in a 3D zone placed around the targets as shown in <a href="#sensors-19-04334-f003" class="html-fig">Figure 3</a>a.</p>
Full article ">Figure 4
<p>Simulated beam results: Effect of the variation of the radius of curvature of the elements in the orthogonal plane on the radiated beam at two depths. (<b>a</b>) Position of the profiles (dash lines) where the beam was computed. (<b>b</b>) Simulated amplitude of the beams along the 2 profiles obtained for different radii of curvature of the elements. The fixed parameters used for the computation were 128 elements for each antenna, with element’s lengths of 1.8 mm in the focusing plane and of 35 mm in the orthogonal plane, a gap of 0.2 mm between adjacent elements, a centre frequency of 1.6 MHz, and a bandwidth of 30%.</p>
Full article ">Figure 5
<p>Simulated beam results: Effect of the variation of the antenna’s aperture in the focusing plane on the cigar-beam spatial resolution at a given depth. (<b>a</b>) Position of the 2D beam computation area. (<b>b</b>) Simulated maximum amplitude of the beams obtained in this area for antennas with 64 elements (i.e., for an aperture of 127.8 mm) and with 128 elements (i.e., for an aperture of 255.8 mm). The fixed parameters used for the computation were element’s lengths of 1.8 mm in the focusing plane and of 35 mm in the orthogonal plane, a gap of 0.2 mm between adjacent elements, a surface radius of curvature of 30 mm, a centre frequency of 1.6 MHz, and a bandwidth of 30%.</p>
Full article ">Figure 6
<p>Diagram of the experimental trial configuration: (<b>a</b>) 3D diagram of the two-antenna system and successive positions of the spherical targets at a 850-mm depth (red points): the spherical target’s FMC were measured one after the other for each of the twelve positions of the sphere. (<b>b</b>) Top view of the same configuration. The sphere located at X = 0° and Y = 0° (surrounded by a green circle) is used as a reference for the amplitudes (see in the text).</p>
Full article ">Figure 7
<p>Measured and simulated FMC/TFM results: (<b>a</b>) The spherical target is at the position surrounded with a blue circle in the 2D diagram of the experimental trial configuration (top of the figure). (<b>b</b>) The spherical target (Ø: 6 mm) is at the position surrounded with a blue circle in the 2D diagram. For each position of the target, the measured and CIVA simulated 3D and 2D TFM images are displayed. The 3D TFM image was computed in a 3D zone centered on the spherical target and represented on the figure. The 2D TFM image is in the XY plane and was extracted from the 3D image at the position of the maximum amplitude.</p>
Full article ">Figure 8
<p>Measured and simulated FMC/TFM results for targets placed far from the antenna: (<b>a</b>) The spherical target is at the position surrounded with a blue circle in the 2D diagram of the experimental trial configuration (top of the figure). (<b>b</b>) The spherical target (Ø: 6 mm) is at the position surrounded with a blue circle in the 2D diagram. For each position of the target, the measured and CIVA simulated 3D and 2D TFM images are displayed. The 3D TFM image was computed in a 3D zone centered on the spherical target and represented on the figure. The 2D TFM image is in the XY plane and was extracted from the 3D image at the position of the maximum amplitude.</p>
Full article ">Figure 9
<p>Comparison between (<b>a</b>) TFM, (<b>b</b>) Hadamard coding, and (<b>c</b>) Plane Wave Imaging (PWI) images obtained in a High-Density Polyethylene (HDPE) component for a 64-element, 5-MHz probe. The case is of a side-drilled hole (diameter of 2 mm) at a 25-mm depth. The amplification gain and voltage excitation are the same for the three acquisition processes. The attenuation of HDPE within the bandwidth is between 0.4 and 1.4 dB/mm. The Signal-to-Noise Ratios (SNR) are 9 dB, 15 dB, and 18 dB, respectively [<a href="#B8-sensors-19-04334" class="html-bibr">8</a>].</p>
Full article ">Figure 10
<p>Adaptive grid: (<b>a</b>) TFM image obtained using (<b>b</b>) a regular and very fine grid and (<b>c</b>) TFM image obtained using (<b>d</b>) an adaptive grid. The number of pixels in <a href="#sensors-19-04334-f010" class="html-fig">Figure 10</a>d represents 16% of the total number of points in <a href="#sensors-19-04334-f010" class="html-fig">Figure 10</a>b.</p>
Full article ">Figure 11
<p>Algorithmic complexities: comparison between frequency domain (f-k) methods (Stolt and Lu) and PWI [<a href="#B11-sensors-19-04334" class="html-bibr">11</a>].</p>
Full article ">
15 pages, 19883 KiB  
Article
Fabrication of a Monolithic Lab-on-a-Chip Platform with Integrated Hydrogel Waveguides for Chemical Sensing
by Maria Leilani Torres-Mapa, Manmeet Singh, Olga Simon, Jose Louise Mapa, Manan Machida, Axel Günther, Bernhard Roth, Dag Heinemann, Mitsuhiro Terakawa and Alexander Heisterkamp
Sensors 2019, 19(19), 4333; https://doi.org/10.3390/s19194333 - 8 Oct 2019
Cited by 35 | Viewed by 6085
Abstract
Hydrogel waveguides have found increased use for a variety of applications where biocompatibility and flexibility are important. In this work, we demonstrate the use of polyethylene glycol diacrylate (PEGDA) waveguides to realize a monolithic lab-on-a-chip device. We performed a comprehensive study on the swelling [...] Read more.
Hydrogel waveguides have found increased use for a variety of applications where biocompatibility and flexibility are important. In this work, we demonstrate the use of polyethylene glycol diacrylate (PEGDA) waveguides to realize a monolithic lab-on-a-chip device. We performed a comprehensive study on the swelling and optical properties for different chain lengths and concentrations in order to realize an integrated biocompatible waveguide in a microfluidic device for chemical sensing. Waveguiding properties of PEGDA hydrogel were used to guide excitation light into a microfluidic channel to measure the fluorescence emission profile of rhodamine 6G as well as to collect the fluorescence signal from the same device. Overall, this work shows the potential of hydrogel waveguides to facilitate delivery and collection of optical signals for potential use in wearable and implantable lab-on-a-chip devices. Full article
(This article belongs to the Collection Photonic Sensors)
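The swelling characterization in this paper is based on the hydrogel mass relative to its dried mass. The snippet below shows one common way to compute a mass swelling ratio and the equilibrium water content from dry and swollen weights; the sample masses are hypothetical and the exact normalization used by the authors may differ.

```python
def swelling_ratio(w_swollen_mg, w_dry_mg):
    """Mass swelling ratio Q = W_swollen / W_dry (one common convention;
    the paper's exact definition may be normalised differently)."""
    return w_swollen_mg / w_dry_mg

def water_content_percent(w_swollen_mg, w_dry_mg):
    """Equilibrium water content as a percentage of the swollen mass."""
    return 100.0 * (w_swollen_mg - w_dry_mg) / w_swollen_mg

# Hypothetical PEGDA samples: (dry mass, equilibrium swollen mass) in mg.
samples = {"PEGDA 700, 90%": (90.0, 135.0), "PEGDA 6000, 20%": (20.0, 160.0)}
for name, (w_dry, w_swollen) in samples.items():
    print(f"{name}: Q = {swelling_ratio(w_swollen, w_dry):.2f}, "
          f"water content = {water_content_percent(w_swollen, w_dry):.0f}%")
```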
Show Figures

Figure 1
<p>Swelling ratio as a function of time for (<b>a</b>) 250 Da; (<b>b</b>) 700 Da; (<b>c</b>) 6000 Da and (<b>d</b>) blend of 700 Da and 6000 Da. At <span class="html-italic">t</span> = 0, swelling ratio is determined by the weight of the hydrogel directly after fabrication with respect to the dried weight of the hydrogel.</p>
Full article ">Figure 2
<p>(<b>a</b>) Equilibrium swelling ratio and water content for different molecular weight and concentration; (<b>b</b>) Calculated water-induced volume swelling of polyethylene glycol diacrylate (PEGDA) for different concentration and molecular weight.</p>
Full article ">Figure 3
<p>(<b>a</b>) Transparency of the hydrogels for different molecular weight and concentration; (<b>b</b>) Optical transmission of PEGDA 700 hydrogels for different concentrations.</p>
Full article ">Figure 4
<p>(<b>a</b>) Refractive indices of PEGDA hydrogels at different molecular weights and concentrations; (<b>b</b>) Refractive index of PEGDA 700, 90% left in air over time. Each data point is an average measurement of n = 3 hydrogels.</p>
Full article ">Figure 5
<p>(<b>a</b>) Photograph of the PEGDA waveguides embedded in polydimethylsiloxane (PDMS). Three PEGDA 700, 90% waveguides with different radii were embedded in a single PDMS block for waveguiding tests and cut-back measurements; (<b>b</b>) Photo of the cross-section of the waveguide. Scale bar is 20 mm; (<b>c</b>) Representative photos of a fabricated 10 cm straight waveguide in a PDMS cladding with the 532 nm laser guided along the PEGDA 700, 90% core. The focused laser is incident to the waveguide marked by the arrow. Scale bar is 25 mm; (<b>d</b>) Microscope images of the fibers fabricated by filling the PDMS channels with PEGDA 700, 90% with different radii. Scale bar is 300 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>m.</p>
Full article ">Figure 6
<p>(<b>a</b>) A photo of a waveguide splitter with PEGDA 700, 90% waveguide and radius of 300 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>m Scale bar is 0.5 cm; (<b>b</b>) Plot is the line profile of the beam output showing the light distribution at the two distal ends of the y-splitter. Top image shows the beam output from the y-splitter. Scale bar is 0.5 cm; (<b>c</b>) (<b>Top</b>) A photo of a fabricated 1 × 4 waveguide splitter with waveguide radius of 300 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>m; (<b>Bottom</b>) A photo of the 1 × 4 waveguide splitter showing four intense light output at the distal ends. Scale bar is 1.0 cm; (<b>d</b>) Top image shows the four points as imaged in the CCD camera. Scale bar is 0.5 cm. Plot shown is the line profile of the 4 spots from the 1 × 4 splitter. For plots in (<b>b</b>,<b>d</b>) blue curve is the normalized gray scale value and red curve is a Savitzky-Golay fit with order 2 and frame length 17.</p>
Full article ">Figure 7
<p>(<b>a</b>) Schematic diagram of the optical setup and a photo of the microfluidic chip with an integrated hydrogel waveguide. Laser is focused at the entrance of a PEGDA waveguide and a second PEGDA waveguide was used to collect the laser-induced fluorescence signal generated at the microfluidic channel. Top image shows a photo of the chip with the 532 nm laser guided to the microfluidic channel. Bottom shows a photo taken with a bandpass filter to allow only the fluorescence emission to be captured by the camera. Dotted lines indicate the location of the PEGDA waveguides. Scale bar is 0.5 cm; (<b>b</b>) Representative spectral profiles of laser induced fluorescence of rhodamine for different concentrations; (<b>c</b>) Normalized intensity of the rhodamine emission signal as a function of concentration for 570, 590 and 610 nm. Inset graph shows the curve between 0 to 0.1 mg/mL. Data shown is an average measurements from three different microfluidic chips.</p>
Full article ">
27 pages, 7294 KiB  
Article
Airborne Visual Detection and Tracking of Cooperative UAVs Exploiting Deep Learning
by Roberto Opromolla, Giuseppe Inchingolo and Giancarmine Fasano
Sensors 2019, 19(19), 4332; https://doi.org/10.3390/s19194332 - 7 Oct 2019
Cited by 46 | Viewed by 7444
Abstract
The performance achievable by using Unmanned Aerial Vehicles (UAVs) for a large variety of civil and military applications, as well as the extent of applicable mission scenarios, can significantly benefit from the exploitation of formations of vehicles able to fly in a coordinated [...] Read more.
The performance achievable by using Unmanned Aerial Vehicles (UAVs) for a large variety of civil and military applications, as well as the extent of applicable mission scenarios, can significantly benefit from the exploitation of formations of vehicles able to fly in a coordinated manner (swarms). In this respect, visual cameras represent a key instrument to enable coordination by giving each UAV the capability to visually monitor the other members of the formation. Hence, a related technological challenge is the development of robust solutions to detect and track cooperative targets through a sequence of frames. In this framework, this paper proposes an innovative approach to carry out this task based on deep learning. Specifically, the You Only Look Once (YOLO) object detection system is integrated within an original processing architecture in which the machine-vision algorithms are aided by navigation hints available thanks to the cooperative nature of the formation. An experimental flight test campaign, involving formations of two multirotor UAVs, is conducted to collect a database of images suitable to assess the performance of the proposed approach. Results demonstrate high-level accuracy, and robustness against challenging conditions in terms of illumination, background and target-range variability. Full article
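Detections in this paper are scored against supervised reference bounding boxes through the Intersection over Union (IoU), e.g., IoU = 0.538 for the example in Figure 5. The helper below computes IoU for axis-aligned boxes; the example box coordinates are made up and only illustrate how a detection would be compared with ground truth.

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x0, y0, x1, y1) in pixels."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical reference vs. detected bounding boxes on a 752x480 frame.
reference = (350.0, 200.0, 410.0, 250.0)
detected = (360.0, 195.0, 425.0, 248.0)
print(f"IoU = {iou(reference, detected):.3f}")   # counted correct if above a chosen threshold
```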
Show Figures

Figure 1
<p>State diagram of the proposed architecture. Two cases are possible for the state of the system, namely target detected (1) or not detected (0). This relatively simple architecture is justified by the cooperative nature of the assumed multi-UAV system.</p>
Full article ">Figure 2
<p>Scheme summarizing the algorithmic strategy characterizing the proposed DL-based detector. The input parameters are listed within a black (dashed) rectangular box. The processing blocks are enclosed within black rectangular boxes. The final output is highlighted in red.</p>
Full article ">Figure 3
<p>DL-based detector: example of search windows definition for a 752 × 480-pixels RGB image. <span class="html-italic">d<sub>u</sub></span> is set to 150 pixels (<span class="html-italic">N<sub>w</sub></span> = 5). The target UAV position is highlighted by a black box.</p>
Full article ">Figure 4
<p>DL-based detector: example of search windows definition for a 752 × 480-pixels RGB image. The target UAV position is highlighted by a black box. (<b>a</b>) <span class="html-italic">d<sub>u</sub></span> is set to 150 pixels (<span class="html-italic">N<sub>w</sub></span> = 5). The target UAV projection on the image plane is cut by the border between the second and third search window. (<b>b</b>) <span class="html-italic">d<sub>u</sub></span> is set to 100 pixels (<span class="html-italic">N<sub>w</sub></span> = 7). Only the third search window, which fully contains the target UAV, is highlighted for the sake of clarity.</p>
Full article ">Figure 5
<p>Example of best YOLO detection. <span class="html-italic">IoU</span> = 0.538. The reference BB, obtained using a supervised approach, is approximately centered at the geometric center of the target. The detected BB is the output of the DL-based detector.</p>
Full article ">Figure 6
<p>Main steps of the image processing approach to refine the detected BB. An image crop is obtained from the detected bounding box. The gradient operator is applied within this image portion and the gradient image is then binarized. Finally, the centroid of the set of pixels highlighted in the binarized image is computed and a refined BB is centered around this point.</p>
Full article ">Figure 7
<p>Example of application of the <span class="html-italic">BB refinement</span> block. In this case, the factor <span class="html-italic">c</span> is 1. (<b>a</b>) Detected BB. (<b>b</b>) Result of gradient estimation. (<b>c</b>) Result of binarization and centroid calculation (highlighted by a red dot).</p>
Full article ">Figure 8
<p>Result of the <span class="html-italic">BB refinement</span> algorithm. The <span class="html-italic">IoU</span> of the refined BB is 0.747.</p>
Full article ">Figure 9
<p>Examples of prediction of the target UAV projection on the image plane (highlighted by a red dot) carried out by the DL-based tracker. The search area is drawn as a red square. The target UAV is enclosed in a black box. (<b>a</b>,<b>b</b>) Far range scenario (target range ≈ 116 m); <span class="html-italic">d<sub>u,tr</sub></span> = <span class="html-italic">d<sub>v,tr</sub></span> = 150 pixels; prediction error ≈ 45 pixels. (<b>c</b>,<b>d</b>) Close range scenario (target range ≈ 20 m); <span class="html-italic">d<sub>u,tr</sub></span> = <span class="html-italic">d<sub>v,tr</sub></span> = 300 pixels; prediction error ≈ 20 pixels.</p>
Full article ">Figure 10
<p>UAVs exploited for the flight test campaign. (<b>a</b>) Tracker UAV for database A: customized Pelican by Ascending Technologies. (<b>b</b>) Target UAV for database A: customized X8+ by 3D Robotics. (<b>c</b>) Target and tracker UAV for database B: customized M100 by DJI.</p>
Full article ">Figure 11
<p>Example of images from FT3-A. The target (i.e., the X8+ octocopter) occupies a few pixels as highlighted by the zoom on the right side of each figure. (<b>a</b>,<b>b</b>). Target below the horizon. (<b>c</b>,<b>d</b>) Target above the horizon hindered by clouds.</p>
Full article ">Figure 12
<p>Example of images from FT1-B (<b>a</b>) and FT2-B (<b>b</b>).</p>
Full article ">Figure 13
<p>DL-based detector performance as a function of <span class="html-italic">τ<sub>det</sub></span>. FT3-A composed of 381 frames. (<b>a</b>) <span class="html-italic">Target UAV prediction</span> enabled (<span class="html-italic">d<sub>u</sub></span> = 100 pixels, <span class="html-italic">N<sub>w</sub></span> = 7 search windows). (<b>b</b>) <span class="html-italic">Target UAV prediction</span> disabled (<span class="html-italic">d<sub>u</sub></span> = <span class="html-italic">d<sub>v</sub></span> = 100 pixels, <span class="html-italic">N<sub>w</sub></span> = 28 search windows).</p>
Full article ">Figure 14
<p>(<b>a</b>) Detection and tracking test on FT1-B (1330 images). Distribution of <span class="html-italic">S<sub>max</sub></span> as a function of the target-tracker relative distance. (<b>b</b>) Histogram providing the distribution of the target-tracker range characterizing the 650 images selected from FT1-A and FT2-A.</p>
Full article ">Figure 15
<p>Variation of the target-chaser relative distance (blue line) during the FT1-B (1330 images). The DL-based detector and tracker are applied setting <span class="html-italic">τ<sub>det</sub></span> to 0.20 and <span class="html-italic">τ<sub>tr</sub></span> to 0.075. The frames where the algorithmic architecture provides correct detections are highlighted with red (target inside the FOV) and green (target outside the FOV) stars.</p>
Full article ">
18 pages, 4944 KiB  
Article
Digital Magnetic Compass Integration with Stationary, Land-Based Electro-Optical Multi-Sensor Surveillance System
by Branko Livada, Saša Vujić, Dragan Radić, Tomislav Unkašević and Zoran Banjac
Sensors 2019, 19(19), 4331; https://doi.org/10.3390/s19194331 - 7 Oct 2019
Cited by 11 | Viewed by 9332
Abstract
Multi-sensor imaging systems using the global navigation satellite system (GNSS) and digital magnetic compass (DMC) for geo-referencing have an important role and wide application in long-range surveillance systems. To achieve the required system heading accuracy, the specific magnetic compass calibration and compensation procedures, [...] Read more.
Multi-sensor imaging systems using the global navigation satellite system (GNSS) and digital magnetic compass (DMC) for geo-referencing have an important role and wide application in long-range surveillance systems. To achieve the required system heading accuracy, the specific magnetic compass calibration and compensation procedures, which highly depend on the application conditions, should be applied. The DMC compensation technique suitable for the operation environment is described and different technical solutions are studied. The application of the swinging procedure was shown to be a good solution for DMC compensation in a given application. The selected DMC was built into a system to be experimentally evaluated under both laboratory and field conditions. The implementation of the compensation procedure and magnetic sensor integration in systems is described. The heading accuracy measurement results show that the DMC can be successfully integrated and used in long-range surveillance systems, providing the required geo-referencing data. Full article
(This article belongs to the Section Physical Sensors)
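The swinging procedure summarized in this abstract compensates the compass deviation with low-order harmonic components of the heading. A minimal sketch of that idea, fitting the classical model delta(H) = A + B*sin(H) + C*cos(H) + D*sin(2H) + E*cos(2H) by least squares and subtracting it from raw readings, is given below; the coefficient values and the 30 degree swinging step are assumed for illustration.

```python
import numpy as np

def fit_deviation_coefficients(headings_deg, deviations_deg):
    """Least-squares fit of the classical compass deviation model
    delta(H) = A + B*sin(H) + C*cos(H) + D*sin(2H) + E*cos(2H)
    from swinging measurements (heading, observed deviation)."""
    h = np.radians(np.asarray(headings_deg, dtype=float))
    basis = np.column_stack([np.ones_like(h), np.sin(h), np.cos(h),
                             np.sin(2 * h), np.cos(2 * h)])
    coeffs, *_ = np.linalg.lstsq(basis, np.asarray(deviations_deg, dtype=float), rcond=None)
    return coeffs  # A, B, C, D, E in degrees

def compensate(heading_deg, coeffs):
    """Subtract the modelled deviation from a raw compass reading."""
    h = np.radians(heading_deg)
    a, b, c, d, e = coeffs
    delta = a + b * np.sin(h) + c * np.cos(h) + d * np.sin(2 * h) + e * np.cos(2 * h)
    return heading_deg - delta

# Synthetic swinging run every 30 degrees with assumed deviation coefficients.
true_coeffs = np.array([0.3, 1.2, -0.8, 0.4, 0.2])
headings = np.arange(0, 360, 30)
h = np.radians(headings)
observed = (true_coeffs[0] + true_coeffs[1] * np.sin(h) + true_coeffs[2] * np.cos(h)
            + true_coeffs[3] * np.sin(2 * h) + true_coeffs[4] * np.cos(2 * h))
print("fitted coefficients:", np.round(fit_deviation_coefficients(headings, observed), 2))
print("compensated 90 deg reading:", round(float(compensate(90.0, true_coeffs)), 2))
```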
Show Figures

Figure 1
<p>Long-range electro-optical multi-sensor structure: (<b>a</b>) general view of sensor mounting, (<b>b</b>) functional block diagram of the electro-optical head.</p>
Full article ">Figure 2
<p>Digital magnetic compass (DMC) orientation geometrical parameters: (<b>a</b>) DMC magnetometers internal orientation, (<b>b</b>) DMC orientation against horizontal plane, (<b>c</b>) measured magnetic north against geographical north.</p>
Full article ">Figure 3
<p>The Earth’s magnetic field model results for declination and inclination values, as per international geomagnetic reference field (IGRF) 12 GEN.</p>
Full article ">Figure 4
<p>Swinging procedure: (<b>a</b>) graphical description of swinging, (<b>b</b>) compensation harmonic components.</p>
Full article ">Figure 5
<p>Compass deviation compensation procedure flow chart.</p>
Full article ">Figure 6
<p>Compass deviation measurement results (CASE 1) with excess irregularity during measurements: (<b>a</b>) compass deviation angle vs. compass reading, (<b>b</b>) magnetic field projection in horizontal plane.</p>
Full article ">Figure 7
<p>Compass deviation measurement results (CASE 2) without irregularity during measurements (regular): (<b>a</b>) compass deviation angle vs. compass reading, (<b>b</b>) magnetic field projection in horizontal plane.</p>
Full article ">Figure 8
<p>Residual error distribution after compensation: (<b>a</b>) CASE 1, with accidental irregularity during measurements; (<b>b</b>) CASE 2, normal operation during measurements.</p>
Full article ">
17 pages, 4765 KiB  
Review
AR Enabled IoT for a Smart and Interactive Environment: A Survey and Future Directions
by Dongsik Jo and Gerard Jounghyun Kim
Sensors 2019, 19(19), 4330; https://doi.org/10.3390/s19194330 - 7 Oct 2019
Cited by 65 | Viewed by 12522
Abstract
Accompanying the advent of wireless networking and the Internet of Things (IoT), traditional augmented reality (AR) systems to visualize virtual 3D models of the real world are evolving into smart and interactive AR related to the context of things for physical objects. We [...] Read more.
Accompanying the advent of wireless networking and the Internet of Things (IoT), traditional augmented reality (AR) systems to visualize virtual 3D models of the real world are evolving into smart and interactive AR related to the context of things for physical objects. We propose the integration of AR and IoT in a complementary way, making AR scalable to cover objects everywhere with an acceptable level of performance and making interaction with IoT more intuitive. We identify three key components for realizing such a synergistic integration: (1) distributed and object-centric data management (including for AR services); (2) IoT object-guided tracking; (3) seamless interaction and content interoperability. We survey the current state of these respective areas and discuss the research issues involved in realizing a future smart and interactive living environment. Full article
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1
<p>Use case scenario highlighting object-centric data management and intuitive interaction to enable “everywhere” augmented reality (AR) service through the Internet of Things (IoT) infrastructure.</p>
Full article ">Figure 2
<p>Augmented reality (AR) control for electronic systems: (<b>a</b>) physical light control through remote touch based on the AR platform [<a href="#B24-sensors-19-04330" class="html-bibr">24</a>]; (<b>b</b>) control status for the operating manual using AR visual information [<a href="#B1-sensors-19-04330" class="html-bibr">1</a>].</p>
Full article ">Figure 3
<p>Internet of Things (IoT) combined with augmented reality (AR): overall possible architecture based on which smart and interactive AR services can be defined.</p>
Full article ">Figure 4
<p>System combining augmented reality (AR) with IoT with a comparison of Human Computer Interaction (HCI) styles such as ubiquitous computers and augmented interactions [<a href="#B31-sensors-19-04330" class="html-bibr">31</a>]: (<b>a</b>) ubiquitous computer, (<b>b</b>) augmented interaction, (<b>c</b>) proposed AR mixed with IoT.</p>
Full article ">Figure 5
<p>In situ operation of Internet of Things (IoT) lamps: an augmented reality (AR) user wearing a helmet-type AR device can interact intuitively (e.g., with hand gestures) to turn the lamp on or off with remote interaction, without having to fiddle with the actual device.</p>
Full article ">Figure 6
<p>Future distributed data management scheme for “everywhere” augmented reality (AR) service for physical objects.</p>
Full article ">Figure 7
<p>Webized augmented reality (AR) content representation in which virtual data are associated with a Web-accessible physical resource [<a href="#B37-sensors-19-04330" class="html-bibr">37</a>]: (<b>a</b>) virtual and physical resources of webized AR content, and (<b>b</b>) an example associating a physical sensor dataset [<a href="#B38-sensors-19-04330" class="html-bibr">38</a>].</p>
Full article ">Figure 8
<p>Main object recognition and tracking solutions for augmented reality (AR): (<b>a</b>) marker/fiducial, (<b>b</b>) feature, and (<b>c</b>) model-based.</p>
Full article ">Figure 9
<p>Various augmented reality (AR) interactions with Internet of Things (IoT) objects: (<b>a</b>) in situ/remote operation with traditional graphical user interface (GUI) button; (<b>b</b>) metaphorical natural interaction (virtual dragging) to invoke an object function [<a href="#B45-sensors-19-04330" class="html-bibr">45</a>]; (<b>c</b>) interacting in a virtual/augmented space to affect the physical world [<a href="#B11-sensors-19-04330" class="html-bibr">11</a>].</p>
Full article ">Figure 10
<p>Guided AR tracking considering object characteristics [<a href="#B1-sensors-19-04330" class="html-bibr">1</a>].</p>
Full article ">Figure 11
<p>Examples of complex situations with similar shapes or the same feature sets in surrounding areas: (<b>a</b>) same shapes and different-colored textures (<b>b</b>) same textures and different shapes.</p>
Full article ">Figure 12
<p>Different styles of AR interaction using object type and requirements. Such information is obtained directly from the object, similar to the case of tracking information: (<b>a</b>) natural gesture used to carry over a task such as controlling TV volume; (<b>b</b>) swiping interaction for an AR manual searching task; (<b>c</b>) pointing gesture for toggle interaction, such as operating a lamp’s on/off switch.</p>
Full article ">Figure 13
<p>Augmented reality (AR) approach for interaction of everyday IoT objects: (<b>a</b>) interactive tool to define the operation of indoor IoT objects by hierarchical functional grouping [<a href="#B24-sensors-19-04330" class="html-bibr">24</a>]; (<b>b</b>) AR system to define the behavior of objects to create new functionalities [<a href="#B20-sensors-19-04330" class="html-bibr">20</a>].</p>
Full article ">
17 pages, 10370 KiB  
Article
Spatial Aggregation Net: Point Cloud Semantic Segmentation Based on Multi-Directional Convolution
by Guorong Cai, Zuning Jiang, Zongyue Wang, Shangfeng Huang, Kai Chen, Xuyang Ge and Yundong Wu
Sensors 2019, 19(19), 4329; https://doi.org/10.3390/s19194329 - 7 Oct 2019
Cited by 9 | Viewed by 4142
Abstract
Semantic segmentation of 3D point clouds plays a vital role in autonomous driving, 3D maps, and smart cities, etc. Recent work such as PointSIFT shows that spatial structure information can improve the performance of semantic segmentation. Motivated by this phenomenon, we propose Spatial [...] Read more.
Semantic segmentation of 3D point clouds plays a vital role in autonomous driving, 3D maps, and smart cities. Recent work such as PointSIFT shows that spatial structure information can improve the performance of semantic segmentation. Motivated by this phenomenon, we propose Spatial Aggregation Net (SAN) for point cloud semantic segmentation. SAN is based on a multi-directional convolution scheme that exploits the spatial structure information of the point cloud. First, octant-search is employed to capture the neighboring points around each sampled point. Second, multi-directional convolution is used to extract information from different directions around the sampled points. Finally, max-pooling aggregates the information from the different directions. Experimental results on the ScanNet database show that the proposed SAN is comparable with state-of-the-art algorithms such as PointNet, PointNet++, and PointSIFT. In particular, our method performs better on flat areas, small objects, and the edge areas that connect objects. Moreover, our model offers a good trade-off between segmentation accuracy and time complexity. Full article
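To make the three-step pipeline in the abstract concrete (octant-search, per-direction feature extraction, aggregation), the following Python sketch groups a point's neighbors by spatial octant and max-pools a per-direction feature. The function names, the fixed `k_per_octant` neighbor budget, and the toy linear-plus-tanh "convolution" are illustrative assumptions; this is not the authors' implementation or network.

```python
import numpy as np

def octant_search(points, center, k_per_octant=4):
    """Group neighbors of `center` by the 8 spatial octants around it and
    keep the k nearest points in each octant (a sketch of octant-search)."""
    rel = points - center                              # (N, 3) offsets
    octant_id = (rel[:, 0] > 0).astype(int) * 4 + \
                (rel[:, 1] > 0).astype(int) * 2 + \
                (rel[:, 2] > 0).astype(int)            # octant index 0..7
    dist = np.linalg.norm(rel, axis=1)
    groups = []
    for o in range(8):
        idx = np.where(octant_id == o)[0]
        idx = idx[np.argsort(dist[idx])][:k_per_octant]
        pad = np.full(k_per_octant - len(idx), -1)     # -1 means "reuse center"
        groups.append(np.concatenate([idx, pad]))
    return np.stack(groups)                            # (8, k_per_octant)

def directional_aggregation(features, center_feat, groups, weight):
    """Extract one feature vector per octant, then max-pool across octants."""
    per_dir = []
    for idx in groups:
        feats = np.stack([features[i] if i >= 0 else center_feat for i in idx])
        per_dir.append(np.tanh(feats.mean(axis=0) @ weight))  # toy "convolution"
    return np.max(np.stack(per_dir), axis=0)           # aggregate over directions

# toy usage
pts = np.random.rand(1024, 3)
feats = np.random.rand(1024, 16)
w = np.random.rand(16, 32)
groups = octant_search(pts, pts[0])
out = directional_aggregation(feats, feats[0], groups, w)   # (32,) aggregated feature
```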
(This article belongs to the Special Issue Mobile Laser Scanning Systems)
Show Figures
Figure 1. Illustration of the proposed directional spatial aggregation module.
Figure 2. Illustration of the selection of neighbor points: (a) neighbor points selected by K nearest neighbor searching; (b) by ball query searching; (c) by octant-search.
Figure 3. The details of multi-directional convolution.
Figure 4. Illustration of the proposed end-to-end network architecture.
Figures 5–20. Segmentation results on Kitchen and Bedroom (plane segmentation, Figures 5–6), Lounge and Classroom (small objects, Figures 7–8), Restaurant and ConferenceRoom (edges, Figures 9–10), LivingRoom1 and LivingRoom2 (complex scenes, Figures 11–12), and the S3DIS scenes ConferenceRoom, open space, Office1, HallWay, Lounge, CopyRoom, Pantry, and Office2 (Figures 13–20). In each figure: (a) input, (e) ground truth, (b,f) classification result by PointNet++, (c,g) by PointSIFT, and (d,h) by SAN.
11 pages, 1947 KiB  
Article
A Virtual Pressure and Force Sensor for Safety Evaluation in Collaboration Robot Application
by Heonseop Shin, Sanghoon Kim, Kwang Seo and Sungsoo Rhim
Sensors 2019, 19(19), 4328; https://doi.org/10.3390/s19194328 - 7 Oct 2019
Cited by 9 | Viewed by 4463
Abstract
Recent developments in robotics have resulted in implementations that have drastically increased collaborative interactions between robots and humans. As robots have the potential to collide intentionally and/or unexpectedly with a human during the collaboration, effective measures to ensure human safety must be devised. [...] Read more.
Recent developments in robotics have resulted in implementations that have drastically increased collaborative interactions between robots and humans. As robots have the potential to collide intentionally and/or unexpectedly with a human during the collaboration, effective measures to ensure human safety must be devised. In order to estimate the collision safety of a robot, this study proposes a virtual sensor based on an analytical contact model that accurately estimates the peak collision force and pressure as the robot moves along a pre-defined path, even before the occurrence of a collision event, with a short computation time. The estimated physical interaction values that would be caused by the (hypothetical) collision were compared to the collision safety thresholds provided within ISO/TS 15066 to evaluate the safety of the operation. In this virtual collision sensor model, the nonlinear physical characteristics and the effect of the contact surface shape were included to assure the reliability of the prediction. To verify the effectiveness of the virtual sensor model, the force and pressure estimated by the model were compared with various experimental results and the numerical results obtained from a finite element simulation. Full article
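The abstract describes estimating the peak force and pressure of a hypothetical collision and checking them against ISO/TS 15066 thresholds. As a rough illustration of that screening idea only, the sketch below uses a simple two-mass, linear-spring contact model; the paper's own model is nonlinear and accounts for contact surface shape, which is not reproduced here, and every number in the example is a placeholder rather than a standard value.

```python
import math

def peak_contact_force(m_robot_eff, m_body, v_rel, k_body):
    """Peak force of a fully elastic, linear-spring transient contact between a
    moving robot part (effective mass m_robot_eff) and a human body region."""
    mu = 1.0 / (1.0 / m_robot_eff + 1.0 / m_body)  # reduced two-body mass [kg]
    energy = 0.5 * mu * v_rel ** 2                 # energy transferred on impact [J]
    return math.sqrt(2.0 * k_body * energy)        # peak contact force [N]

def is_contact_safe(force_n, pressure_n_cm2, force_limit_n, pressure_limit_n_cm2):
    """Screen the estimates against body-region thresholds (ISO/TS 15066 style)."""
    return force_n <= force_limit_n and pressure_n_cm2 <= pressure_limit_n_cm2

# Illustrative numbers only (not ISO table values): 12 kg effective robot mass,
# 40 kg body-region mass, 0.25 m/s approach speed, 75 kN/m tissue stiffness,
# 10 cm^2 assumed contact area, placeholder force/pressure limits.
force = peak_contact_force(m_robot_eff=12.0, m_body=40.0, v_rel=0.25, k_body=75e3)
pressure = force / 10.0                            # N per cm^2
print(round(force, 1), "N,", round(pressure, 1), "N/cm^2,",
      "safe:", is_contact_safe(force, pressure, 280.0, 190.0))
```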
Show Figures
Figure 1. Schematic description of the proposed one-layered contact model: (a) impactor and human-skin contact model; (b) cylinder-flat contact; (c) cylinder-sphere contact.
Figure 2. FE model of the cylindrical impactor type.
Figure 3. Indentation of silicone rubber: (a) experimental test configuration; (b) experiment; (c) FE simulation.
Figure 4. Comparison of mathematical model, FE model, and experiment: (a) contact force; (b) peak pressure.
Figure 5. Simulation results, trajectory of the robot: (a) γ = 0; (b) γ = 0.5; (c) γ = 1.
Figure 6. Simulation results for evaluating the collision safety of the UR5 robot: (a) effective mass; (b) velocity normal to the contact surface; (c) estimated collision force; (d) estimated collision pressure.
14 pages, 6417 KiB  
Article
A Passive Wireless Crack Sensor Based on Patch Antenna with Overlapping Sub-Patch
by Songtao Xue, Zhuoran Yi, Liyu Xie, Guochun Wan and Tao Ding
Sensors 2019, 19(19), 4327; https://doi.org/10.3390/s19194327 - 7 Oct 2019
Cited by 28 | Viewed by 4220
Abstract
Monolithic patch antennas for deformation measurements are designed to be stressed. To avoid the issues of incomplete strain transfer ratio and insufficient bonding strength of stressed antennas, this paper presents a passive wireless crack sensor based on an unstressed patch antenna. The rectangular [...] Read more.
Monolithic patch antennas for deformation measurements are designed to be stressed. To avoid the issues of incomplete strain transfer ratio and insufficient bonding strength of stressed antennas, this paper presents a passive wireless crack sensor based on an unstressed patch antenna. The rectangular radiation patch of the proposed sensor is partially covered by a radiation sub-patch, and the overlapped length between them determines the resonant frequency shift that represents the crack width. First, cavity model theory is adopted to show how the resonant frequencies of the crack sensor are related to the overlapped length between the patch antenna and the sub-patch. This relationship is further verified by numerical simulation using the Ansoft high-frequency structure simulator (HFSS), and the results show a sensitivity of 120.24 MHz/mm on average within an effective measuring range of 1.5 mm. One prototype of the proposed sensor was fabricated. The experiments validated that the resonant frequency shifts are linearly proportional to the applied crack width and that the resolution is suitable for crack width measurement. Full article
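As a back-of-the-envelope companion to the abstract, the sketch below shows (i) a first-order cavity-model estimate of a rectangular patch's fundamental resonance and (ii) how a measured frequency shift could be converted into a crack-width estimate using the reported average sensitivity of 120.24 MHz/mm. The effective permittivity value, the neglect of fringing-field length extension, and the example numbers are assumptions for illustration, not the authors' design parameters.

```python
C0 = 299_792_458.0   # speed of light [m/s]

def patch_resonant_freq(effective_length_m, eps_eff):
    """First-order cavity-model estimate of the fundamental resonance of a
    rectangular patch: f = c / (2 * L_eff * sqrt(eps_eff)); fringing fields ignored."""
    return C0 / (2.0 * effective_length_m * eps_eff ** 0.5)

def crack_width_from_shift(freq_shift_hz, sensitivity_hz_per_m=120.24e6 / 1e-3):
    """Invert the approximately linear frequency shift (about 120.24 MHz per mm of
    overlap change, per the abstract) into a crack-width estimate."""
    return freq_shift_hz / sensitivity_hz_per_m

# example: a 30 mm effective length with eps_eff = 4.4 (assumed substrate) and a
# 60 MHz frequency shift, which corresponds to roughly 0.5 mm of crack opening
print(patch_resonant_freq(30e-3, 4.4) / 1e9, "GHz")
print(crack_width_from_shift(60e6) * 1e3, "mm")
```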
(This article belongs to the Section Physical Sensors)
Show Figures
Figure 1. Concept of the crack sensor using a patch antenna: (a) axonometric drawing; (b) front view.
Figure 2. A normal rectangular patch antenna and the proposed crack sensor: (a) a normal rectangular patch antenna; (b) the proposed crack sensor using a sub-patch.
Figure 3. Schematic diagram of the crack sensor.
Figure 4. The induced current distribution pattern on the combined radiation patch for the fundamental mode.
Figure 5. The monostatic radar cross section (RCS) of the crack sensor over a wide range: (a) 0 mm to 0.95 mm with a resolution of 0.05 mm; (b) 0.50 mm to 6.00 mm with a resolution of 0.5 mm.
Figure 6. Relationship between the fundamental resonant frequency in the longitudinal direction and the overlapped length.
Figure 7. Flow chart of antenna production in the laboratory.
Figure 8. The equipment used to produce the patch antenna: (a) a thermal transfer printer; (b) corrosive liquid.
Figure 9. The manufactured crack sensor.
Figure 10. The experimental setup: (a) the network analyzer used in the experiment; (b) the micrometer; (c) the testing sensor; (d) the micrometer with the crack sensor.
Figure 11. The monostatic radar cross section (RCS) in the experiment.
Figure 12. The relationship between resonant frequency and overlapped length in theoretical calculation, numerical simulation, and experiment.
22 pages, 3740 KiB  
Article
Arbitrary Microphone Array Optimization Method Based on TDOA for Specific Localization Scenarios
by Haitao Liu, Thia Kirubarajan and Qian Xiao
Sensors 2019, 19(19), 4326; https://doi.org/10.3390/s19194326 - 7 Oct 2019
Cited by 12 | Viewed by 4670
Abstract
Various microphone array geometries (e.g., linear, circular, square, cubic, spherical, etc.) have been used to improve the positioning accuracy of sound source localization. However, whether these array structures are optimal for various specific localization scenarios is still a subject of debate. This paper [...] Read more.
Various microphone array geometries (e.g., linear, circular, square, cubic, spherical, etc.) have been used to improve the positioning accuracy of sound source localization. However, whether these array structures are optimal for various specific localization scenarios is still a subject of debate. This paper addresses a microphone array optimization method for sound source localization based on TDOA (time difference of arrival). The geometric structure of the microphone array is established in parametric form. A triangulation method with TDOA was used to build the spatial sound source location model, which consists of a group of nonlinear multivariate equations. Through reasonable transformation, the nonlinear multivariate equations can be converted to a group of linear equations that can be approximately solved by the weighted least square method. Then, an optimization model based on particle swarm optimization (PSO) algorithm was constructed to optimize the geometric parameters of the microphone array under different localization scenarios combined with the spatial sound source localization model. In the optimization model, a reasonable fitness evaluation function is established which can comprehensively consider the positioning accuracy and robustness of the microphone array. In order to verify the array optimization method, two specific localization scenarios and two array optimization strategies for each localization scenario were constructed. The optimal array structure parameters were obtained through numerical iteration simulation. The localization performance of the optimal array structures obtained by the method proposed in this paper was compared with the optimal structures proposed in the literature as well as with random array structures. The simulation results show that the optimized array structure gave better positioning accuracy and robustness under both specific localization scenarios. The optimization model proposed could solve the problem of array geometric structure design based on TDOA and could achieve the customization of microphone array structures under different specific localization scenarios. Full article
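The localization model in the abstract (TDOA equations transformed into linear equations and solved by weighted least squares) corresponds to a standard linearization, which the sketch below implements for an arbitrary microphone layout. The PSO search over array geometry and the paper's specific fitness function are not reproduced; the uniform weighting default and the speed-of-sound constant are assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, assumed

def tdoa_localize(mic_pos, tdoa, weights=None):
    """Weighted least-squares source localization from TDOAs relative to mic 0.

    mic_pos : (M, 3) microphone coordinates; mic 0 is the reference.
    tdoa    : (M-1,) arrival-time differences tau_i - tau_0 in seconds.
    Returns the estimated source position (3,).
    """
    p0 = mic_pos[0]
    d = SPEED_OF_SOUND * np.asarray(tdoa)            # range differences r_i - r_0
    P = mic_pos[1:]
    # Linearized system in the unknowns [source s (3,), reference range r_0]:
    #   ||p_i||^2 - ||p_0||^2 - d_i^2 = 2 (p_i - p_0) . s + 2 d_i r_0
    A = np.hstack([2.0 * (P - p0), 2.0 * d[:, None]])
    b = np.sum(P**2, axis=1) - np.sum(p0**2) - d**2
    W = np.eye(len(d)) if weights is None else np.diag(weights)
    x, *_ = np.linalg.lstsq(np.sqrt(W) @ A, np.sqrt(W) @ b, rcond=None)
    return x[:3]

# toy check: 5-mic array and a known source; noiseless TDOAs recover the position
mics = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
src = np.array([2.0, 1.5, 0.8])
ranges = np.linalg.norm(mics - src, axis=1)
print(tdoa_localize(mics, (ranges[1:] - ranges[0]) / SPEED_OF_SOUND))
```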
(This article belongs to the Special Issue Acoustic Wave Sensors for Gaseous and Liquid Environments)
Show Figures
Figure 1. Diagram of the microphones’ coordinates.
Figure 2. Flow chart for microphone array optimization. PSO: particle swarm optimization.
Figure 3. Scenario I—ring-shaped sound source distribution.
Figure 4. The fitness evolution curve under scenario I for the two kinds of optimal array.
Figure 5. The optimized array structure under scenario I: (a) Opt-array I; (b) Opt-array II.
Figure 6. Random distribution of sound sources in the cyclic annular band.
Figure 7. The localization error statistics for the arrays under scenario I.
Figure 8. Scenario II—cuboid-shaped sound source distribution.
Figure 9. The fitness evolution curve under scenario II for the two kinds of optimal array.
Figure 10. The optimized array structure under scenario II: (a) Opt-array I; (b) Opt-array II.
Figure 11. The random distribution of sound sources in the cuboid space band.
Figure 12. The localization error statistics for the arrays under scenario II.
Figure 13. The optimized array structure with seven microphones under scenario II: (a) Opt-array-7mic I; (b) Opt-array-7mic II.
Figure 14. The localization error statistics for the arrays with seven microphones under scenario II.
18 pages, 1221 KiB  
Article
Developing a Neural–Kalman Filtering Approach for Estimating Traffic Stream Density Using Probe Vehicle Data
by Mohammad A. Aljamal, Hossam M. Abdelghaffar and Hesham A. Rakha
Sensors 2019, 19(19), 4325; https://doi.org/10.3390/s19194325 - 7 Oct 2019
Cited by 22 | Viewed by 4459
Abstract
This paper presents a novel model for estimating the number of vehicles along signalized approaches. The proposed estimation algorithm utilizes the adaptive Kalman filter (AKF) to produce reliable traffic vehicle count estimates, considering real-time estimates of the system noise characteristics. The AKF utilizes [...] Read more.
This paper presents a novel model for estimating the number of vehicles along signalized approaches. The proposed estimation algorithm utilizes the adaptive Kalman filter (AKF) to produce reliable traffic vehicle count estimates, considering real-time estimates of the system noise characteristics. The AKF utilizes only real-time probe vehicle data. The AKF is demonstrated to outperform the traditional Kalman filter, reducing the prediction error by up to 29%. In addition, the paper introduces a novel approach that combines the AKF with a neural network (AKFNN) to enhance the vehicle count estimates, where the neural network is employed to estimate the probe vehicles’ market penetration rate. Results indicate that the accuracy of vehicle count estimates is significantly improved using the AKFNN approach (by up to 26%) over the AKF. Moreover, the paper investigates the sensitivity of the proposed AKF model to the initial conditions, such as the initial estimate of vehicle counts, initial mean estimate of the state system, and the initial covariance of the state estimate. The results demonstrate that the AKF is sensitive to the initial conditions. More accurate estimates could be achieved if the initial conditions are appropriately selected. In conclusion, the proposed AKF is more accurate than the traditional Kalman filter. Finally, the AKFNN approach is more accurate than the AKF and the traditional Kalman filter since the AKFNN uses more accurate values of the probe vehicle market penetration rate. Full article
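To illustrate the kind of adaptive Kalman filtering the abstract refers to, the sketch below runs a scalar filter on a link vehicle count with an innovation-window estimate of the measurement noise. The state and measurement models, the window length, and the fact that the probe market penetration rate is supplied directly (rather than estimated by a neural network, as in the AKFNN approach) are simplifying assumptions, not the paper's formulation.

```python
import numpy as np

def adaptive_kf_counts(z, u, p_rate, q0=4.0, r0=4.0, window=10):
    """Scalar adaptive Kalman filter for a link vehicle count.

    State model:   N_k = N_{k-1} + u_k + w_k   (u_k: net in/out flow per interval)
    Measurement:   z_k = p_rate * N_k + v_k    (z_k: observed probe-vehicle count)
    R is re-estimated from a sliding window of innovations.
    """
    n_hat, P, R, Q = 0.0, 100.0, r0, q0
    innovations, estimates = [], []
    for zk, uk in zip(z, u):
        # predict
        n_pred = n_hat + uk
        P_pred = P + Q
        # innovation and adaptive measurement-noise estimate
        nu = zk - p_rate * n_pred
        innovations.append(nu)
        if len(innovations) >= window:
            c_nu = np.var(innovations[-window:])
            R = max(c_nu - p_rate**2 * P_pred, 1e-3)
        # update
        S = p_rate**2 * P_pred + R
        K = P_pred * p_rate / S
        n_hat = n_pred + K * nu
        P = (1.0 - K * p_rate) * P_pred
        estimates.append(max(n_hat, 0.0))
    return np.array(estimates)

# toy usage with synthetic data and a 20% probe penetration rate
rng = np.random.default_rng(0)
true_n = np.clip(np.cumsum(rng.integers(-3, 4, 120)) + 30, 0, None)
flows = np.diff(np.concatenate([[30], true_n])).astype(float)
probes = rng.binomial(true_n, 0.2).astype(float)
print(adaptive_kf_counts(probes, flows, p_rate=0.2)[:5])
```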
(This article belongs to the Special Issue Intelligent Transportation Related Complex Systems and Sensors)
Show Figures
Figure 1. Tested link.
Figure 2. Flowchart for the adaptive Kalman filter with a neural network (AKFNN) approach.
Figure 3. Error histogram for the training, validation, and testing data sets.
Figure 4. Actual and estimated values of ρ_out for different level of market penetration (LMP) scenarios: (a–i) 10% to 90% LMP in 10% steps.
Figure 5. Actual and estimated vehicle counts over estimation intervals for different LMP scenarios: (a–i) 10% to 90% LMP in 10% steps.
Figure 6. Impact of the initial conditions on the AKF model: (a) initial estimate values N_i; (b) initial mean estimate values m_i; (c) initial covariance estimate values P_i.
11 pages, 3805 KiB  
Article
Development of a New Embedded Dynamometer for the Measurement of Forces and Torques at the Ski-Binding Interface
by Frédéric Meyer, Alain Prenleloup and Alain Schorderet
Sensors 2019, 19(19), 4324; https://doi.org/10.3390/s19194324 - 7 Oct 2019
Cited by 12 | Viewed by 5381
Abstract
In alpine skiing, understanding the interaction between skiers and snow is of primary importance for both injury prevention as well as performance analysis. Risk of injuries is directly linked to constraints undergone by the skier. A force platform placed as an interface between [...] Read more.
In alpine skiing, understanding the interaction between skiers and snow is of primary importance for both injury prevention and performance analysis. Risk of injury is directly linked to the constraints undergone by the skier. A force platform placed as an interface between the ski and the skier should allow a better understanding of these constraints, thereby supporting the development of a more reliable binding release system. It should also provide useful information for better physical condition training of athletes and non-professional skiers to reduce the risk of injury. Force and torque measurements also allow for a better understanding of skiing technique (e.g., load evolution during turns, force distribution between the left and right legs). Therefore, the aim of this project was to develop a new embedded force platform that could be placed between the ski boot and the binding. First, the physical specifications of the dynamometer and the measurement scope are defined. Then, several iterations of parametric 3D modeling and finite element analysis were performed to obtain an optimal design. Two platforms were then machined and equipped with strain gauges. Finally, the calibration was performed on a dedicated test bench. The accuracy of the system was between 1.3 and 12.8% of the applied load. These results show very good system linearity, indicating a sound design. Field tests also highlighted the system's ease of use and reliability. This new dynamometer will allow skiers to wear their own equipment while measuring force and torque in real skiing conditions. Full article
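Converting strain-gauge signals into forces and torques typically relies on a calibration matrix identified on a test bench; the sketch below shows a generic least-squares version of that step with synthetic data. The channel count, the five load components, and the linearity assumption are illustrative choices and do not reproduce the authors' calibration procedure.

```python
import numpy as np

def fit_calibration_matrix(signals, loads):
    """Least-squares calibration of a multi-axis dynamometer.

    signals : (n_samples, n_channels) strain-gauge bridge outputs recorded while
    loads   : (n_samples, n_axes) known forces/torques were applied on a test bench.
    Returns C such that loads ≈ signals @ C (per-axis linear fit, assumes linearity).
    """
    C, *_ = np.linalg.lstsq(signals, loads, rcond=None)
    return C

def apply_calibration(signals, C):
    """Convert raw gauge signals measured in the field into force/torque estimates."""
    return signals @ C

# toy usage: 6 channels -> 5 components (Fy, Fz, Mx, My, Mz), synthetic bench data
rng = np.random.default_rng(1)
true_C = rng.normal(size=(6, 5))
bench_signals = rng.normal(size=(200, 6))
bench_loads = bench_signals @ true_C + 0.01 * rng.normal(size=(200, 5))
C_hat = fit_calibration_matrix(bench_signals, bench_loads)
print(np.allclose(C_hat, true_C, atol=0.05))   # recovered within the noise level
```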
(This article belongs to the Special Issue Sensors for Biomechanics Application)
Show Figures
Figure 1. (A) First iteration of the design. (B) Second iteration with two sensors fixed to a rigid bed. (C) Final iteration with an integrated bed and ski binding interface. Sensors were placed on the front and back of the plate. Colors represent the von Mises values (blue: no deformation; red: maximal deformation).
Figure 2. (A) The two manufactured force platforms with sensor positions marked in red. (B) Schematic view of the sensor, composed of two stages: the upper part measures the Fy and Mz components, while the lower part measures Fz, Mx, and My. Φ indicates the different recorded signals.
Figure 3. Setup used to calibrate torque around the X axis. The load was applied by pulling up the horizontal iron bar. On the left side of the platform the load was applied directly upward, while on the right side it passed through a mechanical system that transferred it in the top-down direction. (A) Setup with the ski; green arrows indicate the load transmission. (B) Same setup without the ski.
Figure 4. The three fastening situations for the fifth test condition; the ski could bend between the two fastening elements (in green): (A) distance of 0.5 m; (B) 0.9 m; (C) 1.3 m.
Figure 5. An athlete in the giant slalom equipped with the platforms and the backpack. The reference frame of the right force platform is also shown.
Figure 6. Mean Fy (A), Fz (B), Mx (C), My (D), and Mz (E) of three skiers and two runs for both skis during a turn cycle, with the 95% limit of agreement (grey area).
Figure 7. Mean Fy (A), Fz (B), Mx (C), and My (D) load distribution between the external and internal ski along the turn cycle, for three skiers and two runs.
14 pages, 1488 KiB  
Article
Are Existing Monocular Computer Vision-Based 3D Motion Capture Approaches Ready for Deployment? A Methodological Study on the Example of Alpine Skiing
by Mirela Ostrek, Helge Rhodin, Pascal Fua, Erich Müller and Jörg Spörri
Sensors 2019, 19(19), 4323; https://doi.org/10.3390/s19194323 - 6 Oct 2019
Cited by 17 | Viewed by 8784
Abstract
In this study, we compared a monocular computer vision (MCV)-based approach with the gold standard for collecting kinematic data on ski tracks (i.e., video-based stereophotogrammetry) and assessed its deployment readiness for answering applied research questions in the context of alpine skiing. The investigated [...] Read more.
In this study, we compared a monocular computer vision (MCV)-based approach with the gold standard for collecting kinematic data on ski tracks (i.e., video-based stereophotogrammetry) and assessed its deployment readiness for answering applied research questions in the context of alpine skiing. The investigated MCV-based approach predicted the three-dimensional human pose and ski orientation based on the image data from a single camera. The data set used for training and testing the underlying deep nets originated from a field experiment with six competitive alpine skiers. The normalized mean per joint position error of the MCV-based approach was found to be 0.08 ± 0.01 m. Knee flexion showed an accuracy and precision (in parentheses) of 0.4 ± 7.1° (7.2 ± 1.5°) for the outside leg, and −0.2 ± 5.0° (6.7 ± 1.1°) for the inside leg. For hip flexion, the corresponding values were −0.4 ± 6.1° (4.4° ± 1.5°) and −0.7 ± 4.7° (3.7 ± 1.0°), respectively. The accuracy and precision of skiing-related metrics were revealed to be 0.03 ± 0.01 m (0.01 ± 0.00 m) for relative center of mass position, −0.1 ± 3.8° (3.4 ± 0.9°) for lean angle, 0.01 ± 0.03 m (0.02 ± 0.01 m) for center of mass to outside ankle distance, 0.01 ± 0.05 m (0.03 ± 0.01 m) for fore/aft position, and 0.00 ± 0.01 m2 (0.01 ± 0.00 m2) for drag area. Such magnitudes can be considered acceptable for detecting relevant differences in the context of alpine skiing. Full article
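The abstract reports pose accuracy as a normalized mean per joint position error (NMPJPE); the sketch below computes one common variant of that metric, zero-centering both poses and applying an optimal least-squares scale to the prediction before averaging per-joint errors. The exact normalization used in the paper may differ, and the 18-joint toy poses are synthetic.

```python
import numpy as np

def nmpjpe(pred, gt):
    """Normalized mean per joint position error (one common definition):
    both poses are zero-centered, the prediction is rescaled by the optimal
    least-squares scale factor, and the mean per-joint Euclidean error is returned.

    pred, gt : (J, 3) arrays of joint positions in meters.
    """
    pred_c = pred - pred.mean(axis=0)
    gt_c = gt - gt.mean(axis=0)
    scale = np.sum(gt_c * pred_c) / np.sum(pred_c * pred_c)   # optimal scale factor
    return np.mean(np.linalg.norm(scale * pred_c - gt_c, axis=1))

# toy usage on an 18-joint skeleton (as in the segment model of Figure 2)
rng = np.random.default_rng(2)
gt_pose = rng.normal(size=(18, 3))
pred_pose = 0.9 * gt_pose + 0.02 * rng.normal(size=(18, 3)) + 1.0  # scaled, noisy, shifted
print(round(nmpjpe(pred_pose, gt_pose), 4), "m")
```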
(This article belongs to the Section Physical Sensors)
Show Figures
Figure 1. Setup overview visualizing the camera and gate positions measured in meters, as well as the capture volume spanned by three consecutive giant slalom gates. The small person size relative to the camera distance highlights the difficulty of estimating skiing poses accurately.
Figure 2. Segment model, including all 18 joint centers and connecting segments considered for analysis. 1: head; 2: neck; 3: left shoulder; 4: right shoulder; 5: left elbow; 6: right elbow; 7: left hand; 8: right hand; 9: left hip; 10: right hip; 11: left knee; 12: right knee; 13: left ankle; 14: right ankle; 15: left ski tail; 16: right ski tail; 17: left ski tip; 18: right ski tip.
Figure 3. Definitions of the fore/aft position (d_Fore/Aft) and the lean angle (λ_Lean): (a) local coordinate systems are defined relative to the ski and up direction; (b) the fore/aft position is the projection of the COM on the x’ axis; (c) the lean angle is the orientation of the COM in the x”-y” plane.
Figure 4. Normalized mean per joint position error (NMPJPE) for the best camera perspectives (CAM 3 and CAM 5; frontal-lateral) over one turn cycle.
Previous Issue