Sensors, Volume 19, Issue 16 (August-2 2019) – 187 articles

Cover Story (view full-size image): 3D printing has shown strong potential for the manufacture of microstructured polymer optical fiber (MPOF). Over three decades, many techniques for fabricating MPOF have been proposed, yet these are limited to certain structures and require complex fabrication procedures. By combining the extrusion technique with the capabilities of 3D printers, namely their built-in temperature controllers and polymer filament feeding systems, we investigated a novel technique for manufacturing MPOFs in a single-step procedure by means of a 3D printer. The suspended-core polymer optical fiber (SC-MPOF) is the first structure fabricated with this technique, owing to its potential in many sensing applications. This development is of interest not only in engineering, but also in medical applications and academia, where low-cost sensing devices with a fast fabrication process are crucial. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
27 pages, 5326 KiB  
Article
Development of an Eye Tracking-Based Human-Computer Interface for Real-Time Applications
by Radu Gabriel Bozomitu, Alexandru Păsărică, Daniela Tărniceriu and Cristian Rotariu
Sensors 2019, 19(16), 3630; https://doi.org/10.3390/s19163630 - 20 Aug 2019
Cited by 26 | Viewed by 8565
Abstract
In this paper, the development of an eye-tracking-based human–computer interface for real-time applications is presented. To identify the most appropriate pupil detection algorithm for the proposed interface, we analyzed the performance of eight algorithms, six of which we developed based on the most representative pupil center detection techniques. The accuracy of each algorithm was evaluated for different eye images from four representative databases and for video eye images using a new testing protocol for a scene image. For all video recordings, we determined the detection rate within a 50-pixel circular target area placed at different positions in the scene image, the cursor controllability and stability on the user screen, and the running time. The experimental results for a set of 30 subjects show a detection rate over 84% at 50 pixels for all proposed algorithms, and the best result (91.39%) was obtained with the circular Hough transform approach. Finally, this algorithm was implemented in the proposed interface to develop an eye typing application based on a virtual keyboard. The mean typing speed of the subjects who tested the system was higher than 20 characters per minute. Full article
(This article belongs to the Section Intelligent Sensors)
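As context for the circular Hough transform approach that performed best here, the sketch below shows a minimal pupil-center detector built on OpenCV's HoughCircles. It is not the authors' implementation; all parameter values are illustrative assumptions.

```python
# Hypothetical sketch of circular-Hough pupil detection (not the authors' code).
import cv2
import numpy as np

def detect_pupil(gray_eye_image):
    """Estimate the pupil center and radius with a circular Hough transform."""
    # Smooth first so the dark pupil dominates the internal Canny edge map.
    blurred = cv2.medianBlur(gray_eye_image, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
        param1=80,                    # Canny high threshold (assumed)
        param2=20,                    # accumulator threshold: lower -> more candidates
        minRadius=10, maxRadius=60)   # plausible pupil radii in pixels (assumed)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return (x, y), r                  # pupil center and radius
```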
Show Figures
Figure 1: Head-mounted eye tracking interface.
Figure 2: Mapping between eye pupil image and user screen cursor [50] (© 2015 IEEE).
Figure 3: Mapping results on the scene image (user screen) for circles with radii of 3, 5, and 7 pixels in the raw eye image: (a) low accuracy of the calibration procedure (mapping rate (MR) = 15.96); (b) high accuracy of the calibration procedure (MR = 8.86).
Figure 4: Mapping rate determination for low and high accuracies of the calibration procedure.
Figure 5: Pupil reconstruction stages when the corneal reflection is placed inside the pupil area and when it is located on the pupil edge: (a,d) eye pupil images after the binarization stage; (b,e) morphological reconstructions by dilation, filling the gaps due to corneal reflection, and erosion; (c,f) pupil contour detection.
Figure 6: Elliptical Hough transform principle.
Figure 7: Principle of the projection function method algorithm.
Figure 8: Detection rate vs. pixel error for all PDAs running on databases (a) DB1, (b) DB2-CIL, (c) DB3-SW-p1-left, (d) DB3-SW-p1-right, (e) DB3-SW-p2-left, (f) DB3-SW-p2-right, (g) DB4-ExCuSe.
Figure 9: Typical detection errors of the analyzed PDAs: (a) CHT, detection error due to the elliptical shape of the pupil; (b) EHT, canceling the error introduced by the CHT algorithm; (c,d) RANSAC, two different runs on the same eye image with corneal reflection; (e) ExCuSe, loss of detection in an image affected by corneal reflection and occluded by eyelashes; (f) PROJ and (g) LSFE, images sensitive to the binarization stage (noisy eye image or pupil with occlusions). Legend: green line—ideal pupil contour and center; red line—detected pupil contour and center.
Figure 10: Results provided by the EHT and CHT algorithms on the same noisy eye image with a circular pupil shape: (a) failed detection for the EHT algorithm; (b) accurate detection for the CHT algorithm.
Figure 11: Typical detection errors provided by the EHT algorithm for noisy eye images.
Figure 12: Real-time testing scenario: (a) cursor movement tracking on the user screen and (b) signals provided by the PDA on both axes of the coordinate system.
Figure 13: Experimental results obtained on the video eye images for the (a) CENT, (b) CHT, (c) EHT, (d) ExCuSe, (e) LSFE, (f) PROJ, (g) RANSAC, and (h) Starburst algorithms.
Figure 14: (a) Detection rate and (b) cluster detection rate depending on target area radius for all studied algorithms for real-time applications.
Figure 15: (a) Cursor movement tracking on the virtual keyboard developed by OptiKey [56] and (b) signals provided by the PDA on both axes of the coordinate system during typing of a sentence.
Figure 16: Mean value and standard deviation of the (a) typing speed per subject and (b) TER per subject.
Figure 17: System Usability Scale (SUS) score of each item of the questionnaire.
23 pages, 3926 KiB  
Article
Adaptive Neuro-Fuzzy Fusion of Multi-Sensor Data for Monitoring a Pilot’s Workload Condition
by Xia Zhang, Youchao Sun, Zhifan Qiu, Junping Bao and Yanjun Zhang
Sensors 2019, 19(16), 3629; https://doi.org/10.3390/s19163629 - 20 Aug 2019
Cited by 5 | Viewed by 4118
Abstract
To realize an early warning of unbalanced workload in the aircraft cockpit, the pilot's real-time workload condition must be monitored. To build the mapping relationship from physiological and flight data to workload, a multi-source data fusion model is proposed based on a fuzzy neural network, structured mainly as a principal components extraction layer, a fuzzification layer, a fuzzy rules matching layer, and a normalization layer. Because the characteristic variables contributing to workload are highly coupled, principal component analysis reconstructs the feature data by reducing its dimension. Given the uncertainty of any single variable as a reflection of overall workload, a fuzzy membership function and fuzzy control rules are defined to abstract the inference process. An error feedforward algorithm based on gradient descent is utilized for parameter learning; convergence speed and accuracy can be adjusted by controlling the gradient descent rate and the error tolerance threshold. For the takeoff and initial climb tasks of a Boeing 737-800 aircraft, crucial performance indicators (pitch angle, heading, and airspeed) as well as physiological indicators (electrocardiogram (ECG), respiration, and eye movements) were selected as features. The mapping relationship between the multi-source data and the comprehensive workload level synthesized using the NASA task load index was established. Experimental results revealed that the predicted workload corresponding to different flight phases and difficulty levels showed clear distinctions, demonstrating the validity of the data fusion. Full article
(This article belongs to the Collection Multi-Sensor Information Fusion)
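The following is a minimal sketch of the kind of gradient-descent training described above (Gaussian fuzzification, rule firing, normalization, and a learning rate with an error tolerance threshold). The network structure, data, and all constants are stand-ins, not the paper's model.

```python
# Illustrative sketch (not the authors' model): a tiny Sugeno-style fuzzy
# system trained by gradient descent on synthetic "workload" data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (200, 3))            # stand-in features (e.g., PCA scores)
y = X @ np.array([0.5, 0.3, 0.2]) * 100    # stand-in workload labels (0..100)

n_rules = 4
centers = rng.uniform(0, 1, (n_rules, X.shape[1]))  # Gaussian MF centers
sigma = np.full((n_rules, X.shape[1]), 0.5)         # Gaussian MF widths
w = rng.normal(0, 1, n_rules)                        # rule consequents

beta, tol = 0.05, 1.0    # learning rate and error tolerance threshold (assumed)
for epoch in range(500):
    # Fuzzification: rule firing strengths via products of Gaussian memberships.
    mu = np.exp(-((X[:, None, :] - centers) ** 2) / (2 * sigma ** 2))
    fire = mu.prod(axis=2)                           # (N, n_rules)
    norm = fire / fire.sum(axis=1, keepdims=True)    # normalization layer
    y_hat = norm @ w
    err = y_hat - y
    if 0.5 * np.mean(err ** 2) < tol:                # error tolerance check
        break
    # Gradient step on the consequent weights only (simplest variant).
    w -= beta * (norm.T @ err) / len(y)
```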
Show Figures
Figure 1: Fuzzy neural network model with a multi-layer structure.
Figure 2: Composite membership function graph.
Figure 3: Flow chart of the error feedforward algorithm based on gradient descent.
Figure 4: Human–machine–environment closed-loop circuit in the cockpit.
Figure 5: Experiment scenes of flight simulation. (a) Flight simulation experimental platform. (b) Flight personnel wearing physiological monitoring sensors during the experiment.
Figure 6: Relationship between error cost and learning times under different learning rates (n = 4).
Figure 7: Comparison between desired outputs and actual outputs (β = 0.75, RMSE = 4.85).
Figure 8: Relationship between error cost and learning times under different learning rates (n = 5).
Figure 9: Comparison between desired outputs and actual outputs (β = 0.75, RMSE = 4.96).
Figure 10: Mean predicted workload values during different flight phases (Phase 1: Taxiing, Phase 2: Normal Climbing, Phase 3: Maneuvering Under Fault, and Phase 4: Flaring Out).
Figure 11: Mean predicted workload values under different mission difficulties.
17 pages, 9231 KiB  
Article
Smoke Obscuration Measurements in Reduced-Scale Fire Modelling Based on Froude Number Similarity
by Wojciech Węgrzyński, Piotr Antosiewicz, Tomasz Burdzy, Mateusz Zimny and Adam Krasuski
Sensors 2019, 19(16), 3628; https://doi.org/10.3390/s19163628 - 20 Aug 2019
Cited by 6 | Viewed by 6234
Abstract
A common method for investigating various fire- and smoke-related phenomena is reduced-scale fire modelling, which uses conservation of the Froude number as its primary similarity criterion. Smoke obscuration measurements have not commonly been used in this approach. In this paper, we propose a new type of optical densitometer that allows smoke obscuration measurements at reduced scale. The device uses a set of mirrors to increase the optical path length, so that it can follow the geometrical scale of the model while still measuring smoke obscuration as if at full scale. The principle of operation is based on the Bouguer-Lambert-Beer law, with modifications related to the Froude number scaling principles to streamline the measurements. The proposed low-budget (<$1000) device was built, calibrated with a set of reference optical filters, and used in a series of full-scale (1:1) and reduced-scale (1:4) experiments with n-heptane fires in a small compartment. The main limitation of this study is the assumption of similar soot production in full- and reduced-scale fires, which may not hold for many Froude number scaling applications and must therefore be investigated on a case-by-case basis. In our case, the results are promising: the obscuration measured at reduced scale was within 10% of the averaged full-scale measurements, and transient changes of the smoke layer optical density during combustion and after the smoke layer settled were well represented. Full article
(This article belongs to the Section Optical Sensors)
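The mirror-folding idea follows directly from the Bouguer-Lambert-Beer law: n passes of length l/n accumulate the same attenuation as one full-scale path of length l, provided the extinction coefficient is the same at both scales (the soot-production assumption the authors flag). A small numeric check, with invented values:

```python
# Hedged illustration of the folded-path principle; extinction coefficient,
# path length, and scale factor below are invented for the example.
import numpy as np

def transmittance(extinction, path_m):
    """Bouguer-Lambert-Beer: T = exp(-K * l) for a homogeneous smoke layer."""
    return np.exp(-extinction * path_m)

scale = 4                      # 1:4 reduced-scale model, as in the paper
full_path = 2.0                # assumed full-scale optical path in metres
segment = full_path / scale    # geometric beam length inside the model
# Folding the beam 'scale' times with mirrors restores the full-scale path,
# so the reduced-scale device reads the same transmittance:
assert np.isclose(transmittance(1.2, segment) ** scale,
                  transmittance(1.2, full_path))
```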
Show Figures
Graphical abstract
Figure 1: The idea of optical densitometers for smoke obscuration in fire experiments.
Figure 2: The idea of Froude number reduced-scale fire modelling, illustrating the concept of scaling down the heat release rate of the fire to model a fire that causes similar consequences in the geometrically scaled-down compartment.
Figure 3: Dividing the optical path of length l into n beams of length l/n, to measure the transmittance in the reduced-scale model as in the full scale, based on Equations (14) and (16).
Figure 4: A prototype of the optical densitometer for reduced-scale fire research before final assembly: (a) illustration of the operation of the densitometer (with a 633 nm laser); (b) pre-assembly fitting of important optical components of the device.
Figure 5: The final prototype of the optical densitometer for reduced-scale fire experiments: (a) overview of the device; (b) close-up of the laser-photodiode component; (c) close-up of the reflecting component.
Figure 6: Internal transmittance of the calibration filters, calculated with a tool provided by the manufacturer (https://www.schott.com/advanced_optics/).
Figure 7: Measured transmittance versus known transmittance value of the calibration filters. Dashed lines represent a 10% difference in measurements.
Figure 8: The relative error of transmittance measurements compared to the known transmittance value of the calibration filters.
Figure 9: (a) Full-scale and (b) reduced-scale (1:4) experiments on the free burning of n-heptane.
Figure 10: The interior of the reduced-scale (1:4) chamber with the reduced-scale optical densitometer mounted underneath the ceiling.
Figure 11: Overview of the test chambers, with the location of the measuring equipment (OD—optical densitometer, T1–T4—four thermocouples placed symmetrically in the four corners of the compartment).
Figure 12: Heat release rate (HRR) measurements in the performed experiments—moving averages (5 s averaging time) calculated from mass loss measurements. Values measured at reduced scale were scaled up based on Equations (5) and (6).
Figure 13: Mean temperature in full- and reduced-scale experiments, averaged over four measurement points underneath the ceiling.
Figure 14: Measurements of transmittance of the hot gas layer in the experiments.
13 pages, 6369 KiB  
Article
Target Doppler Rate Estimation Based on the Complex Phase of STFT in Passive Forward Scattering Radar
by Karol Abratkiewicz, Piotr Krysik, Zbigniew Gajo and Piotr Samczyński
Sensors 2019, 19(16), 3627; https://doi.org/10.3390/s19163627 - 20 Aug 2019
Cited by 11 | Viewed by 4057
Abstract
This article presents a novel approach to the estimation of motion parameters of objects in passive forward scattering radars (PFSR). In such systems, most of the frequency-modulated signals used have parameters that depend on the geometry of the radar scene and the object's motion. It is worth noting that in bistatic (or multistatic) radars with forward scattering geometry, only Doppler measurements are available, while the range measurement is ambiguous. In this article the modulation factor, also called the Doppler rate, was determined based on the chirp rate (equivalent Doppler rate) estimation concept in the time-frequency (TF) domain. This approach utilizes the idea of the complex phase of the short-time Fourier transform (STFT) and its modification known from the literature. The mathematical dependencies were implemented and verified, and the simulation results are described. The accuracy of the considered estimators was also verified against the Cramer-Rao lower bound (CRLB), to which simulated data for the considered estimators were compared. The proposed method was validated using a real-life signal collected from a radar operating in PFSR geometry: the Doppler rate induced by a car crossing the baseline between the receiver and the GSM transmitter was estimated. Finally, the concept of using CR estimation, which in the case of PFSR can be understood as Doppler rate estimation, was confirmed on the basis of both simulated and real-life data. Full article
(This article belongs to the Special Issue Recent Advancements in Radar Imaging and Sensing Technology)
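The paper's estimator works on the complex phase of the STFT; as a rough stand-in for readers, the sketch below recovers the chirp (Doppler) rate of a synthetic linear FM signal by fitting a line to the spectrogram ridge instead. All signal parameters are invented for illustration.

```python
# Not the paper's complex-phase estimator: a simpler ridge-fit stand-in that
# recovers the chirp rate of a synthetic linear FM signal from its STFT.
import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
true_rate = -40.0  # Hz/s, e.g., a target decelerating toward the baseline
x = np.exp(2j * np.pi * (200 * t + 0.5 * true_rate * t ** 2))

f, tt, Z = signal.stft(x, fs=fs, nperseg=256, return_onesided=False)
ridge = f[np.argmax(np.abs(Z), axis=0)]   # peak frequency per time slice
rate_est = np.polyfit(tt, ridge, 1)[0]    # slope of the TF ridge, in Hz/s
print(f"estimated chirp rate: {rate_est:.1f} Hz/s")  # close to -40 Hz/s
```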
Show Figures
Figure 1: Simplified passive forward scattering radar (PFSR) geometry. β—bistatic angle, Tx—non-cooperative transmitter, Rx—radar receiver, TGT—target, L—baseline, R1—distance from the transmitter to the target, R2—distance from the target to the receiver, D—distance from the receiver to the crossing point.
Figure 2: CR estimation in the TF domain—an interpretation.
Figure 3: Comparison of the accuracy of the utilized estimators.
Figure 4: Radar scene for the first simulation case.
Figure 5: Spectrogram of the first simulation case.
Figure 6: Accelerogram of the first simulation case.
Figure 7: Comparison of the true Doppler rate and estimated CR for two amplitude cases.
Figure 8: Radar scene for the second simulation case.
Figure 9: Spectrogram of the second simulation case.
Figure 10: Accelerogram of the second simulation case.
Figure 11: Measurement scene. In white—range from the GSM transmitter to the receiver, in red—range from the receiver to the intersection point, in blue—the target trajectory.
Figure 12: Measurement geometry diagram. Tx—GSM transmitter of opportunity, Rxr—the reference antenna, Rxs—the surveillance antenna.
Figure 13: Spectrograms of all considered cases.
Figure 14: Accelerograms of all considered cases.
15 pages, 5585 KiB  
Article
Implementation of Radiating Elements for Radiofrequency Front-Ends by Screen-Printing Techniques for Internet of Things Applications
by Imanol Picallo, Hicham Klaina, Peio Lopez-Iturri, Aitor Sánchez, Leire Méndez-Giménez and Francisco Falcone
Sensors 2019, 19(16), 3626; https://doi.org/10.3390/s19163626 - 20 Aug 2019
Cited by 4 | Viewed by 3638
Abstract
The advent of the Internet of Things (IoT) has led to embedding wireless transceivers into a wide range of devices in order to implement context-aware scenarios, in which a massive number of transceivers is foreseen. In this framework, cost-effective electronic and Radio Frequency (RF) front-end integration is desirable, to enable the straightforward inclusion of communication capabilities within objects and devices in general. In this work, flexible antenna prototypes based on screen-printing techniques, using conductive inks on flexible low-cost plastic substrates, are proposed. Different parameters such as substrate/ink characteristics are considered, as well as the effect of variations in the fabrication process and of substrate angular deflection on device performance. Simulation and measurement results are presented, together with system validation results in a real test environment with wireless sensor network communications. The results show the feasibility of using screen-printed antenna elements on flexible low-cost substrates, which can be embedded in a wide array of IoT scenarios. Full article
(This article belongs to the Special Issue Sensor Systems for Internet of Things)
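For context on microstrip patch sizing (not taken from the paper), the standard textbook design equations relate patch width and length to the operating frequency and substrate permittivity. In the sketch below the permittivity is a placeholder; only the 350 µm thickness echoes the prototype mentioned in the figures.

```python
# Textbook (Balanis-style) microstrip patch design equations, offered as
# background only; eps_r and f are assumed values, not the WT14 data.
import math

c = 3e8
f = 2.45e9                # a common IoT/ZigBee band (assumed)
eps_r, h = 3.0, 350e-6    # assumed permittivity; 350 um substrate thickness

W = c / (2 * f) * math.sqrt(2 / (eps_r + 1))                      # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / W) ** -0.5
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) / \
     ((eps_eff - 0.258) * (W / h + 0.8))                          # fringing
L = c / (2 * f * math.sqrt(eps_eff)) - 2 * dL                     # patch length
print(f"patch width {W*1e3:.1f} mm, length {L*1e3:.1f} mm")
```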
Show Figures
Figure 1: Schematic description of the fabrication of antenna prototypes based on the screen-printing process, divided into five steps: (1) material parameter selection, (2) printing process, (3) tunnel drying, (4) oven drying, (5) sheet prototype and subsequent processing.
Figure 2: (a) Printing stencil, (b) detail of the implemented mask, (c) screen printer, (d) tunnel, (e) box oven, (f) printed sheet (top layer) as an output of the screen-printing process. Multiple layouts can be placed within the sheet for calibration and test purposes; the tested prototypes can be seen at the top right of the image.
Figure 3: (a) Dimensions of the microstrip antenna prototype; (b) S11 parameter full-wave simulation result (WT14 350 µm substrate).
Figure 4: (a) 2D radiation diagram and (b) 3D radiation diagram for the WT14 350 µm antenna prototype.
Figure 5: (a) Fabricated prototypes for radiating elements; (b) S11 parameter measurement results for different substrates (plastic/paper).
Figure 6: (a) Fabricated prototypes for different ink types and temperature conditions; (b) S11 parameter measurement results.
Figure 7: S11 parameter measurement results (WT14 substrate prototype) as a function of temperature variation in the prototype curing process.
Figure 8: S11 parameter measurement results (WT14 substrate prototype) as a function of angular deflection of the substrate, within the −30° to +30° range, with respect to the horizontal plane containing the antenna.
Figure 9: Lab scenario employed for system-level validation: (a) real lab environment, (b) volumetric scenario implemented in the 3D ray launching simulation tool, (c) detail of the radial validation layout, and (d) detail of the measurements with the ZigBee mote network.
Figure 10: Comparison between simulation and measurement results of the received power levels for a linear transmitter-receiver radial within an indoor lab environment.
Figure 11: Estimation of the received power levels, obtained with the in-house 3D ray launching simulation. Results were obtained for the complete volume of the scenario and are shown for 2D height planes of 0.6 m, 1.2 m, 1.8 m, and 2.4 m.
21 pages, 3974 KiB  
Article
Analyzing Spinal Shape Changes During Posture Training Using a Wearable Device
by Katharina Stollenwerk, Jonas Müller, André Hinkenjann and Björn Krüger
Sensors 2019, 19(16), 3625; https://doi.org/10.3390/s19163625 - 20 Aug 2019
Cited by 8 | Viewed by 5478
Abstract
Lower back pain is one of the most prevalent diseases in Western societies. A large percentage of European and American populations suffer from back pain at some point in their lives. One successful approach to address lower back pain is postural training, which can be supported by wearable devices that provide real-time feedback about the user's posture. In this work, we analyze the changes in posture induced by postural training. To this end, we compare snapshots before and after training, as measured by the Gokhale SpineTracker™. Considering pairs of before and after snapshots in different positions (standing, sitting, and bending), we introduce a feature space that allows for unsupervised clustering. We show that the resulting clusters represent certain groups of postural changes, which are meaningful to professional posture trainers. Full article
(This article belongs to the Special Issue Data Analytics and Applications of the Wearable Sensors in Healthcare)
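A minimal sketch of such a pipeline (t-SNE embedding followed by unsupervised clustering) is shown below with synthetic features. The perplexity of 40 mirrors a value examined in the paper's figures, while DBSCAN is a stand-in choice for the clustering step, not necessarily the authors' method.

```python
# Illustrative t-SNE + clustering pipeline on synthetic stand-ins for the
# offset-spine-shape feature vectors (one row per before/after posture pair).
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
features = rng.normal(size=(300, 10))     # placeholder feature vectors

embedding = TSNE(n_components=2, perplexity=40,
                 random_state=1).fit_transform(features)
labels = DBSCAN(eps=3.0, min_samples=5).fit_predict(embedding)
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
```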
Show Figures
Figure 1: Photos of the SpineTracker sensor system. (a) Four of the five sensors sitting in the charger; the sensor outside the charger is shown with its local coordinate system. A single sensor measures 33 mm × 16 mm × 10 mm and is attached to a person's lumbar spine with double-sided tape. (b,c) Back and side view of sensor positioning on the lumbar spine, including directions of the sensor coordinate system; sensors are overlaid with a reconstructed spine curve (green dots and line).
Figure 2: Overview of the processing pipeline. Individual steps are marked by boxes; arrows indicate the direction of the pipeline and are annotated with the dimensionality of the data output by the preceding step. Two dimension labels indicate that the data from the two postures in each posture pair is not yet combined.
Figure 3: (a) Two spine shapes reconstructed from the t0 and t1 input angles and (b) the resulting offset spine shape. (c) Offset spine shape bundle representing a data cluster in the feature set's t-stochastic neighbor embedding (t-SNE) map (d), colored by cluster ID.
Figure 4: Boxplots of the angle distribution of the t0 and t1 snapshots, grouped by sensor. The box frames the lower (Q1) to upper (Q3) quartile values of the data; the horizontal line inside each box marks the median. Whiskers include data between Q1 − 1.5 IQR and Q3 + 1.5 IQR, where IQR = Q3 − Q1 is the interquartile range. Outliers outside the whisker range are marked with dots.
Figure 5: The number of elements per cluster under variation of the perplexity value, plotted by cluster ID.
Figure 6: Visual comparison of changes in clustering when varying perplexity. (a) A 2D t-SNE map and (b) offset spine shape bundles per cluster for a perplexity value of 40. (c) Offset spine shape bundles per cluster for a perplexity value of 25; the bundles displayed represent the only two clusters that changed relative to a perplexity value of 40. Axis titles of the bundles give the cluster ID and the number of elements in that cluster.
Figure 7: Results for clustering posture pairs of standing. (left) The t-SNE map labelled with and colored by cluster ID. (right) Corresponding cluster bundles of offset spine shapes, with the cluster ID and the number of elements in each axis title.
Figure 8: Results for clustering posture pairs of sitting. (left) The t-SNE map labelled with and colored by cluster ID. (right) Corresponding cluster bundles of offset spine shapes.
Figure 9: Results for clustering posture pairs of hip hinging. (left) The t-SNE map labelled with and colored by cluster ID. (right) Corresponding cluster bundles of offset spine shapes.
Figure A1: Results for clustering posture pairs of standing. (left) The t-SNE map labelled with and colored by cluster ID; an 'x' marks the position of a cluster representative. (right) Corresponding cluster bundles of offset spine shapes.
Figure A2: Results for clustering posture pairs of standing: cluster representative sample posture pairs and offset spine shapes for each cluster. Each sample's position within its cluster is marked by an 'x' in the colored t-SNE map.
Figure A3: Results for clustering posture pairs of sitting. (left) The t-SNE map colored by cluster ID; an 'x' marks the position of a cluster representative. (right) Corresponding cluster bundles of offset spine shapes.
Figure A4: Results for clustering posture pairs of sitting: cluster representative sample posture pairs and offset spine shapes for each cluster. Each sample's position within its cluster is marked by an 'x' in the colored t-SNE map.
Figure A5: Results for clustering posture pairs of hip hinging. (left) The t-SNE map colored by cluster ID; an 'x' marks the position of a cluster representative. (right) Corresponding cluster bundles of offset spine shapes.
Figure A6: Results for clustering posture pairs of hip hinging: cluster representative sample posture pairs and offset spine shapes for each cluster. Each sample's position within its cluster is marked by an 'x' in the colored t-SNE map. The two posture snapshots are additionally rotated about the origin such that the topmost sensor of the unguided t0 snapshot lies on the y-axis.
16 pages, 8486 KiB  
Article
Application and Optimization of Wavelet Transform Filter for North-Seeking Gyroscope Sensor Exposed to Vibration
by Ji Ma, Zhiqiang Yang, Zhen Shi, Xuewei Zhang and Chenchen Liu
Sensors 2019, 19(16), 3624; https://doi.org/10.3390/s19163624 - 20 Aug 2019
Cited by 10 | Viewed by 3735
Abstract
Conventional wavelet transform (WT) filters are of limited effectiveness for de-noising and correcting a north-seeking gyroscope sensor exposed to vibration, since the optimal wavelet decomposition level for de-noising is difficult to determine. To solve this problem, this paper proposes an optimized WT filter suited to the magnetic levitation gyroscope (GAT). The proposed method was tested on an equivalent mock-up network of the tunnels associated with the Hong Kong‒Zhuhai‒Macau Bridge. Gyro-observed signals exposed to vibration were collected in our experiment, and the empirical values of the optimal wavelet decomposition levels (from 6 to 10) for the observed signals were constrained and validated by a high-precision Global Navigation Satellite System (GNSS) network. The results show that the lateral breakthrough error of the tunnel was reduced from 12.1 to 3.8 mm, a reduction of 68.7%, which suggests that the method is able to correct the abnormal signal of a north-seeking gyroscope sensor exposed to vibration. Full article
(This article belongs to the Special Issue Gyroscopes and Accelerometers)
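A minimal PyWavelets sketch of level-constrained Sym10 de-noising is given below. The soft universal threshold is a common default, not necessarily the paper's rule, and the optimal level (6 to 10 in the paper) depends on the signal type.

```python
# Hedged sketch of wavelet de-noising with a fixed decomposition level;
# test signal and noise amplitude are invented for illustration.
import numpy as np
import pywt

def wt_denoise(x, wavelet="sym10", level=6):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Universal threshold estimated from the finest detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

t = np.linspace(0, 1, 4096)
noisy = np.sin(2 * np.pi * 5 * t) + \
        0.3 * np.random.default_rng(0).normal(size=t.size)
clean = wt_denoise(noisy, level=6)   # sweep level 6..10 to mimic the search
```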
Show Figures
Graphical abstract
Figure 1: Scale function and wavelet function of Sym10.
Figure 2: Flow chart of the experiment to determine the optimal wavelet decomposition level.
Figure 3: Design of the experimental network.
Figure 4: Time series of observed signals: (a) steady; (b) periodic; (c) jitter; (d) jumping.
Figure 5: Comparison of H for different types of reconstructed signals under different decomposition levels.
Figure 6: Comparison of DF for different types of reconstructed signals under different decomposition levels.
Figure 7: Time series of reconstructed signals: (a) steady; (b) periodic; (c) jitter; (d) jumping.
Figure 8: 3D time-frequency spectra of four types of signals under different decomposition levels: (a) steady (level = 0); (b) steady (level = 3); (c) steady (level = 6); (d) periodic (level = 0); (e) periodic (level = 8); (f) periodic (level = 12); (g) jitter (level = 0); (h) jitter (level = 6); (i) jitter (level = 9); (j) jumping (level = 0); (k) jumping (level = 7); (l) jumping (level = 10).
Figure 9: Boxplots of DT: (a) DT for different filtering schemes; (b) DT for each type of observed signal.
Figure 10: Comparison of lateral breakthrough error (LBE) for different filtering schemes.
13 pages, 1174 KiB  
Article
Simultaneous Calibration of Odometry and Head-Eye Parameters for Mobile Robots with a Pan-Tilt Camera
by Nachaya Chindakham, Young-Yong Kim, Alongkorn Pirayawaraporn and Mun-Ho Jeong
Sensors 2019, 19(16), 3623; https://doi.org/10.3390/s19163623 - 20 Aug 2019
Cited by 1 | Viewed by 3952
Abstract
In the field of robot navigation, the odometric parameters, such as wheel radii and wheelbase length, and the relative pose of the camera with respect to the robot are very important for accurate operation; hence, these parameters need to be estimated precisely. However, the odometric and head-eye parameters are typically estimated separately, which is inconvenient and requires a longer calibration time. Although several researchers have proposed simultaneous calibration methods that obtain both odometric and head-eye parameters at once to reduce the calibration time, these are only applicable to mobile robots with a fixed mounted camera, not to mobile robots equipped with a pan-tilt motorized camera system, which is a very common configuration widely used for a wide field of view. Previous approaches could not provide the z-axis translation parameter between head-eye coordinate systems on mobile robots equipped with a pan-tilt camera. In this paper, we present a fully simultaneous mobile robot calibration of head-eye and odometric parameters, appropriate for a mobile robot with a camera mounted on a pan-tilt motorized device. After a set of visual features obtained from a chessboard or natural scene and the odometry measurements are synchronized and received, both odometric and head-eye parameters are iteratively adjusted until convergence, before a nonlinear optimization method is applied for higher accuracy. Full article
(This article belongs to the Section Physical Sensors)
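For orientation, the classic fixed-camera head-eye (hand-eye) problem AX = XB can be solved with OpenCV as sketched below. This is background only: the paper's contribution goes beyond it by jointly estimating the odometric parameters and the pan-tilt head-eye transform, including the z-axis translation that the fixed-camera formulation cannot recover. Variable names and data sources here are assumptions.

```python
# Background sketch: OpenCV's hand-eye solver for AX = XB, not the paper's
# simultaneous odometry + head-eye method.
import cv2

# Rotations/translations of the robot head in the odometry frame (from wheel
# odometry and pan-tilt encoders) and of a chessboard in the camera frame
# (e.g., from cv2.solvePnP), one entry per robot pose; assumed to be filled in.
R_head, t_head, R_board, t_board = [], [], [], []

if len(R_head) >= 3:   # the solver needs several distinct motions
    R_cam, t_cam = cv2.calibrateHandEye(
        R_head, t_head, R_board, t_board,
        method=cv2.CALIB_HAND_EYE_TSAI)
    print("camera-to-head rotation:\n", R_cam)
    print("camera-to-head translation:", t_cam.ravel())
```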
Show Figures
Figure 1: Mobile robot configuration. (a) Robot with a pan-tilt neck equipped with a camera (front view); (b) coordinate system of the mobile robot configuration (side view).
Figure 2: Mobile robot odometry and its relevant variables.
Figure 3: Closed-loop transformation between any frames i and j.
Figure 4: Natural feature matching.
Figure 5: Reprojection results: (a) reprojection of image i; (b) transformed reprojection of image j.
Figure 6: Rate of change versus the number of estimation iterations.
Figure 7: 3D back-projection error after optimization versus the number of iterations.
Figure 8: 3D back-projection error after optimization versus the number of poses.
18 pages, 1896 KiB  
Article
Pressure-Pair-Based Floor Localization System Using Barometric Sensors on Smartphones
by Chungheon Yi, Wonik Choi, Youngjun Jeon and Ling Liu
Sensors 2019, 19(16), 3622; https://doi.org/10.3390/s19163622 - 20 Aug 2019
Cited by 4 | Viewed by 3440
Abstract
As smartphone technology advances and its market penetration increases, indoor positioning for smartphone users is becoming an increasingly important issue. Floor localization is especially critical to indoor positioning techniques. Numerous research efforts have aimed at improving floor localization accuracy using information from barometers, accelerometers, Bluetooth Low Energy (BLE), and Wi-Fi signals. Despite these efforts, no approach has been able to determine which floor smartphone users are on with near-100% accuracy. To address this problem, we present a novel pressure-pair-based method called FloorPair, which offers near-100% accurate floor localization. The rationale of FloorPair is to construct a relative pressure map using highly accurate relative pressure values from smartphones, with two novel features: first, we marginalized the uncertainty from sensor drifts and unreliable absolute pressure values of barometers by pairing the pressure values of two floors; and second, we maintained high accuracy over time by applying an iterative optimization method, making the approach sustainable. We evaluated the validity of FloorPair by conducting extensive field experiments in various types of buildings, showing that it is an accurate and sustainable floor localization method. Full article
(This article belongs to the Section State-of-the-Art Sensors Technologies)
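As a back-of-the-envelope illustration (not the FloorPair algorithm itself), a relative pressure difference between two devices maps to a height gap through the hydrostatic relation Δh = ΔP/(ρg). The floor height and readings below are assumed values.

```python
# Hypothetical floor-offset estimate from a relative pressure difference;
# constants and readings are assumptions for illustration only.
RHO, G, FLOOR_HEIGHT = 1.225, 9.81, 3.0   # kg/m^3, m/s^2, assumed metres/floor

def floor_offset(p_ref_hpa, p_user_hpa):
    """Floors between a reference device and the user (positive = above)."""
    dh = (p_ref_hpa - p_user_hpa) * 100 / (RHO * G)  # hPa -> Pa -> metres
    return round(dh / FLOOR_HEIGHT)

# Two devices differing by ~0.36 hPa read as one floor apart:
print(floor_offset(1013.25, 1012.89))  # -> 1
```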
Show Figures
Figure 1: Noise differences between 2014 and 2018 smartphones. (a) Samsung Galaxy Note 4; (b) LG V40 ThinQ.
Figure 2: Steady and constant pressure differences across various devices and weather conditions.
Figure 3: Pressure differences at the same place: (a) LG V40; (b) LG V10; (c) Samsung Note 5; (d) Samsung Note 5.
Figure 4: Pair relationship tree after forward and backward merging: (a) tree after forward merging; (b) tree after backward merging.
Figure 5: Illustration of Algorithm 4.
Figure 6: Graphic representation of the iterative optimization method: (a) before applying the optimization method; (b) after applying the optimization method.
25 pages, 7763 KiB  
Article
Nonintrusive Appliance Load Monitoring: An Overview, Laboratory Test Results and Research Directions
by Augustyn Wójcik, Robert Łukaszewski, Ryszard Kowalik and Wiesław Winiecki
Sensors 2019, 19(16), 3621; https://doi.org/10.3390/s19163621 - 20 Aug 2019
Cited by 19 | Viewed by 5514
Abstract
Nonintrusive appliance load monitoring (NIALM) allows disaggregation of total electricity consumption into particular appliances in domestic or industrial environments. The operation of NIALM systems is based on processing electrical signals acquired at one point of a monitored area. The main objective of this paper is to present the state of the art in NIALM technologies for the smart home, with a focus on sensors and measurement methods. Different intelligent algorithms for processing the signals are presented, and identification accuracy for an actual set of appliances is compared. The article describes the architecture of a unique NIALM laboratory in detail. Results of the developed NIALM methods exploiting different measurement data are discussed and compared to known methods. New directions for NIALM research are proposed. Full article
(This article belongs to the Special Issue Sensor Technology for Smart Homes)
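As one concrete example of the feature families discussed (compare the current-harmonic figures below), the sketch extracts current harmonics 1 to 16 with an FFT. The synthetic appliance current and the 50 Hz mains are assumptions for illustration.

```python
# Illustrative harmonic feature extraction for appliance identification;
# the current waveform below is synthetic, not measured data.
import numpy as np

fs, mains = 10_000, 50
t = np.arange(0, 0.2, 1 / fs)            # ten mains cycles
# Stand-in appliance current: fundamental plus 3rd and 5th harmonics.
i = (2.0 * np.sin(2 * np.pi * mains * t)
     + 0.5 * np.sin(2 * np.pi * 3 * mains * t)
     + 0.2 * np.sin(2 * np.pi * 5 * mains * t))

spectrum = np.abs(np.fft.rfft(i)) * 2 / len(i)     # amplitude spectrum
bins = np.arange(1, 17) * mains * len(i) // fs     # bins of harmonics 1..16
harmonics = spectrum[bins]                         # feature vector
print(np.round(harmonics, 2))
```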
Show Figures
Figure 1: Appliance load monitoring systems: (a) intrusive appliance load monitoring (IALM); (b) nonintrusive appliance load monitoring (NIALM).
Figure 2: General architecture of a nonintrusive appliance load monitoring system.
Figure 3: Block diagram of the measurement setup.
Figure 4: The sequence of switching the tested appliances on and off.
Figure 5: Moment 3, the vacuum cleaner switching on, on the millisecond scale.
Figure 6: Moment 1, switching on LED bulb (A).
Figure 7: Moment 3, the vacuum cleaner switching on, on the microsecond scale, transient state (TS).
Figure 8: Block diagram of the electromagnetic interference (EMI) measurement setup.
Figure 9: Current and high-pass (HP) filtered voltage measured during LED bulb operation.
Figure 10: Extra-high-frequency (EHF) voltage and current recorded at a sampling frequency of 2 GS/s during LED bulb (A) operation in steady state (SS).
Figure 11: EHF voltage and current recorded at a sampling frequency of 2 GS/s during LED bulb (A) switching on (TS).
Figure 12: Low-frequency NIALM architecture [52].
Figure 13: Architecture of the NIALM laboratory.
Figure 14: (a) Schematic diagram of the high-frequency voltage and current measurement setup; (b) schematic diagram of the NIALM high-frequency voltage (HFV) sensor.
Figure 15: Main window of the data acquisition software.
Figure 16: The proposed NIALM architecture.
Figure 17: Current harmonic components (1–16) of (a) an electric kettle; (b) an iron; (c) a hairdryer; (d) a vacuum cleaner.
Figure 18: Conductance harmonics (102–117) of (a) an LED bulb; (b) an incandescent bulb.
Figure 19: Testing sequences of appliance operation; the different colors at the bottom refer to different appliances.
Figure 20: Scalogram of the voltage signal during compact fluorescent lamp (CFL) and pedestal fan operation.
13 pages, 9099 KiB  
Article
Improving the Performance of Pseudo-Random Single-Photon Counting Ranging Lidar
by Yang Yu, Bo Liu and Zhen Chen
Sensors 2019, 19(16), 3620; https://doi.org/10.3390/s19163620 - 20 Aug 2019
Cited by 17 | Viewed by 3796
Abstract
A new encoding method is proposed to improve the performance of pseudo-random single-photon counting ranging (PSPCR) Lidar. The encoding principle and methodology are presented. In addition, the influence of the detector's dead time on the detection probability is analyzed with theoretical derivation and Monte Carlo simulation. We also propose using the macro code as the analysis unit to quantitatively analyze the detection probability and single-photon detection efficiency of the traditional PSPCR Lidar and the modulated PSPCR Lidar. Monte Carlo simulation and experiment show that the proposed method exhibits better ranging performance than the traditional PSPCR Lidar system. Full article
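The ranging principle itself is compact enough to sketch: correlate the pseudo-random reference code with the sparse photon-count record and read the round-trip delay off the correlation peak. All numbers below are invented for illustration, and the random 0/1 code is a stand-in for a proper m-sequence.

```python
# Conceptual sketch of pseudo-random single-photon ranging, not the paper's
# modulated encoding scheme; delay and probabilities are assumed values.
import numpy as np

rng = np.random.default_rng(0)
prbs = rng.integers(0, 2, 1023)        # stand-in pseudo-random code
delay = 217                            # true target delay, in code bins
p_det, p_noise = 0.3, 0.01             # per-bit detection probabilities

echo = np.roll(prbs, delay)
counts = (rng.random(prbs.size) < echo * p_det) | \
         (rng.random(prbs.size) < p_noise)        # sparse photon record

# Circular cross-correlation via FFT; the peak marks the delay bin.
corr = np.fft.ifft(np.fft.fft(counts.astype(float))
                   * np.conj(np.fft.fft(prbs - prbs.mean()))).real
print("estimated delay:", int(np.argmax(corr)))   # -> 217 (with high probability)
```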
Show Figures
Figure 1: Schematic of the pseudo-random ranging system.
Figure 2: Schematic diagram of the pseudo-random ranging principle. (a) The transmitted pseudo-random laser pulse sequence (reference signal); (b) the detected pulse sequence of the target response; (c) the cross-correlation function of the reference and the target response.
Figure 3: The detection probability of each code in the pseudo-random single-photon counting ranging (PSPCR) Lidar system and the cross-correlation function. (a,d) Theoretically derived detection probabilities of the PSPCR method for dead times of 0 and 45 ns, respectively; (b,e) Monte Carlo simulated detection probabilities for dead times of 0 and 45 ns, respectively; (c,f) normalized cross-correlations of the PSPCR method.
Figure 4: Schematic diagram of the modulated pseudo-random sequence.
Figure 5: The detection probability of the traditional PSPCR Lidar and the modulation-encoded PSPCR Lidar at different primary photoelectron numbers.
Figure 6: The signal photon detection efficiency of the traditional PSPCR Lidar and the modulation-encoded PSPCR Lidar at different primary photoelectron numbers.
Figure 7: Cross-correlation range images with three different levels of photoelectron noise. The first column is the Monte Carlo simulation of the traditional pseudo-random sequence; the second is the modulated pseudo-random sequence. The noise levels of (a–c), given as the mean number of noise photoelectrons per bit, are 1 × 10⁻⁴, 5 × 10⁻⁴, and 10 × 10⁻⁴, respectively.
Figure 8: Experiment platform for the modulation-encoded PSPCR Lidar and the traditional PSPCR Lidar.
Figure 9: Cross-correlation range images with three different echo photon numbers for the traditional PSPCR Lidar (first column) and the modulation-encoded PSPCR Lidar (second column). The mean echo photon number per '1' bit in (a), (b), and (c) is 1, 3, and 5, respectively.
Figure 10: The detection probability statistics of the modulation-encoded PSPCR Lidar and the traditional PSPCR Lidar at different mean echo photon numbers.
18 pages, 9088 KiB  
Article
Prototyping a System for Truck Differential Lock Control
by Pavel Kučera and Václav Píštěk
Sensors 2019, 19(16), 3619; https://doi.org/10.3390/s19163619 - 20 Aug 2019
Cited by 14 | Viewed by 4369
Abstract
The article deals with the development of a mechatronic system for locking vehicle differentials. An important benefit of this system is that it prevents the vehicle from becoming stuck in difficult adhesion conditions. The system recognizes such a situation much sooner than the driver and is able to respond immediately, ensuring smooth driving in off-road or snowy conditions. This article describes the control algorithm of this mechatronic system, which is designed for firefighting, military, or civilian vehicles with drivetrain configurations of up to 10 × 10, and also explains the input signal processing and the control of actuators. The main part of this article concerns prototype testing on a vehicle. The results present an evaluation of one of the many experiments and confirm the proper function of the developed mechatronic system. Full article
(This article belongs to the Special Issue Advance in Sensors and Sensing Systems for Driving and Transportation)
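As a loose illustration of the kind of decision logic such a system needs, the sketch below requests an axle lock when wheel-speed slip exceeds a threshold under safe conditions. The thresholds, signal names, and conditions are invented, not the authors' algorithm.

```python
# Hypothetical differential-lock decision logic; every threshold here is an
# assumed placeholder, not a value from the paper.
def should_lock(wheel_speeds_rpm, vehicle_speed_kmh, steering_deg,
                slip_threshold=0.15, max_speed_kmh=15, max_steer_deg=10):
    """Request a lock when one wheel spins markedly faster than the slowest
    wheel at low speed with the steering near straight."""
    if vehicle_speed_kmh > max_speed_kmh or abs(steering_deg) > max_steer_deg:
        return False                       # locking would be unsafe here
    slow, fast = min(wheel_speeds_rpm), max(wheel_speeds_rpm)
    slip = (fast - slow) / fast if fast > 0 else 0.0
    return slip > slip_threshold

print(should_lock([120, 80, 82, 81], vehicle_speed_kmh=6, steering_deg=2))  # True
```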
Show Figures
Figure 1: Prototype of a civilian vehicle.
Figure 2: Sensors and actuators (A–H).
Figure 3: Diagram of the control algorithm (DI—digital inputs, AI—analog inputs, DO—digital outputs).
Figure 4: The hardware used for testing in the prototype vehicle.
Figure 5: NI VeriStand Workspace window with customized graphical elements for visualization and control of the algorithm.
Figure 6: Pedal values.
Figure 7: Wheel speed values.
Figure 8: Slip values.
Figure 9: Values of shaft speed differences.
Figure 10: Vehicle speed value.
Figure 11: Values of the steering angle.
Figure 12: Engine torque value.
Figure 13: Air pressure value in the pneumatic circuit.
Figure 14: Status of locking/unlocking and feedback.
Figure 15: Status of lock/unlock (a value of 1 is represented by the indication of the particular actuator on the vertical axis).
20 pages, 5289 KiB  
Article
Analysis of Sensitivity, Linearity, Hysteresis, Responsiveness, and Fatigue of Textile Knit Stretch Sensors
by An Liang, Rebecca Stewart and Nick Bryan-Kinns
Sensors 2019, 19(16), 3618; https://doi.org/10.3390/s19163618 - 20 Aug 2019
Cited by 41 | Viewed by 6798
Abstract
Wearable technology is widely used for collecting information about the human body and its movement by placing sensors on the body. This paper presents research into electronic textile strain sensors designed specifically for wearable applications, which need to be lightweight, robust, and comfortable. Sixteen stretch sensors made from different conductive stretch fabrics are evaluated: EeonTex (Eeonyx Corporation), knitted silver-plated yarn, and knitted spun stainless steel yarn. The sensors' performance is tested using a tensile tester while monitoring their resistance with a microcontroller. Each sensor was analyzed for its sensitivity, linearity, hysteresis, responsiveness, and fatigue through a series of dynamic and static tests. The findings show that for wearable applications a subset of the silver-plated yarn sensors ranked better in terms of sensitivity, linearity, and steady state. EeonTex was found to be the most responsive, and the stainless steel yarn performed the worst, which may be due to the characteristics of the knit samples under test. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: Photographs of the sixteen sensors along with their sensor numbers.
Figure 2: Sensor made with commercially produced conductive fabric.
Figure 3: An example of linearity fitting of the dynamic test for Sample 04. The linear regions of stretch and relaxation are identified and then fit with a line.
Figure 4: An example of the maximum hysteresis in the aggregated data from the dynamic test (Sample 04).
Figure 5: An example of linearity fitting of a static test with Sample 09. The linear region is fit with a line while the sample is held in a stretched and a relaxed position.
Figure 6: Sample 15, a silver-plated sensor (Technik-tex P130B): response and the fitted line from 32% strain to 60% strain while stretching.
Figure 7: Sample 01 (EeonTex): sensor response and the fitted line while stretching between 20% strain and 70% strain.
Figure 8: Sample 08, a lab-knitted sensor: response and fitted line over one hundred cycles of stretching between 10% strain and 70% strain.
Figure 9: The error between each cycle of stretch and the fitted line of Sample 15. This shows the best repeatability result of all the samples.
Figure 10: The error between each cycle of relaxation and the fitted line of Sample 01 (EeonTex). This plot shows a typical repeatability result; most of the samples showed fatigue after 50 cycles of stretching.
Figure 11: The error between each cycle of stretching and the fitted line of Sample 07. This plot shows one of the poorer repeatability results among the samples.
Figure 12: Comparison of each sensor's response time in static (top) and dynamic (bottom) tests.
Figure 13: The fitting of Sample 01 while the sensor is at 70% strain. The EeonTex result is representative of the average across all samples.
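The sensitivity, linearity, and hysteresis metrics analyzed in the paper can be reproduced on one's own sensor data along the following lines. This is a minimal sketch: the synthetic stretch-relax cycle and the hysteresis definition used here (maximum branch gap as a fraction of full-scale output) are assumptions, not necessarily the paper's exact fitting procedure.

```python
import numpy as np

# One synthetic stretch-relax cycle: strain (%) and sensor resistance (ohms).
strain_up = np.linspace(10, 70, 100)
strain_down = strain_up[::-1]
r_up = 500 - 3.0 * strain_up + np.random.normal(0, 2, 100)      # stretching
r_down = 495 - 3.0 * strain_down + np.random.normal(0, 2, 100)  # relaxing

# Linearity: fit a line over the (assumed) linear region and report R^2.
coeffs = np.polyfit(strain_up, r_up, 1)
fit = np.polyval(coeffs, strain_up)
ss_res = np.sum((r_up - fit) ** 2)
ss_tot = np.sum((r_up - r_up.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

# Sensitivity: slope of the fit (ohms per % strain).
sensitivity = coeffs[0]

# Hysteresis: maximum resistance gap between the stretch and relax branches
# at the same strain, as a fraction of the full-scale output span.
gap = np.abs(r_up - r_down[::-1])          # align the branches by strain
full_scale = r_up.max() - r_up.min()
hysteresis = gap.max() / full_scale

print(f"sensitivity = {sensitivity:.2f} ohm/%  R^2 = {r_squared:.3f}  "
      f"hysteresis = {100 * hysteresis:.1f}% FS")
```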
24 pages, 2562 KiB  
Review
Photoacoustic Imaging with Capacitive Micromachined Ultrasound Transducers: Principles and Developments
by Jasmine Chan, Zhou Zheng, Kevan Bell, Martin Le, Parsin Haji Reza and John T.W. Yeow
Sensors 2019, 19(16), 3617; https://doi.org/10.3390/s19163617 - 20 Aug 2019
Cited by 31 | Viewed by 8585
Abstract
Photoacoustic imaging (PAI) is an emerging imaging technique that bridges the gap between pure optical and acoustic techniques to provide images with optical contrast at the acoustic penetration depth. The two key components that have allowed PAI to attain high-resolution images at deeper [...] Read more.
Photoacoustic imaging (PAI) is an emerging imaging technique that bridges the gap between pure optical and acoustic techniques to provide images with optical contrast at acoustic penetration depths. The two key components that have allowed PAI to attain high-resolution images at deeper penetration depths are the photoacoustic signal generator, typically implemented as a pulsed laser, and the detector that receives the generated acoustic signals. Many types of acoustic sensors have been explored as detectors for PAI, including Fabry–Perot interferometers (FPIs), micro ring resonators (MRRs), piezoelectric transducers, and capacitive micromachined ultrasound transducers (CMUTs). The fabrication technique of CMUTs has given them an edge over the other detectors. First, CMUTs can be easily fabricated in given shapes and sizes to fit design specifications. Moreover, they can be made into arrays to increase the imaging speed and reduce motion artifacts. With a fabrication technique similar to that of complementary metal-oxide-semiconductor (CMOS) processes, CMUTs can be integrated with electronics to reduce parasitic capacitance and improve the signal-to-noise ratio. The numerous benefits of CMUTs have enticed researchers to develop them for various PAI purposes, such as photoacoustic computed tomography (PACT) and photoacoustic endoscopy applications. For PACT applications, the main areas of research are the design of two-dimensional-array, transparent, and multi-frequency CMUTs. Moving from the tabletop approach to endoscopes, configurations under investigation include phased and ring arrays. In this paper, an overview of the development of CMUTs for PAI is presented. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: (a) Diagram of the working principle of a Fabry–Perot interferometer (FPI); an incoming ultrasound wave causes a variation in thickness, which in turn results in a phase modulation (reproduced from [88] with the permission of AIP Publishing); (b) schematic diagram of a micro ring resonator (MRR); (c) schematic diagram of the forward-viewing photoacoustic probe for endoscopy imaging used in [69]; (d) photoacoustic endoscopy with an MRR detector (adapted with permission from [82], The Optical Society).
Figure 2: (a) Capacitive micromachined ultrasound transducer (CMUT) transmission mode; (b) CMUT receiving mode.
Figure 3: (a) Model of the chicken breast phantom, (b) ultrasonic imaging, (c) PAI, and (d) a combination of photoacoustic and ultrasonic imaging (© 2009 IEEE, reprinted with permission from [117]); (e) working principle of top-orthogonal-to-bottom-electrode (TOBE) arrays (© 2014 IEEE, reprinted with permission from [124]).
Figure 4: (a) Optical absorption of silicon at different wavelengths (© 2010 IEEE, reprinted with permission from [118]); (b) structure of an optically transparent CMUT (© 2018 IEEE, reprinted with permission from [126]); (c) imaging of a mouse brain using the different frequencies of the CMUT (© 2018 IEEE, reprinted with permission from [130]); (d) interlaced CMUT (© 2017 IEEE, reprinted with permission from [128]); (e) multi-band CMUT (adapted with permission from [129], The Optical Society); (f) monolithic multiband CMUT with five frequencies (© 2018 IEEE, reprinted with permission from [130]).
Figure 5: Timeline of CMUT designs for PAI endoscopes [118,119,120,121]: (a) inward-looking cylindrical transducer (© 2006 IEEE, reprinted with permission from [120]); (b) 9F MicroLinear CMUT ICE catheter (© 2012 IEEE, reprinted with permission from [119]); (c) miniature needle-shaped CMUT (© 2010 IEEE, reprinted with permission from [118]); (d) integrated ring CMUT array (© 2013 IEEE, reprinted with permission from [121]).
22 pages, 3469 KiB  
Article
Micrometer Backstepping Control System for Linear Motion Single Axis Robot Machine Drive
by Chih-Hong Lin and Kuo-Tsai Chang
Sensors 2019, 19(16), 3616; https://doi.org/10.3390/s19163616 - 20 Aug 2019
Viewed by 3073
Abstract
In order to reduce the influence of uncertainty disturbances on a linear motion single axis robot machine, such as the external load force, the cogging force, the Coulomb friction force, the Stribeck force, and parameter variations, the micrometer backstepping control system, [...] Read more.
In order to reduce the influence of uncertainty disturbances on a linear motion single axis robot machine, such as the external load force, the cogging force, the Coulomb friction force, the Stribeck force, and parameter variations, a micrometer backstepping control system, using an amended recurrent Gottlieb polynomials neural network and altered ant colony optimization (AACO) with a compensated controller, is put forward for a linear motion single axis robot machine drive system equipped with a linear optical ruler with 1 µm resolution. To achieve high-precision control performance, an adaptive law of the amended recurrent Gottlieb polynomials neural network based on the Lyapunov function is proposed to estimate the lumped uncertainty. Besides this, a novel error-estimation law of the compensated controller is also proposed to compensate for the estimation error between the lumped uncertainty and the output of the amended recurrent Gottlieb polynomials neural network with the adaptive law. Meanwhile, the AACO is used to regulate the two variable learning rates in the weights of the amended recurrent Gottlieb polynomials neural network to speed up convergence. The main contributions of this paper are: (1) the digital signal processor (DSP)-based current-regulation pulse width modulation (PWM) control scheme successfully applied to control the linear motion single axis robot machine drive system; (2) the micrometer backstepping control system using an amended recurrent Gottlieb polynomials neural network with the compensated controller successfully derived according to the Lyapunov function to diminish the effect of the lumped uncertainty; (3) the adaptive law of the amended recurrent Gottlieb polynomials neural network based on the Lyapunov function successfully applied to estimate the lumped uncertainty for high-precision control performance; (4) the novel error-estimation law of the compensated controller successfully used to compensate for the estimation error; and (5) the AACO successfully used to regulate the two variable learning rates in the weights of the amended recurrent Gottlieb polynomials neural network to speed up convergence. Finally, the effectiveness of the proposed control scheme is verified by the experimental results. Full article
(This article belongs to the Special Issue Sensors and Robot Control)
Show Figures

Figure 1: Makeup of the linear motion single axis robot machine and drive system.
Figure 2: Simplified block diagram of the linear motion single axis robot machine drive system.
Figure 3: Micrometer backstepping control system using an amended recurrent Gottlieb polynomials neural network and altered ant colony optimization with the compensated controller.
Figure 4: Makeup of the three-layer amended recurrent Gottlieb polynomials neural network.
Figure 5: A picture of the experimental set-up of the linear motion single axis robot machine drive system.
Figure 6: Micrometer backstepping control system using a switching function with upper bound.
Figure 7: Experimental results of the micrometer backstepping control system using the switching function with an upper bound for the periodic step command: (a) mover position in the rated case; (b) control intensity in the rated case; (c) mover position in the parametric variation case; (d) control intensity in the parametric variation case.
Figure 8: Experimental results of the micrometer backstepping control system using the switching function with an upper bound for the periodic sinusoid command: (a) mover position in the rated case; (b) control intensity in the rated case; (c) mover position in the parametric variation case; (d) control intensity in the parametric variation case.
Figure 9: Experimental results of the micrometer backstepping control system using an amended recurrent Gottlieb polynomials neural network and altered ant colony optimization (AACO) with the compensated controller for the periodic step command: (a) mover position in the rated case; (b) control intensity in the rated case; (c) mover position in the parametric variation case; (d) control intensity in the parametric variation case.
Figure 10: Experimental results of the micrometer backstepping control system using an amended recurrent Gottlieb polynomials neural network and AACO with the compensated controller for the periodic sinusoid command: (a) mover position in the rated case; (b) control intensity in the rated case; (c) mover position in the parametric variation case; (d) control intensity in the parametric variation case.
Figure 11: Experimental results of the measured mover position response under a step force disturbance with an added load f_L = 2 N at the 200 µm position: (a) for the micrometer backstepping control system using the switching function with upper bound; (b) for the micrometer backstepping control system using an amended recurrent Gottlieb polynomials neural network and AACO with the compensated controller.
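The full controller combines backstepping with a recurrent Gottlieb-polynomial neural network and AACO-tuned learning rates, which is beyond a short sketch. The fragment below shows only the core backstepping law for a generic second-order mover model x'' = u + d, the structural starting point of such designs; the gains, the plant model, the reference command, and the disturbance are all illustrative assumptions.

```python
import numpy as np

# Backstepping tracking control of a double integrator x'' = u + d.
# k1, k2 are design gains; d(t) stands in for the lumped uncertainty.
k1, k2 = 50.0, 50.0
dt, T = 1e-3, 2.0

def reference(t):
    """Desired position, velocity, and acceleration (100 um sinusoid, assumed)."""
    w = 2 * np.pi
    return (1e-4 * np.sin(w * t),
            1e-4 * w * np.cos(w * t),
            -1e-4 * w**2 * np.sin(w * t))

x, v = 0.0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    xd, vd, ad = reference(t)
    z1 = x - xd                      # position tracking error
    alpha = vd - k1 * z1             # virtual control for the velocity loop
    z2 = v - alpha                   # velocity error w.r.t. the virtual control
    alpha_dot = ad - k1 * (v - vd)   # time derivative of the virtual control
    u = alpha_dot - z1 - k2 * z2     # backstepping law (Lyapunov-based)
    d = 1e-3 * np.sin(10 * t)        # small lumped disturbance (assumed)
    v += (u + d) * dt                # integrate the plant dynamics
    x += v * dt

print(f"final tracking error: {abs(x - reference(T)[0]):.2e} m")
```

With V = (z1² + z2²)/2, this choice of u gives V' = -k1·z1² - k2·z2² + z2·d, which is exactly the Lyapunov argument the abstract refers to; the neural network and compensated controller in the paper then take over the job of estimating and cancelling d.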
21 pages, 6136 KiB  
Article
A Real-Time, Non-Contact Method for In-Line Inspection of Oil and Gas Pipelines Using Optical Sensor Array
by Santhakumar Sampath, Bishakh Bhattacharya, Pouria Aryan and Hoon Sohn
Sensors 2019, 19(16), 3615; https://doi.org/10.3390/s19163615 - 20 Aug 2019
Cited by 25 | Viewed by 7517
Abstract
Corrosion is considered one of the most predominant causes of pipeline failures in the oil and gas industry and normally cannot be easily detected at the inner surface of pipelines without service disruption. The real-time inspection of oil and gas pipelines is [...] Read more.
Corrosion is considered one of the most predominant causes of pipeline failures in the oil and gas industry and normally cannot be easily detected at the inner surface of pipelines without service disruption. The real-time inspection of oil and gas pipelines is extremely vital to mitigate accidents and maintenance costs as well as to improve oil and gas transport efficiency. In this paper, a new, non-contact optical sensor array method for real-time inspection and non-destructive evaluation (NDE) of pipelines is presented. The proposed optical method consists of light emitting diodes (LEDs) and light dependent resistors (LDRs) that send light to and receive reflected light from the inner surface of pipelines. The uniqueness of the proposed method lies in its accurate detection and localization of corrosion defects based on an optical sensor array inside the pipeline, in the flexibility with which the system can be adapted to pipelines of different services, sizes, and materials, and in its economic viability. Experimental studies are conducted on corrosion defects with different features and dimensions to confirm the robustness and accuracy of the method. The obtained data are processed with the discrete wavelet transform (DWT) for noise cancelation and feature extraction. The estimated sizes of the corrosion defects are investigated for different physical parameters, such as inspection speed and lift-off distance, and, finally, preliminary tests are successfully conducted by implementing the proposed method on a purpose-built smart pipeline inspection gauge (PIG) for in-line inspection (ILI). Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: Complete procedure of the proposed method: (a) schematic of the optical sensor working principle and (b) the built circuit diagram.
Figure 2: Schematic of the optical sensor array housing system: (a) side view and top view of the 2-D optical sensor array system; (b) photographs of a side view and of the optical sensor array system.
Figure 3: The specimens for the experimental studies, with dimensions and three different types of corrosion: (a) deposit and cavity corrosion; (b) uniform corrosion.
Figure 4: Schematic diagram of the self-designed experimental setup with interconnections.
Figure 5: Picture of the self-designed laboratory testbed with different parts of the optical sensor array housing system, the rack-and-pinion mechanism, and the segmented pipeline.
Figure 6: The acquired sensor signals in the presence of deposit corrosion at 20-mm lift-off and 2.9 mm/s: (a) original time-domain sensor signal; (b) corresponding distance sensor signal.
Figure 7: At each level of decomposition: (a) signal-to-noise ratio (SNR) values; (b) root-mean-square error (RMSE) values.
Figure 8: Approximation and detail coefficients of the sensor signal at 2.9 mm/s.
Figure 9: De-noised sensor signal for deposit corrosion defects scanned at 2.9 mm/s and 20-mm lift-off: (a) time-domain sensor signal; (b) corresponding distance signal.
Figure 10: Discrete wavelet transform (DWT) de-noised sensor signals at different inspection speeds at the third decomposition level for 20-mm lift-off: (a) 7.3 mm/s, (b) 11 mm/s, and (c) 13 mm/s.
Figure 11: Percentage length error at different inspection speeds for different lift-offs: (a) 20-mm lift-off and (b) 30-mm lift-off.
Figure 12: Peak voltage corresponding to the height of deposit corrosion defects for different lift-offs at different inspection speeds: (a) 20 mm and (b) 30 mm.
Figure 13: De-noised distance sensor signal for cavity corrosion defects scanned at 2.9 mm/s and 20-mm lift-off.
Figure 14: De-noised distance sensor signal for uniform corrosion defects with various widths at 2.9 mm/s and 20-mm lift-off.
Figure 15: Images of the field test for the in-line application: (a) gas transporting pipeline; (b) the designed PIG.
Figure 16: Results of the real-world application using the proposed method: (a) photo of the pipeline joint; (b) output voltage of the sensor for the whole pipe inspection; (c) 2D image of the scanned pipeline with a zoomed abnormality area at a location between 70 m and 80 m.
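The denoising step described in the abstract (DWT decomposition, thresholding, SNR/RMSE comparison across levels) can be prototyped in a few lines with PyWavelets. The wavelet family, decomposition level, and threshold rule below are common defaults and are assumptions, not the paper's exact settings.

```python
import numpy as np
import pywt

# Synthetic LDR-style signal: a defect "bump" plus measurement noise (assumed).
t = np.linspace(0, 1, 1024)
clean = np.exp(-((t - 0.5) ** 2) / 0.001)          # defect signature
noisy = clean + np.random.normal(0, 0.1, t.size)

# 3-level discrete wavelet decomposition (db4 chosen as a common default).
coeffs = pywt.wavedec(noisy, "db4", level=3)

# Universal soft threshold, with the noise level estimated from the
# finest detail band via the median absolute deviation.
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thresh = sigma * np.sqrt(2 * np.log(noisy.size))
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]

denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

# SNR / RMSE against the known clean signal, as in the paper's evaluation.
rmse = np.sqrt(np.mean((denoised - clean) ** 2))
snr = 10 * np.log10(np.sum(clean ** 2) / np.sum((denoised - clean) ** 2))
print(f"RMSE = {rmse:.4f}, SNR = {snr:.1f} dB")
```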
13 pages, 10808 KiB  
Article
Railway Wheel Flat Detection System Based on a Parallelogram Mechanism
by Run Gao, Qixin He and Qibo Feng
Sensors 2019, 19(16), 3614; https://doi.org/10.3390/s19163614 - 20 Aug 2019
Cited by 16 | Viewed by 5569
Abstract
Wheel flats are a key fault in railway systems and can seriously endanger vehicle operation safety. At present, most wheel flat detection methods are qualitative and do not meet practical demands. In this paper, we used a railway wheel flat [...] Read more.
Wheel flats are a key fault in railway systems and can seriously endanger vehicle operation safety. At present, most wheel flat detection methods are qualitative and do not meet practical demands. In this paper, we used a railway wheel flat measurement method based on a parallelogram mechanism to detect wheel flats dynamically and quantitatively. Based on our experiments, we found that system performance was influenced by the train speed: when the train speed was higher than a certain threshold, the wheel impact force caused vibration of the measuring mechanism and affected the detection accuracy. Since the measuring system was installed at the entrance of the train garage, a three-dimensional simulation model based on rigid-flexible coupled multibody dynamics theory was established to meet the speed requirement. The speed threshold of the measuring mechanism was increased through appropriate selection of the damping coefficients of the hydraulic damper, the measuring positions, and the downward displacements of the measuring ruler. Finally, we applied the selected model parameters to the parallelogram mechanism, where field measurements showed that the experimental results were consistent with the simulation results. Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1: The vertical motion of a wheel flat: (a) the geometry of a wheel flat; (b) the vertical displacement response curve of point O.
Figure 2: A schematic diagram of the parallelogram mechanism.
Figure 3: An output waveform of the measuring device under ideal conditions.
Figure 4: The dynamic simulation model: (a) front view; (b) side view.
Figure 5: The measuring ruler motion waveform: (a) the simulation curve and the experimental curve; (b) the motion curve with different Young's moduli.
Figure 6: The influence of velocity on the motion process of the measuring ruler: (a) motion curves at different speeds; (b) relationship between speeds and maximum vibration displacements.
Figure 7: The motion states of the measuring ruler with different damping coefficients.
Figure 8: Schematic diagram of different measuring positions on the measuring ruler.
Figure 9: The motion state of the measuring ruler at different measuring positions.
Figure 10: The motion state of the measuring ruler with different downward displacements: (a) motion curves with different displacements at 30 km/h; (b) motion curves with different displacements at 40 km/h.
Figure 11: The architecture of the wheel flat detection system.
Figure 12: The motion process of the measuring ruler at different hydraulic damper gears on site.
Figure 13: The measuring waveform of the simulated wheel flat.
Figure 14: The measuring waveform of a wheel flat.
Figure 15: Photo of the installed parallelogram mechanism.
Figure 16: The motion process of the measuring ruler at different vehicle speeds on site.
Figure 17: The motion process of the mechanism before and after appropriate parameter selection.
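The geometric relation behind Figure 1, namely how a flat of chord length L on a wheel of radius R maps to a vertical drop of the wheel center, is compact enough to verify directly. The sketch below uses standard chord geometry; it is textbook geometry, not code from the paper, and the example dimensions are made up.

```python
import numpy as np

def flat_depth(R: float, L: float) -> float:
    """Depth of a flat of chord length L on a wheel of radius R (same units)."""
    return R - np.sqrt(R**2 - (L / 2) ** 2)

def center_drop(R: float, L: float, n: int = 200) -> np.ndarray:
    """Vertical drop of the wheel centre as the flat rolls through contact.

    While the flat is in contact, the wheel pivots on the flat's leading or
    trailing edge, so the centre height is R*cos(phi) for pivot angle phi.
    """
    phi_max = np.arcsin(L / (2 * R))        # half-angle subtended by the flat
    phi = np.linspace(-phi_max, phi_max, n)
    return R - R * np.cos(phi)              # drop relative to a round wheel

R, L = 0.42, 0.05                           # 840 mm wheel, 50 mm flat (example)
print(f"flat depth: {1e3 * flat_depth(R, L):.2f} mm, "
      f"max centre drop: {1e3 * center_drop(R, L).max():.2f} mm")
```

Note that the maximum centre drop equals the flat depth, which is why the measuring ruler's downward displacement is a direct quantitative proxy for the flat size.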
11 pages, 4130 KiB  
Article
Fiber Ring Laser Directional Torsion Sensor with Ultra-Wide Linear Response
by Xianjin Liu, Fengjuan Wang, Jiuru Yang, Xudong Zhang and Xiliang Du
Sensors 2019, 19(16), 3613; https://doi.org/10.3390/s19163613 - 20 Aug 2019
Cited by 16 | Viewed by 3231
Abstract
In this paper, a comprehensive passive torsion measurement is first performed in a 40-cm-long polarization maintaining fiber-based Sagnac interferometer (PMF-SI), and a non-linear torsion response is found and investigated. Then, a fiber laser torsion sensor (FLTS) with a dual-ring-cavity structure is proposed and [...] Read more.
In this paper, a comprehensive passive torsion measurement is first performed in a 40-cm-long polarization maintaining fiber-based Sagnac interferometer (PMF-SI), and a non-linear torsion response is found and investigated. Then, a fiber laser torsion sensor (FLTS) with a dual-ring-cavity structure is proposed and experimentally demonstrated, in which the PMF-SI is utilized as the optical filter as well as the sensing unit. In particular, the highly sensitive linear range is adjusted through fine phase modulation, and owing to the flat-top feature of the fringes, an ~83.6% sensitivity difference is effectively compressed by the generated lasing. The experimental results show that, without any pre-twisting, an ultra-wide linear response from −175 to 175 rad/m is obtained, and the torsion sensitivities are 2.46 and 1.55 nm/rad with high linearity (>0.99) in the clockwise and anticlockwise directions, respectively. Additionally, a high extinction ratio (>42 dB) and narrow linewidth (~0.14 nm) are obtained with the proposed FLTS, and the corresponding detection limit reaches 0.015 rad/m. Full article
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1: Schematic diagram of the Sagnac interferometer.
Figure 2: (a) The experimental setup for passive torsion sensing and (b) the transmission spectra of the BBS and PMF-SI. PMF: polarization-maintaining fiber; PC: polarization controller; BBS: broadband source; OSA: optical spectrum analyzer; SI: Sagnac interferometer.
Figure 3: Spectral evolution of the PMF-SI in the clockwise (CW) direction from (a) 0° to 180° and (b) 180° to 360°, and in the anticlockwise (ACW) direction from (c) 0° to −180° and (d) −180° to −360°.
Figure 4: Relationships between the wavelength shift and twist angle in (a) the clockwise direction and (b) the anticlockwise direction.
Figure 5: (a) Numerical simulation of the spectral evolution with varied θ_3 and (b) the relationship between the wavelength shift and θ_3.
Figure 6: Interference fringe peak and dip wavelength shifts with changing θ_τ.
Figure 7: Schematic diagram of the fiber laser torsion sensor (FLTS). WDM: wavelength division multiplexer; EDF: erbium-doped fiber; ISO: isolator.
Figure 8: (a) The output spectra, (b) extinction ratio (ER), and (c) linewidth (LW) of the fiber ring laser (FRL) under varied pump current (from 80 to 200 mA).
Figure 9: (a) Twist-induced lasing-wavelength shift and (b) response in the CW direction; (c) twist-induced lasing-wavelength shift and (d) response in the ACW direction.
Figure 10: The torsion responses at varied ambient temperatures.
Figure 11: The stability of the laser wavelengths within 2 hours.
19 pages, 3501 KiB  
Article
An Edge-Fog Secure Self-Authenticable Data Transfer Protocol
by Algimantas Venčkauskas, Nerijus Morkevicius, Vaidas Jukavičius, Robertas Damaševičius, Jevgenijus Toldinas and Šarūnas Grigaliūnas
Sensors 2019, 19(16), 3612; https://doi.org/10.3390/s19163612 - 19 Aug 2019
Cited by 21 | Viewed by 4931
Abstract
Development of the Internet of Things (IoT) opens many new challenges. As IoT devices are getting smaller and smaller, the problems of so-called “constrained devices” arise. The traditional Internet protocols are not very well suited for constrained devices comprising localized network nodes with [...] Read more.
Development of the Internet of Things (IoT) opens many new challenges. As IoT devices are getting smaller and smaller, the problems of so-called “constrained devices” arise. The traditional Internet protocols are not very well suited for constrained devices comprising localized network nodes with tens of devices primarily communicating with each other (e.g., various sensors in a Body Area Network). These devices have very limited memory, processing, and power resources, so traditional security protocols and architectures also do not fit well. To address these challenges, the Fog computing paradigm is used, in which all constrained devices, or Edge nodes, primarily communicate only with a less-constrained Fog node device, which collects all data, processes it, and communicates with the outside world. We present a new lightweight secure self-authenticable transfer protocol (SSATP) for communications between Edge nodes and Fog nodes. The primary target of the proposed protocol is to serve as a secure transport for CoAP (Constrained Application Protocol) in place of UDP (User Datagram Protocol) and DTLS (Datagram Transport Layer Security), which are the traditional choices in this scenario. SSATP uses modified header fields of standard UDP packets to transfer additional protocol handling and data flow management information as well as user data authentication information. Optional redundant data may be used to provide increased resistance to data losses when the protocol is used in unreliable networks. The results of the experiments presented in this paper show that SSATP is a better choice than UDP with DTLS in cases where the CoAP block transfer mode is used and/or in lossy networks. Full article
Show Figures

Figure 1: Three-layer Fog computing-based eHealth architecture.
Figure 2: Datagram Transport Layer Security (DTLS)-secured Constrained Application Protocol (CoAP) architecture.
Figure 3: Proposed modifications to the User Datagram Protocol (UDP) packet header: (a) standard UDP header; (b) modified header.
Figure 4: Energy consumption measurement setup: (a) overall picture of the setup; (b) principal diagram of the setup, where EDM is the Edge device module and EMM is the energy measuring module [50].
Figure 5: Power consumption of the Wi-Fi network card while transferring 1 MB of data using 2048 B data packets and different transport protocols.
Figure 6: Comparison of the time needed to transfer 1 MB of data using different transport protocols.
Figure 7: Comparison of user data losses using different transport protocols in a lossy network.
Figure 8: Comparison of energy consumption.
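The paper's key idea, reusing fields of the standard UDP header to carry protocol state and authentication data, can be illustrated with a toy packer. The field layout below is purely hypothetical (the actual SSATP layout is defined in the paper, Figure 3); this sketch only shows the mechanics of packing a UDP-shaped 8-byte header with a truncated message-authentication tag in Python.

```python
import hashlib
import hmac
import struct

KEY = b"shared-secret"   # pre-shared key (hypothetical)

def build_packet(seq: int, flags: int, payload: bytes) -> bytes:
    """Pack a UDP-shaped header whose fields are repurposed.

    Hypothetical layout (NOT the layout from the paper):
      source-port field -> 16-bit sequence number
      dest-port field   -> 16-bit flags (fragmentation / ack handling)
      length field      -> as in UDP (header + payload bytes)
      checksum field    -> 16-bit truncated HMAC over the payload
    """
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    auth16 = struct.unpack("!H", tag[:2])[0]          # truncated MAC
    header = struct.pack("!HHHH", seq, flags, 8 + len(payload), auth16)
    return header + payload

def verify_packet(packet: bytes) -> bool:
    """Recompute the truncated MAC and compare with the header field."""
    seq, flags, length, auth16 = struct.unpack("!HHHH", packet[:8])
    payload = packet[8:length]
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return auth16 == struct.unpack("!H", tag[:2])[0]

pkt = build_packet(seq=1, flags=0, payload=b"sensor reading: 36.6")
print(verify_packet(pkt))   # True
```

A 16-bit tag is obviously weak in isolation; the point of the sketch is only the header-reuse mechanism, which lets every datagram self-authenticate without the per-session handshake overhead of DTLS.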
14 pages, 4196 KiB  
Article
Seasonal Time Series Forecasting by F1-Fuzzy Transform
by Ferdinando Di Martino and Salvatore Sessa
Sensors 2019, 19(16), 3611; https://doi.org/10.3390/s19163611 - 19 Aug 2019
Cited by 6 | Viewed by 3537
Abstract
We present a new seasonal forecasting method based on the F1-transform (fuzzy transform of order 1) applied to weather datasets. The objective of this research is to improve the performance of the fuzzy transform-based prediction method applied to seasonal time series. The [...] Read more.
We present a new seasonal forecasting method based on the F1-transform (fuzzy transform of order 1) applied to weather datasets. The objective of this research is to improve the performance of the fuzzy transform-based prediction method applied to seasonal time series. The time series' trend is obtained via polynomial fitting; then, the dataset is partitioned into S seasonal subsets, and the direct F1-transform components are calculated for each seasonal subset. The inverse F1-transforms are used to predict the value of the weather parameter in the future. We test our method on heat index datasets obtained from daily weather data measured at weather stations of the Campania Region (Italy) during the months of July and August from 2003 to 2017. We compare the results with those obtained using the statistical Autoregressive Integrated Moving Average (ARIMA), Automatic Design of Artificial Neural Networks (ADANN), and seasonal F-transform methods, showing that the best results are given by our approach. Full article
(This article belongs to the Special Issue Intelligent Systems in Sensor Networks and Internet of Things)
Show Figures

Figure 1: Schema of the TSSF1 algorithm.
Figure 2: Trend of the heat index (HI) in the months of July and August (from 1 July 2003 to 16 August 2017) obtained from the Napoli Capodichino station dataset by using ninth-degree polynomial fitting.
Figure 3: Plot of the HI time series from the Napoli Capodichino station dataset obtained by using the seasonal Autoregressive Integrated Moving Average (ARIMA) algorithm.
Figure 4: Plot of the HI time series from the Napoli Capodichino station dataset obtained by using the Automatic Design of Artificial Neural Networks (ADANN) algorithm.
Figure 5: Plot of the HI time series from the Napoli Capodichino station dataset obtained by using the TSSF algorithm.
Figure 6: Plot of the HI time series from the Napoli Capodichino station dataset obtained by using the TSSF1 algorithm.
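A minimal direct/inverse fuzzy transform makes the method concrete. The sketch below implements the order-0 F-transform with a uniform triangular partition; the paper uses the order-1 variant (F1), whose components are local linear fits rather than the weighted means shown here, so this is an illustration of the transform family, not the paper's code, and the data are synthetic.

```python
import numpy as np

def triangular_basis(x, nodes):
    """Uniform triangular fuzzy partition (Ruspini) over the given nodes."""
    h = nodes[1] - nodes[0]
    return np.maximum(0.0, 1.0 - np.abs(x[None, :] - nodes[:, None]) / h)

def f_transform(x, y, n_nodes):
    """Direct order-0 F-transform: weighted mean of y under each basis."""
    nodes = np.linspace(x.min(), x.max(), n_nodes)
    A = triangular_basis(x, nodes)           # shape: (n_nodes, len(x))
    F = (A @ y) / A.sum(axis=1)              # one component per node
    return nodes, F, A

def inverse_f_transform(A, F):
    """Inverse F-transform: basis-weighted combination of the components."""
    return (F @ A) / A.sum(axis=0)

# Smooth a noisy daily heat-index-like series (synthetic data, assumed).
x = np.linspace(0, 61, 62)                   # e.g., days spanning July-August
y = 30 + 5 * np.sin(x / 10) + np.random.normal(0, 1, x.size)
nodes, F, A = f_transform(x, y, n_nodes=9)
y_hat = inverse_f_transform(A, F)
print(f"reconstruction RMSE: {np.sqrt(np.mean((y - y_hat) ** 2)):.2f}")
```

The seasonal variant described in the abstract applies this machinery per seasonal subset after removing the polynomial trend, then adds the trend back to the inverse transform to produce the forecast.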
15 pages, 2134 KiB  
Article
Radio Frequency Fingerprint-Based Intelligent Mobile Edge Computing for Internet of Things Authentication
by Songlin Chen, Hong Wen, Jinsong Wu, Aidong Xu, Yixin Jiang, Huanhuan Song and Yi Chen
Sensors 2019, 19(16), 3610; https://doi.org/10.3390/s19163610 - 19 Aug 2019
Cited by 29 | Viewed by 5498
Abstract
In this paper, a light-weight radio frequency fingerprinting identification (RFFID) scheme that combines with a two-layer model is proposed to realize authentications for a large number of resource-constrained terminals under the mobile edge computing (MEC) scenario without relying on encryption-based methods. In the [...] Read more.
In this paper, a light-weight radio frequency fingerprinting identification (RFFID) scheme combined with a two-layer model is proposed to realize authentication for a large number of resource-constrained terminals under the mobile edge computing (MEC) scenario without relying on encryption-based methods. In the first layer, signal collection, extraction of RF fingerprint features, dynamic feature database storage, and access authentication decisions are carried out by the MEC devices. In the second layer, learning features, generating decision models, and running machine learning algorithms for recognition are performed by the remote cloud. By this means, the authentication rate can be improved by taking advantage of the machine-learning training methods and the computing resources of the cloud. Extensive simulations are performed under an IoT application scenario. The results show that the novel method achieves a higher recognition rate than the traditional RFFID method by using the wavelet feature effectively, which demonstrates the efficiency of our proposed method. Full article
(This article belongs to the Special Issue Green, Energy-Efficient and Sustainable Networks)
Show Figures

Figure 1: Mobile edge computing-Internet-of-Things (MEC-IoT) architecture.
Figure 2: Radio frequency fingerprinting identification (RFFID) authentication method.
Figure 3: Detailed authentication process.
Figure 4: Flow chart of the RFFID-MEC algorithm.
Figure 5: Typical application scenarios of the RFFID-MEC authentication method.
Figure 6: Correct identification probability versus SNR for RFFID-MEC and RFFID using four different RF fingerprint features: envelope, phase, STFT, and wavelet.
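To make the feature pipeline concrete: one common way to build "wavelet features" for RF fingerprinting is to use the log-energy of each wavelet sub-band of the transient as a feature vector and feed it to an off-the-shelf classifier. The sketch below does exactly that on synthetic bursts; the wavelet, decomposition level, classifier, and data are all assumptions, not the paper's configuration.

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_burst(device_id: int) -> np.ndarray:
    """Toy RF transient whose turn-on envelope differs slightly per device."""
    t = np.linspace(0, 1, 512)
    rise = 1 - np.exp(-t * (8 + device_id))        # device-specific ramp
    return rise * np.sin(2 * np.pi * 40 * t) + rng.normal(0, 0.05, t.size)

def wavelet_energy_features(sig: np.ndarray) -> np.ndarray:
    """Log-energy of each sub-band of a 4-level db4 decomposition."""
    coeffs = pywt.wavedec(sig, "db4", level=4)
    return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

X = np.array([wavelet_energy_features(synthetic_burst(d % 4))
              for d in range(400)])
y = np.array([d % 4 for d in range(400)])          # four "devices"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"identification accuracy: {clf.score(X_te, y_te):.2f}")
```

In the two-layer scheme of the paper, the feature extraction would run on the MEC device and the classifier training on the remote cloud.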
27 pages, 5490 KiB  
Article
Evaluation of Sentinel-3A OLCI Products Derived Using the Case-2 Regional CoastColour Processor over the Baltic Sea
by Dmytro Kyryliuk and Susanne Kratzer
Sensors 2019, 19(16), 3609; https://doi.org/10.3390/s19163609 - 19 Aug 2019
Cited by 67 | Viewed by 7062
Abstract
In this study, the Level-2 products of the Ocean and Land Colour Instrument (OLCI) data on Sentinel-3A are derived using the Case-2 Regional CoastColour (C2RCC) processor for the SentiNel Application Platform (SNAP) whilst adjusting the specific scatter of Total Suspended Matter (TSM) for [...] Read more.
In this study, the Level-2 products of the Ocean and Land Colour Instrument (OLCI) on Sentinel-3A are derived using the Case-2 Regional CoastColour (C2RCC) processor for the SentiNel Application Platform (SNAP) whilst adjusting the specific scatter of Total Suspended Matter (TSM) for the Baltic Sea in order to improve TSM retrieval. The remote sensing product “kd_z90max” (i.e., the depth of the water column from which 90% of the water-leaving irradiance is derived) from C2RCC-SNAP showed a good correlation with in situ Secchi depth (SD). Additionally, a regional in-water algorithm was applied to derive SD from the attenuation coefficient Kd(489) using a local algorithm. Furthermore, a regional in-water relationship between particle scatter and bench turbidity was applied to generate turbidity from the remote sensing product “iop_bpart” (i.e., the scattering coefficient of marine particles at 443 nm). The spectral shape of the remote sensing reflectance (Rrs) data extracted from match-up stations was evaluated against reflectance data measured in situ by a tethered Attenuation Coefficient Sensor (TACCS) radiometer. The L2 products were evaluated against in situ data from several dedicated validation campaigns (2016–2018) in the NW Baltic proper. All derived L2 in-water products were statistically compared to in situ data, and the results were also compared to results for MERIS validation from the literature and to the current S3 Level-2 Water (L2W) standard processor from EUMETSAT. The Chl-a product showed a substantial improvement (mean normalized bias (MNB) 21%, root-mean-square error (RMSE) 88%, average absolute percentage difference (APD) 96%, n = 27) compared to concentrations derived from the Medium Resolution Imaging Spectrometer (MERIS), with a strong underestimation of higher values. TSM performed within an error comparable to MERIS data (MNB 25%, RMSE 73%, APD 63%, n = 23). Coloured Dissolved Organic Matter (CDOM) absorption retrieval also improved substantially when using the product “iop_adg” (i.e., the sum of organic detritus and Gelbstoff absorption at 443 nm) as a proxy (MNB 8%, RMSE 56%, APD 54%, n = 18). The local SD (MNB 6%, RMSE 62%, APD 60%, n = 35) and turbidity (MNB 3%, RMSE 35%, APD 34%, n = 29) algorithms showed very good agreement with in situ data. We recommend the use of SNAP C2RCC with regionally adjusted TSM-specific scatter for water product retrieval, as well as the regional turbidity algorithm, for Baltic Sea monitoring. Besides documenting the evaluation of the C2RCC processor, this paper may also act as a handbook on the validation of Ocean Colour data. Full article
(This article belongs to the Special Issue Remote Sensing of Ocean Colour: Theory and Applications)
Show Figures

Figure 1: ESA's operational Copernicus programme with the Sentinel family. ©ESA (modified from [7]). We investigate here data from OLCI on Sentinel-3 (S3) for Baltic Sea applications.
Figure 2: Location of the in situ sampling stations used for validation of Sentinel-3A during dedicated sampling campaigns in May 2016, June–July 2017, and April–May 2018. Bathymetry data [27]; HELCOM sub-basins [28]; European coastline shapefile (European Environment Agency [29]); country boundaries (Natural Earth [30]).
Figure 3: Overview of the sampling stations during the dedicated Sentinel-3A OLCI validation campaigns in the Baltic proper during May 2016, July–August 2017, and April–May 2018, superimposed on full-resolution true-color satellite images of the stations sampled in Baltic Sea coastal waters. The main challenge for validation efforts in the Baltic Sea is the extensive cloud cover, as can be seen on several satellite images; about 50% of the match-up stations were flagged.
Figure 4: (a) Remote sensing reflectance R_rs derived from S3A OLCI FR using the C2RCC processor, plotted in the 400–673 nm range for all sampling stations from all three validation campaign seasons in HF, 2016–2018; (b) remote sensing reflectance R_rs derived from the TACCS processor during the field campaigns in the Baltic proper in 2018 (H and B denote stations in the Baltic proper). The in situ reflectance data do not show the shift towards 490 nm indicated by the satellite data.
Figure 5: Concentrations of (a) Total Suspended Matter (conc_tsm) and (b) chlorophyll-a (conc_chl) derived from S3A OLCI data using C2RCC, plotted against in situ concentrations.
Figure 6: Absorption coefficients of (a) Gelbstoff at 443 nm (iop_agelb) and (b) detritus + Gelbstoff at 443 nm (iop_adg) derived from S3A OLCI using C2RCC-SNAP, both compared to in situ CDOM absorption, a_CDOM(440).
Figure 7: A proxy for Secchi depth: (a) "kd_z90max" and (b) the Secchi depth algorithm based on Kd(489) (diffuse attenuation coefficient at 489 nm) from Alikas et al. [24], derived from S3A OLCI data using C2RCC with locally adapted parameters and compared to in-water Secchi depth measurements.
Figure 8: Turbidity products (a) Turb1 and (b) Turb2 derived from "iop_bpart" using the algorithms described above, applied to S3A OLCI data generated by C2RCC-SNAP with locally adapted parameters, and both compared to in situ turbidity.
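The error statistics quoted throughout the abstract (MNB, RMSE in %, APD) follow standard ocean-colour validation practice. A small helper like the one below computes them from matched satellite/in situ pairs; the normalization conventions used here are the ones commonly found in the MERIS/OLCI validation literature and should be checked against the paper's definitions before reuse. The match-up numbers are made up.

```python
import numpy as np

def validation_stats(estimated, observed):
    """MNB, relative RMSE, and APD in percent for match-up pairs.

    MNB  : mean normalized bias,          mean((est - obs) / obs)
    RMSE : root-mean-square of the same normalized differences
    APD  : average absolute percentage difference, mean(|est - obs| / obs)
    """
    est, obs = np.asarray(estimated, float), np.asarray(observed, float)
    rel = (est - obs) / obs
    return {
        "MNB_%": 100 * rel.mean(),
        "RMSE_%": 100 * np.sqrt(np.mean(rel ** 2)),
        "APD_%": 100 * np.abs(rel).mean(),
        "n": est.size,
    }

# Example with made-up chlorophyll match-ups (mg m^-3):
sat = [2.1, 3.4, 1.8, 5.0, 2.9]
insitu = [2.0, 3.0, 2.2, 4.1, 3.3]
print(validation_stats(sat, insitu))
```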
18 pages, 4968 KiB  
Article
Modeling and Control of a Six Degrees of Freedom Maglev Vibration Isolation System
by Qianqian Wu, Ning Cui, Sifang Zhao, Hongbo Zhang and Bilong Liu
Sensors 2019, 19(16), 3608; https://doi.org/10.3390/s19163608 - 19 Aug 2019
Cited by 12 | Viewed by 3667
Abstract
The environment in space provides favorable conditions for space missions. However, low frequency vibration poses a great challenge to high sensitivity equipment, resulting in performance degradation of sensitive systems. Due to the ever-increasing requirements to protect sensitive payloads, there is a pressing need [...] Read more.
The environment in space provides favorable conditions for space missions. However, low frequency vibration poses a great challenge to high sensitivity equipment, resulting in performance degradation of sensitive systems. Due to the ever-increasing requirements to protect sensitive payloads, there is a pressing need for micro-vibration suppression. This paper deals with the modeling and control of a maglev vibration isolation system. A high-precision nonlinear dynamic model with six degrees of freedom was derived, which contains the mathematical model of the Lorentz actuators and umbilical cables. To improve the system performance, a double closed-loop control strategy was proposed, and a sliding mode control algorithm was adopted to improve the vibration isolation performance. A simulation program of the system was developed in a MATLAB environment. A vibration isolation performance in the frequency range of 0.01–100 Hz and a tracking performance below 0.01 Hz were obtained. In order to verify the nonlinear dynamic model and the isolation performance, a principle prototype of the maglev isolation system equipped with accelerometers and position sensors was developed for the experiments. By comparing the simulation results and the experimental results, the nonlinear dynamic model of the maglev vibration isolation system was verified, and the control strategy of the system was proved to be highly effective. Full article
(This article belongs to the Special Issue Advance in Sensors and Sensing Systems for Driving and Transportation)
Show Figures

Figure 1: Diagram of the maglev vibration isolation platform: (a) general structure diagram; (b) layout of actuators and sensors.
Figure 2: Schematic diagram of the coordinate systems.
Figure 3: Schematic diagram of the attitude change between the coil and the magnet groups.
Figure 4: Position vectors of the actuators and installation points of the cables.
Figure 5: Configuration of the Lorentz actuators.
Figure 6: Block diagram of the control system (the absolute displacement error is denoted e_a).
Figure 7: Motion of the platform with and without isolation control along the X direction: (a) absolute displacement; (b) absolute linear velocity; (c) absolute linear acceleration.
Figure 8: Motion of the platform with and without sliding mode control along the X direction: (a) absolute displacement; (b) absolute linear velocity; (c) absolute linear acceleration.
Figure 9: Motion of the platform with and without tracking control along the X direction: (a) relative displacement; (b) relative linear velocity; (c) relative linear acceleration.
Figure 10: Experimental setup: (a) prototype and control box; (b) vibration isolation experiment setup.
Figure 11: Comparison between simulation results and test results: (a) position comparison along X; (b) position comparison along Y; (c) position comparison along Z; (d) angle comparison around X; (e) angle comparison around Y; (f) angle comparison around Z.
Figure 12: Comparison of the acceleration response of the platform with disturbance: (a) acceleration along X; (b) acceleration along Y; (c) acceleration along Z; (d) acceleration around X; (e) acceleration around Y; (f) acceleration around Z.
Figure 13: Spectrum analysis of the acceleration of the platform and the disturbance: (a) amplitude along X; (b) amplitude along Y; (c) amplitude along Z; (d) amplitude around X; (e) amplitude around Y; (f) amplitude around Z.
Figure 14: Comparison between simulation results and test results: (a) acceleration along X; (b) acceleration along Y; (c) acceleration along Z; (d) acceleration around X; (e) acceleration around Y; (f) acceleration around Z.
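The sliding mode controller used for the isolation loop is described only at a high level in the abstract. The sketch below shows the canonical form of such a controller for one axis: a sliding surface s = e' + λe and a switching law smoothed with a boundary layer to limit chattering. The gains, plant, and disturbance are illustrative assumptions, not the paper's design.

```python
import numpy as np

# One-axis sliding mode position control of a floated mass: m*x'' = u + d.
m, lam, k, phi = 1.0, 8.0, 5.0, 0.01     # mass, surface slope, gain, boundary
dt, T = 1e-3, 3.0

x, v = 0.002, 0.0                        # start with a 2 mm offset
for step in range(int(T / dt)):
    t = step * dt
    e, e_dot = x - 0.0, v - 0.0          # regulate toward the origin
    s = e_dot + lam * e                  # sliding surface
    sat = np.clip(s / phi, -1.0, 1.0)    # boundary-layer switching term
    u = -m * lam * e_dot - k * sat       # equivalent control + switching
    d = 0.2 * np.sin(2 * np.pi * 0.5 * t)  # low-frequency disturbance (assumed)
    v += (u + d) / m * dt                # integrate the plant dynamics
    x += v * dt

print(f"residual position after {T:.0f} s: {x:.2e} m")
```

As long as the switching gain k exceeds the disturbance bound, the state is driven to the sliding surface regardless of the exact disturbance waveform, which is what makes this control family attractive for the low-frequency micro-vibration problem the paper targets.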
18 pages, 11466 KiB  
Article
Joint Banknote Recognition and Counterfeit Detection Using Explainable Artificial Intelligence
by Miseon Han and Jeongtae Kim
Sensors 2019, 19(16), 3607; https://doi.org/10.3390/s19163607 - 19 Aug 2019
Cited by 22 | Viewed by 5102
Abstract
We investigated a machine learning-based joint banknote recognition and counterfeit detection method. Unlike existing methods, the proposed method recognizes the banknote type and detects counterfeits simultaneously, making it significantly faster than existing serial banknote recognition and counterfeit detection methods. Furthermore, we propose an [...] Read more.
We investigated a machine learning-based joint banknote recognition and counterfeit detection method. Unlike existing methods, the proposed method recognizes the banknote type and detects counterfeits simultaneously, making it significantly faster than existing serial banknote recognition and counterfeit detection methods. Furthermore, we propose an explainable artificial intelligence method for visualizing the regions that contributed to the recognition and detection. Using the visualization, it is possible to understand the behavior of the trained machine learning system. In experiments using United States Dollar and European Union Euro banknotes, the proposed method shows a significant improvement in computation time over the conventional serial method. Full article
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1: Grad-CAM results for some example banknotes. The first three columns show visible, infrared transmission, and infrared reflection images of the banknotes. The fourth and fifth columns show Grad-CAM results for banknote recognition and counterfeit detection, respectively.
Figure 2: Possible banknote directions.
Figure 3: Different modality images. The leftmost column shows visible, the center column infrared transmission, and the rightmost column infrared reflection images.
Figure 4: Joint banknote recognition and counterfeit detection system.
Figure 5: pGrad-CAM flow.
Figure 6: Average batch losses (convergence graphs).
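The visualization shown in Figure 1 is Grad-CAM (the paper also uses a variant, pGrad-CAM, whose details are in the full text). The standard Grad-CAM recipe is short enough to sketch with PyTorch hooks; the network and target layer below are placeholders, not the paper's model.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()     # placeholder network (assumed)
target_layer = model.layer4               # last conv stage (assumed choice)

acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

def grad_cam(image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Grad-CAM heatmap: ReLU of gradient-weighted activation maps."""
    logits = model(image)
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # GAP over space
    cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return cam / (cam.max() + 1e-8)       # normalize to [0, 1]

heat = grad_cam(torch.randn(1, 3, 224, 224), class_idx=0)
print(heat.shape)                          # torch.Size([1, 1, 224, 224])
```

For the joint model in the paper, the same procedure can be run against either output head (recognition or counterfeit detection) to produce the two heatmap columns of Figure 1.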
17 pages, 5293 KiB  
Article
The Heading Weight Function: A Novel LiDAR-Based Local Planner for Nonholonomic Mobile Robots
by El Houssein Chouaib Harik and Audun Korsaeth
Sensors 2019, 19(16), 3606; https://doi.org/10.3390/s19163606 - 19 Aug 2019
Cited by 5 | Viewed by 5574
Abstract
In this paper, we present a novel method for obstacle avoidance designed for a nonholonomic mobile robot. The method relies on light detection and ranging (LiDAR) readings, which are mapped into a polar coordinate system. Obstacles are taken into consideration when they are [...] Read more.
In this paper, we present a novel method for obstacle avoidance designed for a nonholonomic mobile robot. The method relies on light detection and ranging (LiDAR) readings, which are mapped into a polar coordinate system. Obstacles are taken into consideration when they are within a predefined radius from the robot. A central part of the approach is a new Heading Weight Function (HWF), in which the beams within the aperture angle of the LiDAR are virtually weighted in order to generate the best trajectory candidate for the robot. The HWF is designed to find a solution also in the case of a local-minima situation. The function is coupled with the robot’s controller in order to provide both linear and angular velocities. We tested the method both by simulations in a digital environment with a range of different static obstacles, and in a real, experimental environment including static and dynamic obstacles. The results showed that when utilizing the novel HWF, the robot was able to navigate safely toward the target while avoiding all obstacles included in the tests. Our findings thus show that it is possible for a robot to navigate safely in a populated environment using this method, and that sufficient efficiency in navigation may be obtained without basing the method on a global planner. This is particularly promising for navigation challenges occurring in unknown environments where models of the world cannot be obtained. Full article
(This article belongs to the Section Remote Sensors)
Show Figures

Graphical abstract

Graphical abstract
Figure 1. Mobile robot kinematics within a 2D planar surface (see text for symbol and parameter explanations).
Figure 2. Representation of the Lidar beams on the mobile robot. The actual number of beams is much higher than illustrated here.
Figure 3. Obstacle presence in the path of the mobile robot.
Figure 4. The case of the local-minima configuration.
Figure 5. Free space threshold (FST) representation.
Figure 6. Three cases in which no FST exists, with the goal being slightly to the right of the current course (A), straight ahead (B), or slightly to the left of the current course (C).
Figure 7. Simulation results showing the robot navigating toward waypoint n1, with the navigation area in Gazebo (A) and the simulated state of the robot in Rviz (B). In Gazebo (A), the red, green, and blue lines represent the 3D axes of the navigation area. In Rviz (B), the robot is shown as a white 3D model, and the waypoint is shown as a blue dot. The green line represents the saved path of the robot, the red arrow the heading β between the robot and the waypoint in the global frame, and the black arrow the actual heading θ of the robot (i.e., the result of combining the heading to the waypoint α with the HWF for obstacle avoidance).
Figure 8. Simulation results showing the path of the robot from its initial location to waypoint n1, with the navigation area in Gazebo (A) and the simulated state of the robot in Rviz (B).
Figure 9. Simulation results showing the global navigation path of the robot, with the navigation area in Gazebo (A) and the simulated state of the robot in Rviz (B). The waypoints are numbered in chronological order of selection; every time the robot reaches a given waypoint, a new one is given.
Figure 10. The variation over time of the distance between the robot's position and the given waypoints.
Figure 11. The variation over time of the angle between the robot's position and the given waypoints.
Figure 12. Simulation environment for the case in which the FST condition is not satisfied, with the navigation area in Gazebo (A) and the simulated state of the robot in Rviz (B).
Figure 13. Simulation results for the case in which the FST condition is not satisfied, when the selected waypoint is behind and slightly to the right of the obstacle (A), straight ahead and behind the obstacle (the local-minima configuration) (B), or behind and slightly to the left of the obstacle (C).
Figure 14. Experimental setup.
Figure 15. Experimental results showing static obstacle avoidance. The mobile robot traveled across the room, past randomly placed boxes, to a given waypoint. The virtual green line overlaid on the ground represents the path of the robot.
Figure 16. Experimental results showing static and dynamic obstacle avoidance. In (A), a human enters the scene and keeps obstructing the robot's path (B–D). In (E), the human stops obstructing the path, and leaves the scene in (F). In (G), the robot is parked at its final destination. The virtual green line overlaid on the ground represents the path of the robot.
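The heading quantities α, β, and θ recur throughout Figures 7–13, so a small worked example may help. This is a minimal sketch assuming standard planar kinematics; the function name is hypothetical, and the HWF step that blends α with obstacle information to produce the actual heading θ is not reproduced here.

```python
import math

def heading_to_waypoint(robot_xy, robot_yaw, waypoint_xy):
    """Illustrate the bearing quantities shown in Figure 7.

    beta  -- bearing from the robot to the waypoint in the global frame
    alpha -- the same bearing expressed relative to the robot's heading
    """
    dx = waypoint_xy[0] - robot_xy[0]
    dy = waypoint_xy[1] - robot_xy[1]
    beta = math.atan2(dy, dx)  # global-frame bearing to the waypoint
    # wrap the relative heading to (-pi, pi]
    alpha = math.atan2(math.sin(beta - robot_yaw), math.cos(beta - robot_yaw))
    return beta, alpha

beta, alpha = heading_to_waypoint((0.0, 0.0), math.radians(30), (2.0, 1.0))
print(round(math.degrees(beta), 1), round(math.degrees(alpha), 1))
```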
13 pages, 846 KiB  
Article
Physical and Tactical Demands of the Goalkeeper in Football in Different Small-Sided Games
by Daniel Jara, Enrique Ortega, Miguel-Ángel Gómez-Ruano, Matthias Weigelt, Brittany Nikolic and Pilar Sainz de Baranda
Sensors 2019, 19(16), 3605; https://doi.org/10.3390/s19163605 - 19 Aug 2019
Cited by 21 | Viewed by 6339
Abstract
Background: Several studies have examined the differences between the different small-sided game (SSG) formats. However, only one study has analysed how the different variables that define SSGs can modify the goalkeeper’s behavior. The aim of the present study was to analyze how the modification of the pitch size in SSGs affects the physical demands of the goalkeepers. Methods: Three professional male football goalkeepers participated in this study. Three different SSGs were analysed (62 m × 44 m for a large pitch, 50 m × 35 m for a medium pitch, and 32 m × 23 m for a small pitch). Positional data for each goalkeeper were gathered using an 18.18 Hz global positioning system and used to compute the players’ spatial exploration index, standard ellipse area (SEA), and prediction ellipse area (PEA). The distance covered, the distance covered at different intensities, and accelerations/decelerations were used to assess the players’ physical performance. Results and Conclusions: There were differences between small and large SSGs in relation to the distances covered at different intensities and pitch exploration. Intensities were lower when the pitch size was larger. Besides that, the pitch exploration variables increased along with the increment of the pitch size. Full article
(This article belongs to the Special Issue Advanced Sensors Technology in Education)
Figure 1. Three different pitch sizes of the SSGs compared to the normal size of a football pitch. The individual playing area did not take goalkeepers into account [36].
Figure 2. Representation of the PEA and SEA. The major and minor axes of the PEA and SEA are shown. The rotation (α) of both ellipses is obtained using Formula (4) and is applied about the x-axis.
Figure 3. Representation of the SEA and PEA in a small SSG (32 m × 23 m).
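The SEA and PEA of Figures 2 and 3 can be computed from positional data in a few lines. A hedged sketch under common definitions (the 1-SD standard ellipse, and a chi-square-scaled prediction ellipse); the paper's exact formulae, including its Formula (4) for the rotation α, may differ in detail.

```python
import numpy as np
from scipy.stats import chi2

def ellipse_areas(xy, coverage=0.95):
    """Sketch of the ellipse metrics: SEA as the area of the 1-SD
    (standard) ellipse, pi * sqrt(det(S)) with S the covariance of the
    (x, y) positions, and PEA as the SEA scaled by the chi-square
    quantile for the requested coverage (2 degrees of freedom).
    """
    S = np.cov(np.asarray(xy, dtype=float).T)       # 2x2 covariance
    sea = np.pi * np.sqrt(np.linalg.det(S))
    pea = chi2.ppf(coverage, df=2) * sea
    evals, evecs = np.linalg.eigh(S)                # ellipse orientation
    alpha = np.arctan2(evecs[1, -1], evecs[0, -1])  # rotation vs x-axis
    return sea, pea, alpha

positions = np.random.randn(500, 2) * [6.0, 3.0]    # toy GPS track (m)
print(ellipse_areas(positions))
```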
20 pages, 14408 KiB  
Article
Real-Time Photometric Calibrated Monocular Direct Visual SLAM
by Peixin Liu, Xianfeng Yuan, Chengjin Zhang, Yong Song, Chuanzheng Liu and Ziyan Li
Sensors 2019, 19(16), 3604; https://doi.org/10.3390/s19163604 - 19 Aug 2019
Cited by 13 | Viewed by 4977
Abstract
To solve the illumination sensitivity problems of mobile ground equipment, an enhanced visual SLAM algorithm based on the sparse direct method was proposed in this paper. Firstly, the vignette and response functions of the input sequences were optimized based on the photometric formation of the camera. Secondly, the Shi–Tomasi corners of the input sequence were tracked, and optimization equations were established using the pixel tracking of sparse direct visual odometry (VO). Thirdly, the Levenberg–Marquardt (L–M) method was applied to solve the joint optimization equation, and the photometric calibration parameters in the VO were updated to realize real-time dynamic compensation of the exposure of the input sequences, which reduced the effects of light variations on the accuracy and robustness of SLAM (simultaneous localization and mapping). Finally, a Shi–Tomasi corner filtering strategy was designed to reduce the computational complexity of the proposed algorithm, and loop closure detection was realized based on the oriented FAST and rotated BRIEF (ORB) features. The proposed algorithm was tested on the TUM, KITTI, and EuRoC datasets as well as in an actual environment, and the experimental results show that the positioning and mapping performance of the proposed algorithm is promising. Full article
(This article belongs to the Special Issue Intelligent Vehicles)
Figure 1. The flow chart of the proposed system. The approach is divided into three elements: the indirect feature tracker, the sliding-window photometric compensation, and the back-end optimization. The green frame is the latest frame included in the localization, mapping, and photometric-parameter calculation; these parameters are then used to compensate the red frame.
Figure 2. The flow chart of photometric image formation. The original energy emitted from the scene (the radiance) is affected by the vignetting effect of the lens and the exposure time of the shutter.
Figure 3. Partial photometric calibration results on KITTI sequence 00. Subfigures (a) are the tracked original frames, and subfigures (b) are the compensated frames. In (b), the global exposure is enhanced, especially at the edges of the image, and the brightness values remain continuous.
Figure 4. The factor graph of the direct formulation. The tracked pixel of the host frame is represented by the solid red line and is linked to co-visible frames by the dotted blue line. For each term of the tracked pixel, an energy function of the residual is established to calculate the inverse depth and photometric parameters (black line). The parameters are then used to compensate the next frame.
Figure 5. The photometric error based on the photometric formation. The pixel intensity of the tracked point p, denoted p′, is restored to the estimated scene radiance, and the residual with respect to the current scene radiance is calculated to establish the photometric error equation. For the camera pose change between I_j and I_i, the pose equation based on the locations of p′ and p is used to calculate the se(3) increment.
Figure 6. Experimental results on the EuRoC V1_03_difficult dataset. Subfigures (d), (e), and (f) are modified from subfigures (a), (b), and (c), respectively.
Figure 7. Experimental results on the TUM-Mono dataset. Subfigures (a) and (b) show the trajectories of sequences 04 and 31, respectively, along the x-axis and z-axis for our system. Below are direct sparse odometry (DSO); DSO with loop closure (LDSO); and enhanced DSO, which integrates the algorithms proposed in [12] and [14].
Figure 8. The 6-degree-of-freedom (6-DoF) error on TUM-Mono sequence 04 with respect to the ground truth, for our system and the enhanced DSO of [12] and [14].
Figure 9. The photometric parameters of randomly selected frames. The estimated exposure times are very close to the ground truth, whereas the response-function and vignette parameters are sharply adjusted to fit the pixel intensities of the different scenes.
Figure 10. The exposure conditions and pixel tracking on TUM-Mono sequence 04.
Figure 11. Experimental trajectory results of our system, DSO, LDSO, and the enhanced DSO integrating the algorithms proposed in [12] and [14], along the x-axis and z-axis of KITTI sequence 00.
Figure 12. The ORB features lost during tracking on KITTI sequence 01.
Figure 13. The 6-DoF residuals on KITTI sequence 00, including the translations and the Euler angles of rotation. The residuals in subfigure (b) are noticeably larger for both the proposed system and LDSO because of the introduction of loop-closure detection. However, the error tendencies on the x-axis and z-axis are mostly lower than those of LDSO and the enhanced DSO [14]. In general, the translation and rotation errors of the proposed system remain stable at reasonable values along all frames of KITTI sequence 00.
Figure 14. Comparison of precision–recall ratios between our system and LDSO on the KITTI dataset.
Figure 15. Segmental experimental results of LDSO and the proposed system on the EuRoC V1_03_difficult sequence.
Figure 16. The camera and notebook installed on the mobile ground equipment. The camera was calibrated using a checkerboard to eliminate radial distortion.
Figure 17. The actual environmental experiment with LDSO and the proposed algorithm. Subfigure (a) shows the adjusted environmental illumination during the experiment; both LDSO and the proposed algorithm were tested under this condition.
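Figures 2–5 revolve around the photometric formation model. As a rough orientation, here is a hedged sketch of how a frame could be compensated once the response function G, vignette V, and exposure time t have been estimated; the function and parameter shapes are assumptions, not the paper's implementation.

```python
import numpy as np

def compensate_frame(img, inv_response, vignette, exposure_t):
    """Invert the photometric formation model I(x) = G(t * V(x) * B(x))
    to recover the scene radiance B used by the direct photometric error:
    undo the response G, divide out the vignette V, normalize by t.
    """
    radiance = inv_response[img.astype(np.uint8)]        # undo response G
    radiance = radiance / np.clip(vignette, 1e-6, None)  # undo vignette V
    return radiance / exposure_t                         # undo exposure t

# toy usage with an identity response table and a flat vignette
img = (np.random.rand(4, 4) * 255).astype(np.uint8)
inv_g = np.linspace(0.0, 1.0, 256)   # lookup table approximating G^-1
B = compensate_frame(img, inv_g, np.ones((4, 4)), exposure_t=0.02)
```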
20 pages, 4831 KiB  
Article
Energy Efficient Range-Free Localization Algorithm for Wireless Sensor Networks
by Rekha Goyat, Mritunjay Kumar Rai, Gulshan Kumar, Rahul Saha and Tai-Hoon Kim
Sensors 2019, 19(16), 3603; https://doi.org/10.3390/s19163603 - 19 Aug 2019
Cited by 20 | Viewed by 3849
Abstract
In this paper, an energy-efficient localization algorithm is proposed for precise localization in wireless sensor networks (WSNs), and the process is accomplished in three steps. Firstly, the beacon nodes discover their one-hop neighbor nodes with additional tone request and reply packets over the media access control (MAC) layer to avoid packet collisions. Secondly, the discovered one-hop unknown nodes are divided into two sets, i.e., unknown nodes with direct communication and unknown nodes with indirect communication, for energy efficiency. In direct communication, source beacon nodes forward the information directly to the unknown nodes, whereas in indirect communication a common beacon node is selected for relaying, which reduces the overall energy consumption during transmission. Finally, a correction factor is also introduced, and localized unknown nodes are upgraded into helper nodes to reduce the localization error. To analyze the efficiency and effectiveness of the proposed algorithm, various simulations are conducted and compared with the existing algorithms. Full article
(This article belongs to the Section Sensor Networks)
Figure 1. Radio pattern with degree of irregularity (DOI).
Figure 2. Flowchart of the proposed methodology. Nearest Neighbor Request (NNReQ) and Nearest Neighbor Reply (NNReP).
Figure 3. Neighbor-node discovery. Nearest Neighbor Request Tone (NNReQT), Nearest Neighbor Reply Tone (NNRePT), and short inter-frame space (SIFS).
Figure 4. Unknown nodes with direct and indirect communication.
Figure 5. Node distribution with communication cost.
Figure 6. Deployment of sensor nodes.
Figure 7. Impact on the average localization error (ALE) of varying (a) the ratio of beacon nodes and (b) the node density.
Figure 8. (a) Probability of finding the true location; (b) impact of the sensing-field size on the ALE.
Figure 9. Impact of network connectivity on (a) the proportion of unplaced sensor nodes (PUSN) and (b) the proportion of placed sensor nodes (PPSN).
Figure 10. Impact of the DOI on the ALE.
Figure 11. Impact of a variable transmission range on the ALE (DOI = 0).
Figure 12. Impact of simulation time on residual energy.
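The abstract describes unknown nodes estimating their positions from one-hop beacon information before a correction factor is applied. The paper's exact estimator is not spelled out in this listing, so the following is a generic range-free baseline, a weighted centroid of the heard beacons; the function name and weighting are illustrative assumptions.

```python
import numpy as np

def weighted_centroid(beacon_xy, weights=None):
    """Estimate an unknown node's position as a weighted centroid of its
    one-hop beacon coordinates, a common range-free baseline.
    """
    beacon_xy = np.asarray(beacon_xy, dtype=float)
    w = np.ones(len(beacon_xy)) if weights is None else np.asarray(weights, float)
    return (beacon_xy * w[:, None]).sum(axis=0) / w.sum()

# an unknown node heard three beacons; weights might encode link quality
print(weighted_centroid([(0, 0), (10, 0), (0, 10)], weights=[1, 2, 1]))
```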
31 pages, 15283 KiB  
Article
Bin-Picking for Planar Objects Based on a Deep Learning Network: A Case Study of USB Packs
by Tuan-Tang Le and Chyi-Yeu Lin
Sensors 2019, 19(16), 3602; https://doi.org/10.3390/s19163602 - 19 Aug 2019
Cited by 30 | Viewed by 8324
Abstract
Random bin-picking is a prominent, useful, and challenging industrial robotics application. However, many industrial and real-world objects are planar and have oriented surface points that are not sufficiently compact and discriminative for methods relying on geometry information, especially depth discontinuities. This study solves the above-mentioned problems by proposing a novel and robust solution for random bin-picking of planar objects in a cluttered environment. Different from other research that has mainly focused on 3D information, this study first applies an instance segmentation-based deep learning approach using 2D image data for classifying and localizing the target object while generating a mask for each instance. The presented approach, moreover, serves as a pioneering method to extract 3D point cloud data based on 2D pixel values for building an appropriate coordinate system on the planar object plane. The experimental results showed that the proposed method reached an accuracy rate of 100% for classifying two-sided objects in the unseen dataset, and the 3D pose prediction was highly accurate, with average translation and rotation errors of less than 0.23 cm and 2.26°, respectively. Finally, the system success rate for picking up objects was over 99% at an average processing time of 0.9 s per step, fast enough for continuous robotic operation without interruption. This represents a higher successful pickup rate than previous approaches to random bin-picking problems. Successful implementation of the proposed approach for USB packs provides a solid basis for other planar objects in a cluttered environment. With remarkable precision and efficiency, this study shows significant commercialization potential. Full article
(This article belongs to the Special Issue Sensors and Robot Control)
Figure 1. Architecture of the proposed system.
Figure 2. Both the front and back sides of the object.
Figure 3. Three examples of actual scenarios.
Figure 4. Top: original image and three other versions after the original data passed through the augmentation module. Bottom: corresponding binary images.
Figure 5. Typical image-processing task.
Figure 6. Implementation of deep learning for image processing.
Figure 7. Flowchart of the proposed 3D object-pose prediction method.
Figure 8. Example of two acceptable coordinate systems on the target plane.
Figure 9. Kinect v1 and v2 overlap regions in the captured scene. The green rectangle represents the RGB view, and the red rectangle represents the IR view [47].
Figure 10. (a) IR image, (b) original image, (c) registered color and IR image obtained by transformation and cropping, (d) depth image.
Figure 11. Demonstration of the mapping methodology for Kinect v2.
Figure 12. Method of calculating the final target for the front-side case.
Figure 13. Example of a nonrectangular object with background used as the query image, with reference vectors to locate the destination point.
Figure 14. Overall results of the proposed technique on a nonrectangular object.
Figure 15. Procedure for building an appropriate coordinate system on the target object to create a rotation matrix with respect to the camera.
Figure 16. Camera and object positions with respect to the robot base.
Figure 17. (a) Kinematic model; (b) a supportive platform to capture the object’s ground truth with respect to the robot (for both the front and back sides).
Figure 18. Loss graph for the deep learning model.
Figure 19. (a,c) Ground truth of two test cases; (b,d) the prediction results corresponding to the two ground truths.
Figure 20. Confusion matrix of the classification results from the model.
Figure 21. Another case in the test set: (a) ground truth; (b) prediction results.
Figure 22. Image preprocessing to remove unnecessary information: (a) the original RGB image captured by the camera; (b) the result after removing the unnecessary information.
Figure 23. Raw output from the deep learning network.
Figure 24. Result after passing through the filter at a fixed area-threshold percentage criterion: (a) 90%, (b) 95%.
Figure 25. Result after calculating the center points (red points) and deciding the final target (blue point).
Figure 26. Result of mapping from a point in the color image to the point with the same relative position in the depth image.
Figure 27. (a) Final target as the blue point; (b) mapping result as the black point.
Figure 28. Support platform for taking the ground truth of objects in the working area.
Figure 29. Set-up for evaluating the final pickup success rate.
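Figures 8 and 15 describe building a coordinate system on the planar object to form the rotation matrix with respect to the camera. A hedged sketch using a plain SVD plane fit follows; the paper's axis-selection rules (Figure 8 shows two acceptable choices) are not reproduced here, and the function name is illustrative.

```python
import numpy as np

def plane_frame_from_points(points):
    """Fit the dominant plane of masked 3D points with SVD and use the
    principal directions as an object coordinate frame: two in-plane
    axes plus the plane normal, stacked into a rotation matrix.
    """
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - centroid)       # rows: principal directions
    R = np.column_stack([Vt[0], Vt[1], Vt[2]])   # x, y in-plane; z = normal
    if np.linalg.det(R) < 0:                     # enforce a right-handed frame
        R[:, 2] = -R[:, 2]
    return R, centroid

# toy usage: noisy points on a nearly flat patch
pts = np.random.rand(200, 3)
pts[:, 2] *= 0.001
R, c = plane_frame_from_points(pts)
print(np.round(R, 3), np.round(c, 3))
```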
15 pages, 1434 KiB  
Article
Active Learning on Dynamic Clustering for Drift Compensation in an Electronic Nose System
by Tao Liu, Dongqi Li, Jianjun Chen, Yanbing Chen, Tao Yang and Jianhua Cao
Sensors 2019, 19(16), 3601; https://doi.org/10.3390/s19163601 - 19 Aug 2019
Cited by 14 | Viewed by 3717
Abstract
Drift correction is an important concern in electronic noses (E-noses) for maintaining stable performance during continuous work. A large number of reports have been presented for dealing with E-nose drift through machine-learning approaches in the laboratory. In this study, we aim to counter the drift effect in more challenging situations in which the category information (labels) of the drifted samples is difficult or expensive to obtain, so that only a few of the drifted samples can be used for label querying. To solve this problem, we propose an innovative methodology based on Active Learning (AL) that selectively provides sample labels for drift correction. Moreover, we utilize a dynamic clustering process to balance the sample categories for label querying. In the experimental section, we set up two E-nose drift scenarios, a long-term and a short-term scenario, to evaluate the performance of the proposed methodology. The results indicate that the proposed methodology is superior to the other state-of-the-art methods presented. Furthermore, the increasing tendencies of parameter sensitivity and accuracy are analyzed. In addition, the Label Efficiency Index (LEI) is adopted to measure the efficiency and labelling cost of the AL methods. The LEI values indicate that our proposed methodology exhibited better performance than the other presented AL methods in the online drift correction of E-noses. Full article
(This article belongs to the Collection Electronic Noses)
Figure 1. Diagram of active learning.
Figure 2. (a) Effects of traditional active learning; (b) effects of the proposed AL-DC.
Figure 3. Data distributions of (a) Batch 1, (b) Batch 2, (c) Batch 3, (d) Batches 4 and 5, (e) Batch 6, (f) Batch 7, (g) Batch 8, (h) Batch 9, and (i) Batch 10.
Figure 4. Accuracy in Setting 1 of (a) AL-US-type, (b) AL-QBC-type, and (c) AL-ER-type methods; accuracy in Setting 2 of (d) AL-US-type, (e) AL-QBC-type, and (f) AL-ER-type methods.
Figure 5. Accuracy fluctuation on (a) US with ELM, (b) QBC with ELM, (c) ER with ELM, (d) US with SVM, (e) QBC with SVM, and (f) ER with SVM.
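The abstract describes selecting drifted samples for label querying while a clustering process keeps the queried categories balanced. The following is a hedged reconstruction using generic uncertainty sampling plus k-means; it is not the paper's exact dynamic-clustering procedure or its AL-US/QBC/ER variants from Figure 4, and the function name and margin criterion are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def query_labels(X_unlabeled, class_proba, n_queries=6, n_clusters=3):
    """Pick samples to label: cluster the drifted, unlabeled samples and
    take the most uncertain sample (smallest top-two probability margin)
    from each cluster in turn, so queried labels stay category-balanced.
    """
    p = np.sort(class_proba, axis=1)
    margin = p[:, -1] - p[:, -2]          # small margin = high uncertainty
    cluster = KMeans(n_clusters=n_clusters, n_init=10,
                     random_state=0).fit_predict(X_unlabeled)
    picked = []
    for k in range(n_queries):
        for i in np.argsort(margin):      # most uncertain first
            if cluster[i] == k % n_clusters and i not in picked:
                picked.append(int(i))
                break
    return picked

# toy usage: random sensor features and classifier probabilities
X = np.random.rand(50, 8)
proba = np.random.dirichlet(np.ones(4), size=50)
print(query_labels(X, proba))
```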