Sensors, Volume 22, Issue 5 (March-1 2022) – 382 articles

Cover Story: The MoBiMet (Mobile Biometeorology System) is a low-cost device for thermal comfort monitoring. It measures air temperature, humidity, globe temperature, brightness temperature, light intensity, and wind, and it can calculate thermal indices on site. It visualizes its data on an integrated display and sends them to a server, where web-based visualizations are available in real time. Data from many MoBiMets deployed in real occupational settings were used to demonstrate their suitability for large-scale, continuous monitoring of thermal comfort. This article describes the design and performance of the MoBiMet. Alternative methods of determining mean radiant temperature were tested. Networked MoBiMets can detect differences in thermal comfort between workplaces within the same building, and between different companies in the same city.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 3261 KiB  
Article
Sensing Control Parameters of Flute from Microphone Sound Based on Machine Learning from Robotic Performer
by Jin Kuroda and Gou Koutaki
Sensors 2022, 22(5), 2074; https://doi.org/10.3390/s22052074 - 7 Mar 2022
Cited by 3 | Viewed by 3807
Abstract
When learning to play a musical instrument, it is important to improve the quality of self-practice. Many systems have been developed to assist practice. Some practice assistance systems use special sensors (pressure, flow, and motion sensors) to acquire the control parameters of the musical instrument, and provide specific guidance. However, it is difficult to acquire the control parameters of wind instruments (e.g., saxophone or flute), such as the flow and the angle between the player and the instrument, since it is not possible to place sensors into the mouth. In this paper, we propose a sensorless control parameter estimation system based on the recorded sound of a wind instrument using only machine learning. In the machine learning framework, many training samples that have both sound and correct labels are required. Therefore, we generated training samples using a robotic performer. This has two advantages: (1) it is easy to obtain many training samples with exhaustive control parameters, and (2) we can use the given control parameters of the robot as the correct labels. In addition to the samples generated by the robot, some human performance data were also used for training to construct an estimation model that enhanced the feature differences between robot and human performance. Finally, a flute control parameter estimation system was developed, and its estimation accuracy for eight novice flute players was evaluated using Spearman's rank correlation coefficient. The experimental results showed that the proposed system was able to estimate human control parameters with high accuracy.
(This article belongs to the Section Intelligent Sensors)
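The paper's pipeline maps a log-mel spectrogram of a recording to control parameters normalized to [0, 1], and scores the estimates with Spearman's rank correlation. Below is a minimal sketch of that evaluation path; the librosa/scipy calls and every parameter value are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): log-mel features from a recording,
# plus Spearman's rank correlation between estimated and true parameters.
import numpy as np
import librosa
from scipy.stats import spearmanr

def log_mel_spectrogram(wav_path, sr=22050, n_mels=128):
    """Return a dB-scaled log-mel spectrogram image of a recording."""
    y, _ = librosa.load(wav_path, sr=sr)
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(S, ref=np.max)

# Hypothetical model outputs and robot-given labels, both normalized to [0, 1].
estimated    = np.array([0.12, 0.35, 0.40, 0.71, 0.90])
ground_truth = np.array([0.10, 0.30, 0.45, 0.70, 0.95])

rho, p = spearmanr(estimated, ground_truth)
print(f"Spearman's rho = {rho:.3f} (p = {p:.3g})")
```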
Figure 1: Classification of musical instrument practice assistance systems. In this study, we propose a new system that can provide movement teaching without the need to measure control parameters using sensors.
Figure 2: System diagram of the control parameter estimation system. Data were created from a flute-playing robot and human performance. We used the dataset to train a model for control parameter estimation, which was separated for each estimated parameter. The trained models were used to estimate the parameters of human performance. Thus, it is possible to obtain control parameters only from the performance recordings. (a) Dataset creation. (b) Model training. (c) Parameter estimation.
Figure 3: Overall diagram of the flute-playing robot. A servo motor is connected to the flute body, and the angle at which the breath is blown into the instrument can be adjusted by rotating the motor. In addition, the flow rate of the breath into the instrument can be adjusted by adjusting the opening of the flow valve in the air circuit.
Figure 4: A head fixation device was developed for collecting human performance data. (a) Head fixation device. (b) Usage example.
Figure 5: Angle labels for human performance. (a) Label "BACK". (b) Label "FRONT".
Figure 6: Data creation procedure.
Figure 7: Structure of the parameter estimation model. We used the MLP-Mixer as the base of the model. The model estimates a single control parameter from a log-mel spectrogram image generated from a recording. The output from the model was a value between 0 and 1, corresponding to the magnitude of the estimated parameter.
Figure 8: Ideal plot results for each parameter. (a) Ideal plot of flow rate. (b) Ideal plot of angles.
Figure 9: Examples of flow estimation results. (a) MobileNet and human dataset (conventional method). (b) MLP-Mixer and robot dataset (proposed method). (c) MLP-Mixer and robot + human datasets (proposed method).
Figure 10: Examples of angle estimation results. (a) MobileNet and human dataset (conventional method). (b) MLP-Mixer and robot dataset (proposed method). (c) MLP-Mixer and robot + human datasets (proposed method).
Figure 11: Example of failure of angle estimation by the proposed method.
19 pages, 4188 KiB  
Article
A Custom-Made Electronic Dynamometer for Evaluation of Peak Ankle Torque after COVID-19
by Iulia Iovanca Dragoi, Florina Georgeta Popescu, Teodor Petrita, Florin Alexa, Romulus Fabian Tatu, Cosmina Ioana Bondor, Carmen Tatu, Frank L. Bowling, Neil D. Reeves and Mihai Ionac
Sensors 2022, 22(5), 2073; https://doi.org/10.3390/s22052073 - 7 Mar 2022
Cited by 3 | Viewed by 3057
Abstract
The negative effects of SARS-CoV-2 infection on the musculoskeletal system include symptoms of fatigue and sarcopenia. The aim of this study is to assess the impact of COVID-19 on foot muscle strength and to evaluate the reproducibility over time of peak ankle torque measurements made with a custom-made electronic dynamometer. In this observational cohort study, we compare two groups of four participants, one exposed to COVID-19 between measurements and one unexposed. Peak ankle torque was measured using a portable custom-made electronic dynamometer. Ankle plantar flexor and dorsiflexor muscle strength was captured for both feet at different ankle angles prior to and after COVID-19. Average peak torque demonstrated no statistically significant differences between the initial and final moments for either group (p = 0.945). An increase of 4.8% (p = 0.746) was observed in the group with COVID-19 and a decrease of 1.3% (p = 0.953) in the group without COVID-19. Multivariate analysis demonstrated no significant differences between the two groups (p = 0.797). There was very good test–retest reproducibility between the measurements at the initial and final moments (ICC = 0.78, p < 0.001). In conclusion, peak torque variability is similar in the COVID-19 and non-COVID-19 groups, and the custom-made electronic dynamometer is a reproducible method for repeated ankle peak torque measurements.
(This article belongs to the Special Issue Sensors and Technologies in Skeletal Muscle Disorder)
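The abstract's test–retest figure (ICC = 0.78) can be computed on one's own data along the following lines; the long-format layout and the pingouin call are illustrative assumptions (the paper does not name its statistics software), and the torque values are invented.

```python
# Illustrative sketch: test-retest ICC for repeated peak-torque measurements.
# All data values are hypothetical; the paper reports ICC = 0.78 for its cohort.
import pandas as pd
import pingouin as pg

# Long format: one row per (participant, session) peak-torque reading in Nm.
df = pd.DataFrame({
    "participant": ["P1", "P2", "P3", "P4"] * 2,
    "session": ["initial"] * 4 + ["final"] * 4,
    "peak_torque": [52.1, 47.3, 60.8, 55.0,   # initial moment (hypothetical)
                    53.0, 45.9, 62.1, 54.2],  # final moment (hypothetical)
})

icc = pg.intraclass_corr(data=df, targets="participant",
                         raters="session", ratings="peak_torque")
print(icc[["Type", "ICC", "pval"]])  # the ICC2 row reflects test-retest agreement
```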
Figure 1: Block diagram of the components of the measurement system used.
Figure 2: Flowchart for the PicoScope6 parameter selection guide. Detailed graphical user interface and the selected parameters for the oscilloscope configuration: Channel A on, direct current (DC) coupling, input = 2 V, time/div = 100 ms/div (32 s record length), and 10-bit resolution enhancement.
Figure 3: Voltage time-graph representation of six different outcomes: (trace A) an example of a negative displacement of voltage during three consecutive MVIC, while the participant actively dorsiflexes the foot; the pedal's own mass voltage level, the participant's own limb mass voltage level, and belt-fixation-derived voltage levels are added to the voltage derived from the participant's action on the pedal; (trace B) a similar example of a positive displacement of voltage, while the participant voluntarily acts on the plate during three consecutive MVIC of active ankle plantarflexion; the same mass- and belt-derived voltage levels are added to the voltage derived from the participant's action on the pedal; (trace C) offset level with pedal mass, the participant's limb mass, and an un-tightened belt, without any voluntary action by the participant; (trace D) offset level with pedal mass and the participant's limb mass, without the belt and with no voluntary action by the participant; (trace E) offset level with pedal mass, the participant's limb mass, and a belt tightened just above the knee joint, without any voluntary action by the participant; (trace F) pedal offset level without the participant's foot on the plate.
Figure 4: Three time-graphs of recordings, representing three MVIC of 5 s each followed by 5 s of relaxation between contractions: (a) resulting time graph of voltage in V during ankle dorsiflexion; (b) the same ankle dorsiflexion measurement represented as torque in Nm; (c) another example of a valid time graph during ankle plantarflexion, representing torque in Nm.
Figure 5: The participant's position, seated on a chair with the trunk against the chair backrest: (a) fixation of the strap just above knee level, with the long axis of the tibia perpendicular to the ground and the knee joint flexed at 90–110°; (b) strap fixation on the dorsum of the foot just above the metatarsophalangeal (MPJ) joints, for rigid fixation of the foot position on the dynamometer plate. The ankle malleoli axis is directly above the pivotal line marked on the apparatus plate in red.
Figure 6: Examples of improper acquisition due to: (a) participant error; (b) tester command error; (c) an invalid time graph due to a participant's improper discipline/behaviour, mainly induced by pain during testing.
Figure 7: Representation of two overlapped voltage time-graphs selected for one participant during one type of muscle effort. The green line represents dorsiflexion at 0° of pedal inclination at the initial moment; the red line represents dorsiflexion at 0° of pedal inclination at the final moment.
Figure 8: Arithmetic means of peak torque for each participant: (a) group 1, with COVID-19; (b) group 2, without COVID-19.
Figure 9: Arithmetic means of peak torque in group 1 (with COVID-19): (a) by flexion type, with the p-value for the flexion factor in the multivariate analysis; (b) by foot, with the p-value for the foot factor; (c) by angle, with the p-value for the angle factor.
19 pages, 8081 KiB  
Article
Naked-Eye Detection of Morphine by Au@Ag Nanoparticles-Based Colorimetric Chemosensors
by Tahereh Rohani Bastami, Mansour Bayat and Roberto Paolesse
Sensors 2022, 22(5), 2072; https://doi.org/10.3390/s22052072 - 7 Mar 2022
Cited by 23 | Viewed by 3790
Abstract
In this study, we report a novel and facile colorimetric assay based on silver citrate-coated Au@Ag nanoparticles (Au@Ag NPs) as a chemosensor for the naked-eye detection of morphine (MOR). The developed optical sensing approach relied on the aggregation of Au@Ag NPs upon exposure to morphine, which led to an evident color variation from light-yellow to brown. Au@Ag NPs have been prepared by two different protocols, using high- and low-power ultrasonic irradiation. The sonochemical method was essential for the sensing properties of the resulting nanoparticles. This facile sensing method has several advantages including excellent stability, selectivity, prompt detection, and cost-effectiveness.
(This article belongs to the Collection Optical Chemical Sensors: Design and Applications)
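Quantification with a colorimetric probe of this kind typically reduces to a linear calibration curve of optical signal against analyte concentration (cf. Figure 8), with a detection limit estimated as three standard deviations of the blank over the slope. The sketch below shows that generic workflow; the data points, blank standard deviation, and signal definition are all invented, not taken from the paper.

```python
# Illustrative sketch: a colorimetric calibration curve of the kind shown in
# Figure 8. All numbers are hypothetical; real values would come from the
# UV-vis response of Au@Ag NPs at increasing MOR concentrations.
import numpy as np

conc = np.array([0, 10, 20, 40, 60, 80, 100])  # MOR, ug/mL (hypothetical)
signal = np.array([0.02, 0.09, 0.17, 0.33, 0.48, 0.65, 0.79])  # absorbance-based

slope, intercept = np.polyfit(conc, signal, 1)  # linear calibration fit
blank_sd = 0.005                                # std. dev. of blank (assumed)
lod = 3 * blank_sd / slope                      # common 3-sigma LOD estimate

print(f"signal = {slope:.4f} * conc + {intercept:.4f}; LOD ~ {lod:.1f} ug/mL")
```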
Figure 1: UV-vis spectra of the Au@Ag NPs synthesized under different conditions: (a) type of additive, (b) acoustic intensity, and (c) reaction conditions.
Figure 2: XRD patterns of (a) Au@Ag (P) and (b) Au@Ag (B); (c) FT-IR spectra of Au@Ag (P) and Au@Ag (B).
Figure 3: XPS spectra of (a) Au@Ag (P); (b) C 1s spectrum of Au@Ag (P); (c) O 1s spectrum of Au@Ag (P); and (d) Ag 3d spectrum of Au@Ag (P).
Figure 4: TEM images of (a) Au@Ag (B); (b) Au@Ag (B)-MOR; (c) Au@Ag (P); and (d) Au@Ag (P)-MOR.
Figure 5: EDX spectrum and chemical composition of (a) Au@Ag (P); (b) Au@Ag (B).
Figure 6: UV-visible absorption spectra of Au@Ag NPs upon addition of MOR solution at different pH. (Photographic images from left to right: Au@Ag NPs + 0 µg/mL (blank), Au@Ag NPs + 70 µg/mL MOR at different pH; incubation time 5 min.)
Figure 7: Effect of incubation time on the determination of MOR using Au@Ag NPs (conditions: pH = 6.5 ± 0.5, MOR concentration 70 µg/mL).
Figure 8: UV-visible spectra of (a,b) Au@Ag (P) solutions with different concentrations of MOR, calibration graphs for the quantification of MOR using Au@Ag (P) as a colorimetric probe, and photographic images of Au@Ag (P) in the concentration range of 0–100 µg/mL; (c,d) the same for Au@Ag (B).
Figure 9: UV-visible spectra of (a) Au@Ag (P) and (b) Au@Ag (B) NPs solutions with different drugs, urea, cations, and anions, where the concentration of drugs and all other interferents was 100 µg/mL, with related photographic images.
Scheme 1: The synthetic strategy for Au@Ag NPs. Step (1) nucleation of Au° as a seed; Step (2) nucleation and growth of Ag° onto the surface of the gold seeds; Step (3) UV-vis spectrum of Au@Ag NPs with λ_max ≈ 440 nm (in the absence of ultrasonic irradiation (stirrer method), no significant peak was observed).
Scheme 2: Illustration of the detection mechanism of MOR using Au@Ag NPs.
16 pages, 1115 KiB  
Article
Design Guidelines for Sensors Based on Spiral Resonators
by Mahmoud Elgeziry, Filippo Costa and Simone Genovesi
Sensors 2022, 22(5), 2071; https://doi.org/10.3390/s22052071 - 7 Mar 2022
Cited by 11 | Viewed by 2885
Abstract
Wireless microwave sensors provide a practical alternative where traditional contact-based measurement techniques cannot be implemented or suffer from performance deterioration. Resonating elements are commonly used in these sensors, as the sensing concept relies on the resonance properties of the employed structure. This work presents some simple guidelines for designing displacement sensors based on spiral resonator (SR) tags. The working principle of this sensor is the variation of the coupling strength between the SR tag and a probing microstrip loop with the distance between them. The performance of the sensor depends on the main design parameters, such as tag dimensions, filling factor, number of turns, and the size of the probing loop. The guidelines provided herein can be used in the initial phase of the design process by helping to select a preliminary set of parameters according to the desired application requirements. The provided conclusions are supported by electromagnetic simulations and analytical expressions. Finally, a corrected equivalent circuit model that takes into account the resonant frequency shift at small distances is provided. The findings are compared against experimental measurements to verify their validity.
(This article belongs to the Special Issue RFID and Zero-Power Backscatter Sensors)
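Figure 1c models the SR as a series RLC circuit coupled to the probing loop by a mutual inductance M, which implies a probe input impedance Z_in(ω) = jωL_p + (ωM)² / (R + jωL + 1/(jωC)), whose real part peaks near the SR resonance and grows with M (i.e., with decreasing distance d_z). A minimal numeric sketch of this behavior follows; all component values are invented for illustration and are not the paper's.

```python
# Sketch of the transformer-coupled series-RLC model of Figure 1c:
# Z_in = j*w*Lp + (w*M)^2 / Z_SR, with Z_SR = R + j*w*L + 1/(j*w*C).
# All component values are invented, not taken from the paper.
import numpy as np

Lp = 60e-9                     # probing-loop self-inductance (H), assumed
R, L, C = 2.0, 150e-9, 1e-12   # SR series R (ohm), L (H), C (F), assumed

f = np.linspace(100e6, 700e6, 2000)
w = 2 * np.pi * f

for M in (2e-9, 5e-9, 10e-9):  # mutual inductance shrinks as d_z grows
    Z_sr = R + 1j * w * L + 1 / (1j * w * C)
    Z_in = 1j * w * Lp + (w * M) ** 2 / Z_sr
    f_peak = f[np.argmax(Z_in.real)]
    print(f"M = {M * 1e9:4.0f} nH -> Re(Z_in) peaks at {f_peak / 1e6:.0f} MHz, "
          f"max {Z_in.real.max():.1f} ohm")
```

The peak of Re(Z_in) sits near f_0 = 1/(2π√(LC)) (about 411 MHz for the assumed values), and its height scales with M², which is what makes the real input impedance usable as a distance readout.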
Figure 1: SR-based distance sensor presented in [27]. (a) 3D view of the sensor setup. (b) Top view of the SR. (c) Equivalent circuit model of the sensor; the SR is modeled using the series RLC circuit (right side) coupled to the probing loop by the mutual inductance, M.
Figure 2: (a) Real input impedance vs. frequency for the standalone square probing loop with side length 30 mm, 40 mm, and 50 mm (black lines), and the case with an SR in close proximity having L_tag = 50 mm, shown for d_z = 1 mm. (b) Real input impedance vs. frequency for the sensor with w = s = 2 mm, N = 2 turns, and L_p = L_tag = 50 mm, plotted at various values of d_z, showing the dependence on the normal distance.
Figure 3: (a) Sensitivity vs. normal distance for different sizes of the sensor while keeping the side lengths of the probe and the SR equal. (b) Sensitivity vs. normal distance while varying the probe dimensions for a constant side length of the SR, L_tag = 30 mm. The dashed red lines represent a threshold sensitivity of 2 Ω/mm.
Figure 4: Mutual inductance between two coaxial single-turn square-shaped filament coils against the normal distance between them, d_z, for values of r at a constant L_tag = 30 mm, w = s = 2 mm.
Figure 5: Input impedance variation, expressed as a percentage of the maximum real input impedance, as the degree of lateral misalignment changes for an SR with L_tag = 30 mm interrogated by a probe of varying side length.
Figure 6: Effect of the number of turns N on the input impedance at the probe terminals and the resonant frequency of the SR. Results are plotted for L_p = L_tag = 50 mm, w = s = 2 mm, and d_z = 20 mm.
Figure 7: (a) Effect of the trace width w on the input impedance at the probe terminals and the resonant frequency of the SR, plotted for L_p = L_tag = 50 mm, s = 2 mm, N = 2 turns, and d_z = 20 mm. (b) Effect of the trace separation s on the same quantities, plotted for L_p = L_tag = 50 mm, w = 2 mm, N = 2 turns, and d_z = 20 mm.
Figure 8: (a) Real input impedance vs. frequency as d_z varies from 5 to 10 mm for the sensor with L_p = L_tag = 30 mm, w = s = 2 mm, N = 2 turns. (b) Resonant frequency vs. d_z for varying L_p at L_tag = 30 mm, w = s = 2 mm, N = 2 turns.
Figure 9: (a) 3D view of the sensor (dielectric substrates are hidden for better visibility) showing the section plane P where the electric field distribution is plotted for (b) d_z = 2 mm (at 379 MHz), (c) d_z = 5 mm (at 406 MHz), (d) d_z = 10 mm (at 416 MHz).
Figure 10: Flowchart for the proposed additional capacitance model, where f_0(d_z) is the resonant frequency and C_SR(d_z) is the SR total capacitance, both expressed as a function of d_z, and e is the error between the model and simulation results. Δ is the step increment of the fitted variable Ã, and θ is the desired maximum error.
Figure 11: Fitted equivalent strip area (Ã) vs. the size of the probing loop L_p for a constant tag size L_tag = 30 mm.
Figure 12: Comparison between the resonant frequency of the SR obtained from numerical simulations and from the corrected equivalent model. The resonant frequency from the original model is shown by the yellow line for reference. Results are plotted for L_p = L_tag = 30 mm and w = s = 2 mm, while varying d_z.
Figure 13: Proposed improved equivalent circuit model, taking into account the additional capacitive coupling (C_ppc).
Figure 14: (a) Experimental setup showing the minimum value of d_z that can be measured due to the size of the soldered SMA connector. (b) Comparison between the resonant frequency of the SR obtained from numerical simulations, the equivalent circuit model, and experiment. Results are plotted for a sensor prototype with L_p = 22 mm, L_tag = 18 mm, w = 1.5 mm, s = 2 mm, and N = 2 turns while varying d_z.
16 pages, 554 KiB  
Article
On-Axis Optical Bench for Laser Ranging Instruments in Future Gravity Missions
by Yichao Yang, Kohei Yamamoto, Miguel Dovale Álvarez, Daikang Wei, Juan José Esteban Delgado, Vitali Müller, Jianjun Jia and Gerhard Heinzel
Sensors 2022, 22(5), 2070; https://doi.org/10.3390/s22052070 - 7 Mar 2022
Cited by 5 | Viewed by 4168
Abstract
The laser ranging interferometer onboard the Gravity Recovery and Climate Experiment Follow-On mission proved the feasibility of an interferometric sensor for inter-satellite length tracking with sub-nanometer precision, establishing an important milestone for space laser interferometry and the general expectation that future gravity missions will employ heterodyne laser interferometry for satellite-to-satellite ranging. In this paper, we present the design of an on-axis optical bench for next-generation laser ranging which enhances the received optical power and the transmit beam divergence, enabling longer interferometer arms and relaxing the optical power requirement of the laser assembly. All design functionalities and requirements are verified by means of computer simulations. A thermal analysis is carried out to investigate the robustness of the proposed optical bench to the temperature fluctuations found in orbit.
(This article belongs to the Section Optical Sensors)
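The DWS_h and DWS_v signals zeroed by the steering-mirror loops (Figure 7) are differential wavefront sensing signals: linear combinations of the beatnote phases measured on the four quadrants of a quadrant photodiode. A minimal sketch is below; the quadrant layout and sign conventions are assumptions, as these differ between instruments.

```python
# Illustrative sketch: differential wavefront sensing (DWS) signals from the
# per-quadrant beatnote phases of a quadrant photodiode (QPD). The quadrant
# layout (A=top-left, B=top-right, C=bottom-left, D=bottom-right) and the
# sign/scaling conventions are assumptions, not the paper's definitions.
def dws_signals(phi_a, phi_b, phi_c, phi_d):
    """Return (DWS_h, DWS_v) in radians from four quadrant phases."""
    dws_h = 0.5 * ((phi_a + phi_c) - (phi_b + phi_d))  # left minus right
    dws_v = 0.5 * ((phi_a + phi_b) - (phi_c + phi_d))  # top minus bottom
    return dws_h, dws_v

# A small relative tilt between the RX and LO phase fronts produces a phase
# gradient across the QPD; the control loop steers a mirror to null it.
h, v = dws_signals(0.010, -0.008, 0.012, -0.006)
print(f"DWS_h = {h:.4f} rad, DWS_v = {v:.4f} rad")
```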
Figure 1: Design concept of an on-axis optical bench for a next-generation LRI. The receive and transmit beams (RX and TX, respectively) propagate into and out of the optical bench through the same aperture, where their paths coincide.
Figure 2: Optical bench layout. The green arrows depict the path and direction of the local oscillator and transmit beams, which originate from a single beam injected into the bench via a fiber injector. The red arrows depict the path and direction of the receive beam, which couples into the bench via mirror M1. The polarization state of each beam is indicated (RHC—right-hand circular; LHC—left-hand circular). The receive and transmit reference points (RX/TX RP) coincide in the left focal plane of lens L3, along the optical axis between mirrors M3 and M1. The RX/TX RP is imaged into the spacecraft center of mass, where an accelerometer is located. The range measurement is invariant under rotations of the spacecraft around this point. The receive and local oscillator beams interfere at polarizing beamsplitter PBS1, and are captured by the pair of quadrant photodiodes QPD1 and QPD2 in a balanced detection configuration.
Figure 3: Amplitude and phase of the "flat-top" RX beam at the receive aperture and at the surface of QPD1, in the horizontal direction. The dashed lines indicate the boundaries of the aperture and the active area of the photodiode, respectively. The RX beam is modeled in IFOCAD using the mode-expansion method. The amplitude of the local oscillator beam at QPD1 is shown in red, showing good spatial overlap with the RX beam in spite of the much larger peak amplitude.
Figure 4: LRI OB as modeled in IFOCAD and drawn in OPTOCAD. The RX beam starts at the receive aperture (red). The TX and LO beams stem from the beam injected into the OB by the fiber injector (blue). The baseplate assumed in the thermal analysis is drawn as a rectangle enclosing all of the OB components.
Figure 5: Depiction of the local S/C TTL coupling simulation. The RX beam (red) starts at the local RX RP with the expected size after having propagated over the inter-satellite distance, and with a certain rotation angle to simulate S/C angular motion. The RX beam interferes with the LO beam (green) and their beatnote is captured by the photodetectors.
Figure 6: Depiction of the TX beam TTL coupling simulation. The TX beam (green) propagates to the preset distant OB, where it interferes at a single-element photodiode with a Gaussian beam large enough that the distant system acts as a perfect transponder.
Figure 7: Angular motion of the local S/C causes the received beam's incidence angle at the receive aperture to change (the abscissas). A steering mirror is actuated (a,b) via two independent DWS control loops that zero the DWS_h and DWS_v signals in QPD1 (c,d), keeping the RX and LO phase fronts nearly parallel at both detectors. QPD2 is out of loop and measures nearly zero DWS_h and DWS_v (e,f). The TX IS ensures that the TX beam is antiparallel to the RX beam in the inter-S/C path (g,h). The RX IS and LO IS ensure that the RX and LO beams experience minimal beam walk at the detectors (i–l). In the resulting configuration, TTL coupling is minimized, as measured both in the local S/C (m,n) and at the distant S/C (o,p).
Figure 8: Thermal analysis. The temperature is swept over a ±3 K range in seven steps, and the TTL coupling simulation is carried out for each step of temperature and RX beam tilt in the pitch and yaw degrees of freedom. The shaded regions shown in the plots are bounded by the maximum and minimum results for TX pointing error (a,b), RX beam walk (c,d), LO beam walk (e,f), local TTL coupling (g,h), and TX beam TTL coupling (i,j) throughout the temperature range.
41 pages, 1540 KiB  
Review
A Review of Recent Developments in Driver Drowsiness Detection Systems
by Yaman Albadawi, Maen Takruri and Mohammed Awad
Sensors 2022, 22(5), 2069; https://doi.org/10.3390/s22052069 - 7 Mar 2022
Cited by 91 | Viewed by 27650
Abstract
Continuous advancements in computing technology and artificial intelligence in the past decade have led to improvements in driver monitoring systems. Numerous experimental studies have collected real driver drowsiness data and applied various artificial intelligence algorithms and feature combinations with the goal of significantly enhancing the performance of these systems in real time. This paper presents an up-to-date review of the driver drowsiness detection systems implemented over the last decade. The paper illustrates and reviews recent systems using different measures to track and detect drowsiness. Each system falls under one of four possible categories, based on the information used. Each system presented in this paper is accompanied by a detailed description of the features, classification algorithms, and datasets used. In addition, an evaluation of these systems is presented, in terms of the final classification accuracy, sensitivity, and precision. Furthermore, the paper highlights the recent challenges in the area of driver drowsiness detection, discusses the practicality and reliability of each of the four system types, and presents some of the future trends in the field.
(This article belongs to the Section Physical Sensors)
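The review compares systems by final classification accuracy, sensitivity, and precision; for a binary drowsy/alert classifier these reduce to simple confusion-matrix ratios, sketched below with hypothetical counts.

```python
# Illustrative sketch: the evaluation metrics named in this review, computed
# from a binary drowsy/alert confusion matrix. Counts are hypothetical.
tp, fp = 88, 7    # drowsy predicted drowsy / alert predicted drowsy
fn, tn = 12, 93   # drowsy predicted alert  / alert predicted alert

accuracy = (tp + tn) / (tp + tn + fp + fn)   # overall fraction correct
sensitivity = tp / (tp + fn)                 # drowsy episodes actually detected
precision = tp / (tp + fp)                   # fraction of alarms that were real

print(f"accuracy={accuracy:.3f}, sensitivity={sensitivity:.3f}, "
      f"precision={precision:.3f}")
```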
Figure 1: Driver drowsiness detection measures.
Figure 2: Driver drowsiness detection systems data flow.
17 pages, 1846 KiB  
Article
Real-Time Object Detection and Classification by UAV Equipped With SAR
by Krzysztof Gromada, Barbara Siemiątkowska, Wojciech Stecz, Krystian Płochocki and Karol Woźniak
Sensors 2022, 22(5), 2068; https://doi.org/10.3390/s22052068 - 7 Mar 2022
Cited by 15 | Viewed by 6111
Abstract
The article presents real-time object detection and classification methods for unmanned aerial vehicles (UAVs) equipped with a synthetic aperture radar (SAR). Two algorithms have been extensively tested: classic image analysis and convolutional neural networks (YOLOv5). The research resulted in a new method that combines YOLOv5 with post-processing using classic image analysis. It is shown that the new system improves both the classification accuracy and the localization of the identified object. The algorithms were implemented and tested on a mobile platform installed on a military-class UAV as the primary unit for online image analysis. Using low-computational-complexity detection algorithms on SAR scans can reduce the size of the scans sent to the ground control station.
(This article belongs to the Special Issue State-of-the-Art Sensors Technology in Poland 2021-2022)
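Figure 12 below shows the post-processing chain applied to each YOLO detection: Otsu segmentation, morphological closing, Canny edge detection, and a minimum-area bounding box. A compact OpenCV sketch of those stages follows; the file name, kernel size, and Canny thresholds are illustrative assumptions rather than the authors' settings.

```python
# Illustrative sketch of the post-processing stages of Figure 12:
# Otsu thresholding -> morphological closing -> Canny edges -> minimum-area
# bounding box, applied inside a YOLO-detected region of a SAR scan.
import cv2
import numpy as np

roi = cv2.imread("yolo_crop.png", cv2.IMREAD_GRAYSCALE)  # hypothetical YOLO crop

_, mask = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Otsu
kernel = np.ones((5, 5), np.uint8)
closed = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small gaps
edges = cv2.Canny(closed, 100, 200)                       # object outline

contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
(cx, cy), (bw, bh), angle = cv2.minAreaRect(largest)      # refined, rotated box
print(f"center=({cx:.1f},{cy:.1f}) size=({bw:.1f}x{bh:.1f}) angle={angle:.1f} deg")
```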
Figure 1: Graphical representation of the SAR scan's geometrical parameters: R—slant range, D—ground distance, h—flight height (above ground level), a—offset, L_A—synthetic aperture length (required flight distance). The scanned area is marked on the map with a polygon.
Figure 2: Sample metadata of a SAR scan. Pixel resolution is presented.
Figure 3: Extracted parts of the image of an urban area.
Figure 4: Extracted part of the image of a group of tanks.
Figure 5: YOLOv5 architecture [44].
Figure 6: Camera images and corresponding SAR images from the MSTAR database [42].
Figure 7: R curve for networks with different preprocessing setups.
Figure 8: Smoothed metric curves during training on the validation dataset for the 4 described configurations, featuring mean average precision, precision, and recall, as defined in Equations (1)–(3).
Figure 9: Bounding boxes: green rectangles represent the bounding box generated by YOLO; the red one represents the area marked by classical image-processing methods.
Figure 10: The image of the desert: stones recognized as tanks.
Figure 11: The algorithm's data-flow graph.
Figure 12: The stages of post-processing: (a) bounding box of an object detected by the YOLO network; (b) the result of Otsu segmentation; (c) the result of morphological closing; (d) the result of applying the Canny edge detector; (e) minimal-area bounding box.
Figure 13: The lines found in the sub-image—a comparison of the Hough algorithm (a) versus the fast line detector (b).
28 pages, 12652 KiB  
Article
Single-Shot Intrinsic Calibration for Autonomous Driving Applications
by Abraham Monrroy Cano, Jacob Lambert, Masato Edahiro and Shinpei Kato
Sensors 2022, 22(5), 2067; https://doi.org/10.3390/s22052067 - 7 Mar 2022
Cited by 3 | Viewed by 3972
Abstract
In this paper, we present a first-of-its-kind method to determine clear and repeatable guidelines for single-shot camera intrinsic calibration using multiple checkerboards. With the help of a simulator, we found the position and rotation intervals that allow optimal corner detector performance. With these intervals defined, we generated thousands of multiple-checkerboard poses and evaluated them against ground truth values, in order to obtain configurations that lead to accurate camera intrinsic parameters. We used these results to define guidelines for creating multiple-checkerboard setups. We tested and verified the robustness of the guidelines in the simulator, and additionally in the real world with cameras with different focal lengths and distortion profiles, which helps generalize our findings. Finally, we used a 3D LiDAR (Light Detection and Ranging) to project point clouds and confirm the quality of the intrinsic parameters. We found it possible to obtain accurate intrinsic parameters for 3D applications with at least seven checkerboard setups in a single image that follow our positioning guidelines.
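A single-shot calibration in the paper's spirit can be assembled with standard tools by treating each checkerboard in the one image as an independent view of the same camera. The sketch below uses OpenCV for this; the pattern size, square size, file name, and board regions are all hypothetical, and the paper's own detector settings and positioning guidelines are not reproduced here.

```python
# Illustrative sketch: intrinsic calibration from multiple checkerboards in a
# single image, passing each detected board to OpenCV as a separate "view".
import cv2
import numpy as np

PATTERN = (7, 5)   # inner corners per board (assumed)
SQUARE = 0.10      # checker square edge in meters (assumed)

img = cv2.imread("seven_boards.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Rough (x, y, w, h) regions containing each board; hypothetical coordinates.
regions = [(0, 0, 600, 500), (620, 0, 600, 500), (1240, 0, 600, 500),
           (0, 520, 600, 500), (620, 520, 600, 500), (1240, 520, 600, 500),
           (620, 1040, 600, 500)]

# One canonical 3D grid, reused for every board (each board has its own pose).
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for (x, y, w, h) in regions:
    roi = img[y:y + h, x:x + w]
    found, corners = cv2.findChessboardCorners(roi, PATTERN)
    if not found:
        continue
    term = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(roi, corners, (11, 11), (-1, -1), term)
    corners += np.array([x, y], np.float32)  # back to full-image coordinates
    obj_pts.append(objp)
    img_pts.append(corners)

# Each board acts as an independent view of the same camera intrinsics.
rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                         img.shape[::-1], None, None)
print(f"re-projection RMS = {rms:.3f} px\nK =\n{K}")
```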
Figure 1: Our modeled checkerboard simulated in the LGSVL.
Figure 2: Effects of roll (α), pitch (β), and yaw (γ) rotations on the checkerboard corner detector. (a) Effects of roll (α) angle, (b) effects of pitch (β) angle, (c) effects of simultaneous roll (α) and pitch (β), and (d) effects of yaw (γ) angle.
Figure 3: Effects of distance between the checkerboard and the camera on the checkerboard corner detector. (a) Effects of distance, 4 to 51 m interval; (b) effects of pitch (β) angle.
Figure 4: Control points systematically located inside the baseline camera frustum.
Figure 5: Control points as an auxiliary metric. Green marks represent the control points projected with the ground-truth intrinsic parameters, while purple marks represent the projection of the control points using the estimated intrinsic parameters. Both (a,b) have the same subpixel checkerboard corner re-projection error value, and corners in the checkerboard are correctly re-projected in both cases. However, the estimated intrinsic parameters have a large error in (a), correlating with control-point re-projection error.
Figure 6: Quantitative result summary for the simulation experiments with multiple checkerboards.
Figure 7: Multiple-checkerboard verification experiments in a garage. (a) The image composed of the camera stream and the checkerboard overlays. (b) The camera and the screen used during the experiments to help match the checkerboard poses.
Figure 8: Point cloud projection on the Lucid camera with the wide-angle lens, using the intrinsic parameters from the one-shot experiments replicating the δ_1, δ_C, ε_1, and ε_C experiments. Left column: point cloud colored by depth; right column: colored by laser intensity. (a,b) δ_1; (c,d) δ_C; (e,f) ε_1; (g,h) ε_C.
Figure 9: Point cloud projection on the FLIR camera with the wide-angle lens, for the same δ_1, δ_C, ε_1, and ε_C experiments. Left column: colored by depth; right column: colored by laser intensity. (a,b) δ_1; (c,d) δ_C; (e,f) ε_1; (g,h) ε_C.
Figure 10: Point cloud projection on the Lucid camera with the telephoto lens, for the same δ_1, δ_C, ε_1, and ε_C experiments. Left column: colored by depth; right column: colored by laser intensity. (a,b) δ_1; (c,d) δ_C; (e,f) ε_1; (g,h) ε_C.
Figure 11: Real-world results for the multi-checkerboard experiments. The left axis denotes the error in pixels for the sum of both focal lengths (f) and the sum of the center point (c). The right axis denotes the absolute error for the sum of distortion coefficients (k_1, k_2, k_3, p_1, p_2) and the checkerboard corner re-projection error (ε). (a) Absolute error comparison for the Lucid camera with the wide-angle lens, (b) for the FLIR camera with the wide-angle lens, and (c) for the Lucid camera with the telephoto lens.
Figure 12: Third-order radial distortion coefficient effect comparison on the telephoto lens.
18 pages, 4044 KiB  
Article
On-Orbit Absolute Radiometric Calibration and Validation of ZY3-02 Satellite Multispectral Sensor
by Hongzhao Tang, Junfeng Xie, Xinming Tang, Wei Chen and Qi Li
Sensors 2022, 22(5), 2066; https://doi.org/10.3390/s22052066 - 7 Mar 2022
Cited by 11 | Viewed by 2956
Abstract
This study describes the on-orbit vicarious radiometric calibration of the multispectral imager (MUX) on ZY3-02, a Chinese civilian high-resolution stereo mapping satellite. The calibration was based on gray-scale permanent artificial targets and multiple radiometric calibration tarpaulins (tarps), using a reflectance-based approach, between July and September 2016 at the Baotou calibration site in China. The calibration results reveal a good linear relationship between the digital numbers (DN) and top-of-atmosphere (TOA) radiances of ZY3-02 MUX. The uncertainty of this radiometric calibration was 4.33%, indicating that the radiometric coefficients of ZY3-02 MUX are reliable. A detailed validation analysis comparing the different radiometric calibration coefficients is presented in this paper. To further validate the reliability of the three coefficients, the calibrated ZY3-02 MUX was compared with the Landsat-8 Operational Land Imager (OLI). The results also indicate that the radiometric characteristics of ZY3-02 MUX imagery are reliable and highly accurate for quantitative applications. Full article
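The core of the reflectance-based approach is a per-band linear fit between image digital numbers and the TOA radiance predicted by radiative transfer over the reference targets. A minimal sketch of that fit, with purely illustrative numbers (not taken from the paper), might look like this:

```python
import numpy as np

# Hypothetical band data: mean DN over each calibration target in one MUX
# band, and the TOA radiance predicted for it by radiative transfer.
dn = np.array([120.0, 310.0, 545.0, 790.0, 1020.0])
toa_radiance = np.array([18.5, 47.2, 82.9, 120.4, 155.1])  # W m^-2 sr^-1 um^-1

# Least-squares fit of the linear model L = gain * DN + offset.
gain, offset = np.polyfit(dn, toa_radiance, 1)
predicted = gain * dn + offset

# Relative difference per target, as used in the validation step.
rel_diff = (predicted - toa_radiance) / toa_radiance * 100
print(f"gain={gain:.4f}, offset={offset:.3f}, residuals[%]={np.round(rel_diff, 2)}")
```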
(This article belongs to the Collection Remote Sensing Image Processing)
Figure 1: Normalized spectral response function of the ZY3 MUX.
Figure 2: The RadCalNet Baotou site.
Figure 3: The gray-scale permanent artificial targets at the RadCalNet Baotou site.
Figure 4: The arrangement of the calibration tarps at the RadCalNet Baotou site.
Figure 5: Flow chart of the reflectance-based radiometric calibration approach.
Figure 6: Spectral reflectance measurement data of the gray-scale permanent targets.
Figure 7: Spectral reflectance measurement data of the calibration tarps.
Figure 8: Measurement of atmospheric parameters at the AERONET Baotou site.
Figure 9: Calibration result of the gray-scale permanent artificial targets (coefficient A).
Figure 10: Calibration result of the radiometric calibration tarps (coefficient B).
Figure 11: Calibration result of combining the permanent artificial targets and the radiometric calibration tarps (coefficient C).
Figure 12: The desert area at the Baotou calibration site.
Figure 13: The relative difference between the predicted and measured TOA radiance with different coefficients.
Figure 14: The Dunhuang test site in Landsat-8 OLI and ZY3-02 MUX imagery.
Figure 15: Surface reflectance and standard deviation of the Gobi at the Dunhuang test site.
Figure 16: Flow chart of cross-validation with Landsat-8 OLI.
Figure 17: The relative difference between the Landsat-8 OLI TOA radiance after SBAF correction and the measured radiance with different coefficients.
21 pages, 9230 KiB  
Article
Tree Trunk Recognition in Orchard Autonomous Operations under Different Light Conditions Using a Thermal Camera and Faster R-CNN
by Ailian Jiang, Ryozo Noguchi and Tofael Ahamed
Sensors 2022, 22(5), 2065; https://doi.org/10.3390/s22052065 - 7 Mar 2022
Cited by 23 | Viewed by 4738
Abstract
In an orchard automation process, a current challenge is to recognize natural landmarks and tree trunks in order to localize intelligent robots. To overcome low-light conditions and global navigation satellite system (GNSS) signal interruptions under a dense canopy, a thermal camera may be used to recognize tree trunks using a deep learning system. The objective of this study was therefore to use a thermal camera to detect tree trunks at different times of the day under low-light conditions using deep learning, so that robots can navigate. Thermal images were collected from the dense canopies of two types of orchards (conventional and joint training systems) under high-light (12–2 PM), low-light (5–6 PM), and no-light (7–8 PM) conditions in August and September 2021 (summertime) in Japan. Tree trunk detection was confirmed with the thermal camera, which observed average errors of 0.16 m at 5 m, 0.24 m at 15 m, and 0.3 m at 20 m distances under high-, low-, and no-light conditions, respectively, at different orientations of the thermal camera. The thermal imagery datasets were augmented to train, validate, and test the Faster R-CNN deep learning model for tree trunk detection. A total of 12,876 images were used to train the model, 2318 images to validate the training process, and 1288 images to test the model. The mAP of the model was 0.8529 for validation and 0.8378 for testing. The average object detection time was 83 ms for images and 90 ms for videos, with the thermal camera set at 11 FPS. The model was compared with YOLO v3 trained on the same datasets under the same training conditions. In the comparison, Faster R-CNN achieved higher accuracy than YOLO v3 in tree trunk detection using the thermal camera. The results therefore show that Faster R-CNN can be used to recognize objects in thermal images to enable robot navigation in orchards under different lighting conditions. Full article
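As a rough illustration of the detection setup, a single-class Faster R-CNN can be assembled and fine-tuned with torchvision; note that this sketch uses torchvision's ResNet-50-FPN variant rather than the VGG16 backbone described in the paper, and the image and box below are dummies:

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Two classes: background + "trunk". weights=None avoids a download here;
# in practice pretrained weights (weights="DEFAULT") would be loaded.
model = fasterrcnn_resnet50_fpn(weights=None)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# One dummy training step on a fake thermal frame (single channel replicated
# to 3 channels) with one hand-placed trunk box.
model.train()
images = [torch.rand(1, 480, 640).repeat(3, 1, 1)]
targets = [{"boxes": torch.tensor([[290.0, 60.0, 350.0, 430.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)   # classifier, box-regression, and RPN losses
total = sum(loss_dict.values())
total.backward()
print({k: round(float(v), 3) for k, v in loss_dict.items()})
```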
(This article belongs to the Section Smart Agriculture)
Figure 1: Object detection under different lighting conditions at a range of 25 m from the thermal camera with orientations of 0°, −30°, and 30° (green indicates the distance measured by the camera).
Figure 2: Aerial view of the experimental pear orchards: (a) conventionally planted pear orchard, and (b) joint-tree training pear orchard at the Tsukuba-Plant Innovation Research Center (T-PIRC), University of Tsukuba, Japan.
Figure 3: Analysis of thermal images under different lighting conditions: (a,b) no-light conditions (7–8 PM), (c,d) high-light conditions (12–2 PM), and (e,f) low-light conditions (5–6 PM).
Figure 4: Faster R-CNN structure for tree trunk detection used in this research.
Figure 5: Faster R-CNN network structure, focusing on the regional proposal network for the feature map.
Figure 6: VGG16 for the target of tree trunk detection.
Figure 7: Measured distance under different lighting and orientations: (a–c) 0°, −30°, and 30° under high-light conditions; (d–f) 0°, −30°, and 30° under low-light conditions; (g–i) 0°, −30°, and 30° under no-light conditions.
Figure 8: Measurement errors of target objects from a distance of 0 to 25 m under different lighting conditions and orientations of the thermal camera.
Figure 9: Faster R-CNN loss curves: (a) total loss, (b) bounding box loss, (c) classification loss, (d) Regional Proposal Network classification loss, and (e) Regional Proposal Network bounding box loss.
Figure 10: Validation results of Faster R-CNN: (a–f) original images; (g–l) randomly flipped and rotated images.
Figure 11: Precision-recall curve of Faster R-CNN and YOLOv3 validation.
Figure 12: Precision-recall curve of Faster R-CNN and YOLOv3 testing.
Figure 13: Image results of the Faster R-CNN test: (a–d) no-light conditions, (e–h) high-light conditions, and (i–l) low-light conditions.
24 pages, 10865 KiB  
Article
Modular Single-Stage Three-Phase Flyback Differential Inverter for Medium/High-Power Grid Integrated Applications
by Ahmed Ismail M. Ali, Cao Anh Tuan, Takaharu Takeshita, Mahmoud A. Sayed and Zuhair Muhammed Alaas
Sensors 2022, 22(5), 2064; https://doi.org/10.3390/s22052064 - 7 Mar 2022
Cited by 8 | Viewed by 2653
Abstract
This paper proposes a single-stage three-phase modular flyback differential inverter (MFBDI) for medium/high-power solar PV grid-integrated applications. The proposed inverter consists of parallel flyback DC-DC converter modules, with the number of modules set by the required power level. The MFBDI offers many features for renewable energy applications, such as a reduced component count, single-stage power processing, high power density, voltage boosting, an improved footprint, flexibility through modular extension, and galvanic isolation. The proposed inverter has been modelled, designed, and scaled up to the required application rating. A new mathematical model of the proposed MFBDI is presented and analyzed with a time-varying duty cycle, a wide range of frequency variation, and power balancing in order to display its grid current harmonic orders for grid-tied applications. In addition, an LPF-based harmonic compensation strategy is used to compensate the second-order harmonic component (SOHC). With this compensation technique, the grid current THD is reduced from 36% to 4.6% by diminishing the SOHC from 51% to 0.8%; the technique also eliminates third-order harmonic components from the DC input current. Moreover, a 15% parameter mismatch was applied between the parallel flyback modules to confirm the modular operation of the proposed MFBDI under module divergence. SiC MOSFETs are used to implement the inverter switches, which decreases switching losses at high switching frequency. The proposed MFBDI is verified with three parallel flyback modules per phase in PSIM/Simulink software, at a rating of 5 kW, 200 V, and a 50 kHz switching frequency, as well as in an experimental environment. Full article
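To make the LPF-based compensation idea concrete, here is a minimal, hypothetical sketch of the extraction step only: the DC-side module current carries a ripple at twice the grid frequency, a first-order low-pass filter estimates the average, and the difference is the SOHC a duty-cycle loop would cancel. The signal values and filter cutoff are illustrative and not the paper's controller:

```python
import numpy as np

fs, f_grid = 20_000, 50.0            # sample rate and grid frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)

# Hypothetical module current: DC component plus a second-order-harmonic
# (2 * f_grid = 100 Hz) ripple on the DC side of the differential inverter.
i_dc = 10.0 + 2.5 * np.sin(2 * np.pi * 2 * f_grid * t)

# First-order IIR low-pass filter (cutoff well below 100 Hz) tracks the
# average; subtracting it isolates the SOHC for the compensator.
fc = 10.0
alpha = (2 * np.pi * fc / fs) / (1 + 2 * np.pi * fc / fs)
avg = np.empty_like(i_dc)
acc = i_dc[0]
for n, x in enumerate(i_dc):
    acc += alpha * (x - acc)
    avg[n] = acc

sohc = i_dc - avg                    # ripple to be cancelled by the control loop
print(f"estimated DC: {avg[-1]:.2f} A, SOHC amplitude ~ {sohc[-2000:].max():.2f} A")
```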
Figure 1: Circuit configuration of the single-stage three-phase MFBDI.
Figure 2: One phase of the proposed MFBDI, N = 3.
Figure 3: Control waveforms and current/voltage components.
Figure 4: Single-module bi-directional power flow of the proposed MFBDI.
Figure 5: Temporary power transfer of the MFBDI: (a) energy storage in the HFT magnetizing inductance; (b) energy release to supply the load and charge the output capacitor.
Figure 6: Switching waveforms of a single flyback module of the proposed MFBDI.
Figure 7: Flyback converter voltage gain.
Figure 8: Averaged one cycle of the proposed MFBDI.
Figure 9: PCB board of a flyback converter module.
Figure 10: Control block diagram of the proposed MFBDI.
Figure 11: Bode plot of the proposed control scheme of the MFBDI.
Figure 12: Simulation results of the proposed MFBDI at 5 kW: (a) with SOHC, (b) without SOHC.
Figure 13: Simulation results of the proposed MFBDI at 5 kW considering a 15% parameter mismatch: (a) power frequency waveforms; (b) switching frequency waveforms.
Figure 14: Photograph of the experimental system prototype.
Figure 15: Experimental results without SOHC compensation: (a) converter results; (b) grid current FFT harmonic spectrum.
Figure 16: Experimental results with SOHC compensation: (a) converter results; (b) grid current FFT harmonic spectrum.
Figure 17: Grid current harmonic orders vs. IEC 61000-3-2 (Class A).
Figure 18: Efficiency profile of a single flyback module of the proposed MFBDI.
22 pages, 1966 KiB  
Article
Comparison of Empirical Mode Decomposition and Singular Spectrum Analysis for Quick and Robust Detection of Aerodynamic Instabilities in Centrifugal Compressors
by Mateusz Stajuda, David García Cava and Grzegorz Liśkiewicz
Sensors 2022, 22(5), 2063; https://doi.org/10.3390/s22052063 - 7 Mar 2022
Cited by 4 | Viewed by 3040
Abstract
Aerodynamic instabilities in centrifugal compressors are dangerous phenomena that affect machine efficiency and, in severe cases, lead to failure of the compressing system. Quick and robust instability detection during compressor operation is therefore of utmost importance from an economic and safety point of view. A rapid indication of instabilities can be obtained using a pressure signal from the compressor. Detecting aerodynamic instabilities from the pressure signal poses specific challenges, as the signal is often highly contaminated with noise, which can influence the performance of detection methods. The aim of this study is to investigate and compare the performance of two non-linear signal processing methods, Empirical Mode Decomposition (EMD) and Singular Spectrum Analysis (SSA), for aerodynamic instability detection. Two instabilities of different character are considered: a local one (inlet recirculation) and a global one (surge). The comparison focuses on robustness, sensitivity, and pace of detection, crucial parameters for a successful detection method. It is shown that EMD and SSA perform similarly for the analysed machine, despite the different underlying principles of the methods. Both have great potential for instability detection, but tuning their parameters is important for robust detection. Full article
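For readers unfamiliar with SSA, the following minimal sketch shows its three steps (embedding into a trajectory matrix, SVD, and anti-diagonal averaging back into reconstructed components, RCs) together with the mean-energy feature this kind of detection builds on. The test signal is synthetic; the window length L = 50 and signal length N = 10,000 merely echo values explored in the paper:

```python
import numpy as np

def ssa_components(x, L, n_comp=3):
    """Singular Spectrum Analysis: embed the series into an L x K trajectory
    matrix, decompose it by SVD, and rebuild the first n_comp reconstructed
    components (RCs) by anti-diagonal (Hankel) averaging."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    rcs = []
    for i in range(n_comp):
        Xi = s[i] * np.outer(U[:, i], Vt[i])
        flipped = Xi[::-1]  # anti-diagonals of Xi become diagonals of flipped
        rcs.append(np.array([flipped.diagonal(k).mean()
                             for k in range(-L + 1, K)]))
    return np.array(rcs)

# Synthetic stand-in for a pressure signal: slow oscillation buried in noise.
rng = np.random.default_rng(0)
t = np.arange(10_000)
x = 0.5 * np.sin(2 * np.pi * t / 1250) + rng.normal(0.0, 1.0, t.size)

rcs = ssa_components(x, L=50)
mean_energy = (rcs ** 2).mean(axis=1)   # per-RC detection feature
print("mean energy of RC1-RC3:", np.round(mean_energy, 4))
```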
Figure 1: Flowchart of the detection approach.
Figure 2: Centrifugal compressor under investigation; cross-section of the machine and photo of the test rig.
Figure 3: Raw signal and selected components from its decomposition with (a) EMD and (b) SSA at TOA 25% for the p_s-imp1 outlet sensor.
Figure 4: Mean energy Ē of the dynamic pressure signal from the p_s-imp1 sensor before the impeller, for signal portions of length N = 10,000; (left) 8 sifting iterations, (middle) 16 sifting iterations, (right) 32 sifting iterations.
Figure 5: Mean energy Ē (solid lines) and confidence interval (dashed lines) for different method parameters: (a) varying signal length N for SN = 8; (b) varying number of siftings for N = 10,000.
Figure 6: Inlet recirculation detection accuracy for IMFs 6 to 11 with different signal lengths N and numbers of sifting iterations; the colorscale is used only for values above 75% for easier interpretation.
Figure 7: Mean energy Ē of RCs 1 to 3 over different operating conditions for varying window length and signal length N = 10,000.
Figure 8: Mean energy Ē (solid lines) and confidence interval (dashed lines) for different method parameters: (a) varying signal length N for L = 50; (b) varying window length L for N = 10,000.
Figure 9: Inlet recirculation detection for RCs 1 to 3 with different signal lengths N and window lengths L.
Figure 10: Surface plot of energy in the studied range for varying signal lengths: (a) N = 1000, (b) N = 5000, (c) N = 10,000, (d) N = 50,000; the solid line represents the mean energy of IMF 9, marked with a red line on the surface plot, and the dashed lines represent energy confidence intervals.
Figure 11: Surge detection accuracy for IMFs 8 to 16 with different signal lengths N and numbers of siftings.
Figure 12: Mean energy Ē of RCs 1 to 3 for varying window length and signal length N = 10,000.
Figure 13: Surge detection accuracy for RCs 1 to 3 with different signal lengths N and window lengths L.
Figure 14: Time to obtain the components with EMD and SSA for selected parameters; the dotted line represents the time needed to acquire N data points.
10 pages, 1105 KiB  
Article
Deductive Reasoning and Working Memory Skills in Individuals with Blindness
by Eyal Heled, Noa Elul, Maurice Ptito and Daniel-Robert Chebat
Sensors 2022, 22(5), 2062; https://doi.org/10.3390/s22052062 - 7 Mar 2022
Cited by 6 | Viewed by 3862
Abstract
Deductive reasoning and working memory are integral parts of executive functioning and are important skills for blind people in everyday life. Despite their importance, the influence of visual experience on reasoning and working memory skills, as well as on the relationship between them, is unknown. In this study, fifteen participants with congenital blindness (CB), fifteen with late blindness (LB), fifteen sighted blindfolded controls (SbfC), and fifteen sighted participants performed two tasks of deductive reasoning and two of working memory. We found that while the CB and LB participants did not differ in their deductive reasoning abilities, the CB group performed worse than the sighted controls, and the LB group performed better than the SbfC group. Those with CB outperformed all the other groups in both working memory tests. Working memory is associated with deductive reasoning in all three visually deprived groups, but not in the sighted group. These findings suggest that deductive reasoning is not a uniform skill and that it is associated with the onset of visual impairment, the level of reasoning difficulty, and the degree of working memory load. Full article
(This article belongs to the Special Issue Spatial Perception and Navigation in the Absence of Vision)
Figure 1: Group comparisons of the reasoning test scores. Bar graph comparing the performance of congenitally blind, late blind, blindfolded, and sighted controls in the word context test (left) and the deductive reasoning argument task (right). No difference was found between the groups in the word context test. In the deductive reasoning argument task, the sighted controls performed better than the congenitally blind and blindfolded, and the late blind group performed better than the blindfolded group. * p < 0.05.
Figure 2: Group comparisons of the working memory test scores. Bar graph comparing the performance of congenitally blind, late blind, blindfolded, and sighted controls in the letter-number sequencing task (left) and the digit span backwards task (right). The congenitally blind group performed better than all the other groups in both working memory tasks. * p < 0.05.
Figure 3: Correlation of reasoning and working memory tasks in the sighted, blindfolded, congenital, and late blindness groups. Scatter plot showing the correlations between deductive reasoning scores (x) and working memory scores (y) for congenitally blind (red squares), late blind (green lozenges), blindfolded controls (blue circles), and sighted controls (white circles). The correlations between the deductive reasoning argument task and the working memory composite score were significant for all groups except the sighted controls.
21 pages, 2602 KiB  
Article
Data Fusion of Observability Signals for Assisting Orchestration of Distributed Applications
by Ioannis Tzanettis, Christina-Maria Androna, Anastasios Zafeiropoulos, Eleni Fotopoulou and Symeon Papavassiliou
Sensors 2022, 22(5), 2061; https://doi.org/10.3390/s22052061 - 7 Mar 2022
Cited by 10 | Viewed by 3801
Abstract
Nowadays, various frameworks are emerging to support distributed tracing techniques over microservices-based distributed applications. The objective is to improve the observability and management of operational problems of distributed applications, considering bottlenecks in the form of high latencies in the interactions among the deployed microservices. However, such frameworks provide information that is disjoint from the management information usually collected by cloud computing orchestration platforms. Observability needs to be improved by combining such information, so that insights related to performance issues can be easily produced and root cause analyses can be carried out to tackle them. In this paper, we provide a modern observability approach and pilot implementation that tackle data fusion aspects in edge and cloud computing orchestration platforms. We consider the integration of signals made available by various open-source monitoring and observability frameworks, including metrics, logs, and distributed tracing mechanisms. The approach is validated in an experimental orchestration environment based on the deployment and stress testing of a proof-of-concept microservices-based application. Helpful results are produced regarding the identification of the main causes of latency in the various application parts and a better understanding of the application's behavior under different stress conditions. Full article
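As a toy illustration of the kind of fusion described here, trace spans and resource metrics can be joined on time and component before computing correlations; the data frames, component name, and 10 s tolerance below are all hypothetical stand-ins for real Zipkin/metrics exports:

```python
import pandas as pd

# Hypothetical span durations from a tracing backend and per-component CPU
# samples from a metrics store. Column and component names are illustrative.
spans = pd.DataFrame({
    "ts": pd.to_datetime(["2022-03-07 10:00:02", "2022-03-07 10:00:17",
                          "2022-03-07 10:00:31"]),
    "component": ["iot-predictor"] * 3,
    "trace_duration_ms": [120.0, 480.0, 95.0],
})
cpu = pd.DataFrame({
    "ts": pd.to_datetime(["2022-03-07 10:00:00", "2022-03-07 10:00:15",
                          "2022-03-07 10:00:30"]),
    "component": ["iot-predictor"] * 3,
    "cpu_usage": [0.21, 0.78, 0.19],
})

# Align each span with the nearest CPU sample for the same component.
fused = pd.merge_asof(spans.sort_values("ts"), cpu.sort_values("ts"),
                      on="ts", by="component", direction="nearest",
                      tolerance=pd.Timedelta("10s"))
print(fused[["trace_duration_ms", "cpu_usage"]].corr())
```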
(This article belongs to the Special Issue Edge/Fog Computing for Intelligent IoT Applications)
Figure 1: Classification of signals into metrics, logs and traces.
Figure 2: Exemplar usage example.
Figure 3: Distributed trace structure.
Figure 4: Data fusion schema.
Figure 5: Instrumentation architecture.
Figure 6: Distributed IoT application graph.
Figure 7: Trace 1 analysis in Zipkin.
Figure 8: Trace 2 analysis in Zipkin.
Figure 9: Integrated visualization capabilities in the orchestration environment.
Figure 10: Request rate and latency variation under the smooth workload.
Figure 11: Request rate and latency variation under the bursty workload.
Figure 12: CPU usage per application component under the smooth workload.
Figure 13: CPU usage per application component under the bursty workload.
Figure 14: Correlation between trace duration and CPU usage for the IoT Predictor component.
Figure 15: Memory usage of the IoT Collector component under the smooth workload.
Figure 16: Memory usage of the IoT Backend component under the bursty workload.
Figure 17: Correlation between trace duration and memory usage for the IoT Preprocessor component.
24 pages, 15785 KiB  
Article
Hand Measurement System Based on Haptic and Vision Devices towards Post-Stroke Patients
by Katarzyna Koter, Martyna Samowicz, Justyna Redlicka and Igor Zubrycki
Sensors 2022, 22(5), 2060; https://doi.org/10.3390/s22052060 - 7 Mar 2022
Cited by 5 | Viewed by 3104
Abstract
Diagnostics of a hand requires measurements of kinematics and joint limits. The standard tools for this purpose are manual devices such as goniometers, which measure only one joint at a time, making the diagnostics time-consuming. This paper presents a system for automatic measurement and computer presentation of the essential parameters of a hand. The constructed software uses an integrated vision system and a haptic device for measurement, and has a web-based user interface. The system provides a simplified way to obtain hand parameters, such as hand size and the ranges of motion of the wrist and fingers, using a homogeneous-matrix-based notation. The haptic device allows active measurement of the wrist's range of motion as well as additional force measurement. A study was conducted to determine the accuracy and repeatability of the measurements compared to the gold standard. The system's functionality was confirmed on five healthy participants, with results comparable to manual measurements of finger lengths. The study showed that the finger's basic kinematic structure could be measured by the vision system with a mean difference from caliper measurements of 4.5 mm and repeatability with standard deviations up to 0.7 mm. Joint angle limit measurements achieved poorer results, with a mean difference from the goniometer of 23.6°. Force measurements taken by the haptic device showed repeatability with a standard deviation of 0.7 N. The presented system allows unified measurement and collection of the important parameters of a human hand, with therapist-facing interface visualization and control, and has potential use in the precise rehabilitation of post-stroke patients. Full article
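The wrist angles in such a system are derived from direction vectors extracted from the tracking data (cf. Figure 5 below); a minimal sketch of that computation, with hypothetical vectors, could be:

```python
import numpy as np

def wrist_angle_deg(v_entry, v_moved):
    """Unsigned angle between the entry-position vector and the flexed or
    extended wrist vector, in degrees."""
    cosang = np.dot(v_entry, v_moved) / (np.linalg.norm(v_entry) *
                                         np.linalg.norm(v_moved))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical direction vectors taken from hand-tracking data.
v_entry = np.array([0.0, 0.0, 1.0])     # hand in the neutral entry position
v_flex = np.array([0.0, -0.64, 0.77])   # wrist flexed toward the palm
print(f"wrist flexion ~ {wrist_angle_deg(v_entry, v_flex):.1f} deg")
```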
Figure 1: Elements of the measurement system based on haptic and vision devices.
Figure 2: Leap Motion data acquisition and processing scheme.
Figure 3: (a) Joint connection scheme distributed in the ROS system; (b) visualization of bone positions and orientation quaternions of the hand and wrist using the ROS system.
Figure 4: Visualization of the user interface showing the measurement procedure.
Figure 5: Wrist positions with the vectors against which the wrist flexion/extension angle was measured: (A) vector created while the hand is in the entry position, (B) vector created for wrist extension, (C) vector created for wrist flexion.
Figure 6: Measurement station with forearm stabilizer.
Figure 7: Measurement interface for the therapist: (a) introduction panel with choice of the feature section; (b) hand skeleton measurement panel for Leap Motion measurements, where specific hand parameters can be chosen; (c) haptic measurement panel to collect wrist force and range data; (d) patient's panel with data from the entire duration of the therapy.
Figure 8: Interpretation of flexion and extension direction on the example of the right hand.
Figure 9: Comparison of angle measurements by Leap Motion (LM), goniometer (G), and haptic device (H) for wrist range.
Figure 10: Comparison of finger length measurements by Leap Motion and caliper.
Figure 11: Correlation of measurements via Leap Motion and caliper for (a) index finger length and (b) hand length.
Figure 12: Comparison of force measurements via haptic device and dynamometer.
17 pages, 21831 KiB  
Article
Detection and Mosaicing Techniques for Low-Quality Retinal Videos
by José Camara, Bruno Silva, António Gouveia, Ivan Miguel Pires, Paulo Coelho and António Cunha
Sensors 2022, 22(5), 2059; https://doi.org/10.3390/s22052059 - 7 Mar 2022
Viewed by 2638
Abstract
Ideally, screening for eye diseases would be carried out with specialized medical equipment that captures retinal fundus images. However, since this kind of equipment is generally expensive and has low portability, the development of technology and the emergence of smartphones have produced new portable and cheaper screening options, one of them being the D-Eye device. Compared to specialized equipment, this and similar smartphone-attached devices capture retinal video of lower quality and with a smaller field of view, yet with sufficient quality for a medical pre-screening; individuals can then be referred for specialized screening to obtain a medical diagnosis if necessary. Two methods were proposed to extract the relevant region (the retinal zone) from these lower-quality videos. The first is based on classical image processing approaches such as thresholding and the Hough circle transform. The second extracts the retinal location with a neural network, YOLO v4, one of the methods reported in the literature with good object detection performance, and was shown to be the preferred method. From the relevant retina regions, a mosaicing technique was implemented to obtain a single, more informative image with a larger field of view. It was divided into two stages: in the first, the GLAMpoints neural network extracts relevant points, and homography transformations bring the overlapping common regions of the images into the same reference frame; in the second, a smoothing process is applied to the transitions between images. Full article
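A minimal sketch of the classical branch (threshold plus Hough circle transform) with OpenCV is shown below; the synthetic frame and all parameter values are illustrative stand-ins for a real D-Eye frame:

```python
import cv2
import numpy as np

# Synthetic stand-in for a D-Eye video frame: dark field, bright retinal disc.
frame = np.zeros((480, 640, 3), np.uint8)
cv2.circle(frame, (320, 240), 90, (40, 90, 170), -1)

gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
# Otsu's threshold separates the bright retinal zone from the background.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Hough circle transform on the binary mask; parameters are illustrative.
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.5, minDist=200,
                           param1=100, param2=20, minRadius=40, maxRadius=150)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print(f"retinal zone at ({x}, {y}) with radius {r}px")
```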
(This article belongs to the Special Issue Computational Intelligence in Image Analysis)
Figure 1: Image acquired with the D-Eye lens (left) and image from the FIRE public dataset [5] (right).
Figure 2: YOLO bounding box prediction: grid division of the input (left); bounding box prediction (middle); exclusion of low-confidence bounding boxes (right).
Figure 3: Methodology pipeline.
Figure 4: Example of how the D-Eye images were cropped: a D-Eye image (left) and the result after data preparation (right).
Figure 5: Chart of loss from YOLOv4 model training.
Figure 6: Training loss graphic (top) and validation loss graphic (bottom).
Figure 7: Example image where the proposed method was applied (left); result of Otsu's threshold (middle); contour image after adaptive threshold (right).
Figure 8: Image mosaicing flowchart.
Figure 9: Comparison between MAE and IoU metric results: MAE = 103.0 and IoU = 0.45 (left); MAE = 53.0 and IoU = 0.32 (right).
Figure 10: Examples of bounding box visual results of Zengin et al. [6]: successful classification (left); acceptable classification (middle); failed classification (right).
Figure 11: Examples of bounding box visual results of the proposed method: successful classification (left); acceptable classification (middle); failed classification (right).
Figure 12: Examples of bounding box visual results of YOLO v4: successful classification (left); acceptable classification (middle); failed classification (right).
Figure 13: Mosaicing result obtained with the original GLAMpoints model applied to three images of DS2: image registration result (left); image blending result (right).
Figure 14: Mosaicing result obtained with the fine-tuned GLAMpoints model applied to the same three images of DS2.
Figure 15: Comparison between mosaicing of the DS1 cropped images using the original model (left) and the fine-tuned model (right).
21 pages, 17934 KiB  
Article
Probabilistic Maritime Trajectory Prediction in Complex Scenarios Using Deep Learning
by Kristian Aalling Sørensen, Peder Heiselberg and Henning Heiselberg
Sensors 2022, 22(5), 2058; https://doi.org/10.3390/s22052058 - 7 Mar 2022
Cited by 37 | Viewed by 4540
Abstract
Maritime activity is expected to increase, and with it the need for maritime surveillance and safety. Most ships are obligated to identify themselves with a transponder system like the Automatic Identification System (AIS); ships that do not, intentionally or unintentionally, are referred to as dark ships and must be observed by other means. Knowing the future location of ships can help not only with ship-to-ship collision avoidance, but also with determining the identity of dark ships found in, e.g., satellite images. However, predicting the future location of ships is inherently probabilistic, and the variety of possible routes is almost limitless. We therefore introduce a Bidirectional Long Short-Term Memory Mixture Density Network (BLSTM-MDN) deep learning model capable of characterising the underlying distribution of ship trajectories. It is consequently possible to predict a probabilistic future location as opposed to a deterministic one. AIS data from 3631 different cargo ships were acquired from a region west of Norway spanning 320,000 km². Our BLSTM-MDN model characterizes the conditional probability of the target given an input trajectory using an 11-dimensional Gaussian distribution; by inferring a single target from the distribution, we can predict several probable trajectories from the same input trajectory, with a test negative log-likelihood loss of 9.96, corresponding to a mean distance error of 2.53 km 50 min into the future. We compare our model to both a standard BLSTM and a state-of-the-art multi-headed self-attention BLSTM model; the BLSTM-MDN performs similarly to the two deterministic deep learning models on straight trajectories but produces better results in complex scenarios. Full article
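For intuition, the heart of an MDN head is a mixture log-likelihood loss plus sampling to obtain multiple plausible futures. A minimal PyTorch sketch follows; the 11-dimensional target echoes the abstract, but the batch size, the choice of four components, and the random tensors are placeholders for real network outputs:

```python
import torch
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

def mdn_nll(pi_logits, mu, sigma, y):
    """Negative log-likelihood of targets y under a Gaussian mixture whose
    parameters come from the network's MDN head (diagonal covariance)."""
    mix = MixtureSameFamily(
        Categorical(logits=pi_logits),       # mixture weights per sample
        Independent(Normal(mu, sigma), 1),   # one diagonal Gaussian per component
    )
    return -mix.log_prob(y).mean()

# Placeholder shapes: batch of 32, 4 components (an assumption), 11-D target.
batch, n_comp, dim = 32, 4, 11
pi_logits = torch.randn(batch, n_comp, requires_grad=True)
mu = torch.randn(batch, n_comp, dim, requires_grad=True)
sigma = torch.rand(batch, n_comp, dim) + 0.1
y = torch.randn(batch, dim)

loss = mdn_nll(pi_logits, mu, sigma, y)
loss.backward()

# Sampling several plausible futures from the same predictive distribution.
mix = MixtureSameFamily(Categorical(logits=pi_logits.detach()),
                        Independent(Normal(mu.detach(), sigma), 1))
futures = mix.sample((10,))   # 10 sampled targets per batch element
print(float(loss), futures.shape)
```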
(This article belongs to the Special Issue Remote Sensing in Vessel Detection and Navigation: Edition Ⅱ)
Figure 1: Generating samples and targets from Automatic Identification System (AIS) data using the sliding-window approach, shown here for a single sub-trajectory x̃(j) of length T_j. Samples x̃(j),1 … x̃(j),T_j−a of size b × f and targets of size a × f are made. With the sliding-window approach, time t₀ shifts with the window.
Figure 2: Proposed deep learning model, with the Bidirectional Long Short-Term Memory (BLSTM) model parameterising the temporal dependency and the Mixture Density Network (MDN) model describing the underlying distribution. The input to the BLSTM layers, x̃(l), are batches of l same-size samples as shown in Figure 1. Six BLSTM layers are used, each followed by a dropout layer and batch normalisation. Three fully connected (FC) layers parameterise the features of the probability density function (PDF). The PDF is then used to predict the target for each sample in the batch, ŷ(l).
Figure 3: Heatmap of pre-processed cargo ship AIS data near Norway, with high density corresponding to red colours. The red lines correspond to the most used sea lanes; the red dots northeast of Norway correspond to, e.g., oil platforms.
Figure 4: Training and validation negative log-likelihood (NLL) loss for the model, shown as grey circles and black stars, respectively. The red squares (from the validation set) mark where the model starts to overfit the data. The final model is taken from epoch 54.
Figure 5: Predictions made with the BLSTM-MDN model. (Left) A simple cargo vessel trajectory with input samples shown in black, the true future trajectory in green, and the predicted trajectory in orange; harbours are illustrated as red polygons, with the harbour of Mandal being the leftmost polygon. The model can satisfactorily predict simple trajectories. (Right) 10 different sampled predictions 36 time steps into the future (3 h), shown in shades of orange; the predictions mostly follow the true track shown in green, with one future trajectory heading to the harbour of Mandal.
Figure 6: 10 predicted trajectories 36 time steps into the future (3 h) in a complex scenario. The model occasionally predicts the wrong trajectory, and in rare cases the future trajectory is predicted to go through, e.g., an island. Two trajectories are predicted to sail northward; the rest are predicted to sail to larger harbours than Tau.
Figure 7: Model predictions with the input sample illustrated in black, the true trajectory in green, and the predicted samples in shades of orange. Nearby harbours are shown as red polygons. The predicted trajectories avoid land and follow different probable routes.
Figure 8: Qualitative comparison of the predictions from the BLSTM-MDN model (orange shaded points) and the BLSTM-Attention model (blue points). Even though the BLSTM-Attention model had a lower mean distance error than the BLSTM-MDN model, the BLSTM-MDN model captures the different possible outcomes better.
Figure A1: Deep learning model used to estimate the number of mixtures.
Figure A2: BLSTM model: the same model as the proposed MDN model, without the added MDN layer. The RMSE loss function was used.
Figure A3: BLSTM with multi-headed self-attention model. The RMSE loss function was used.
Figure A4: A recurrent neuron (left) unrolled through time (right).
Figure A5: A deep RNN unrolled through time. The output of layer 1 is the input to layer 2. At each time step, each recurrent neuron receives the input x_t.
Figure A6: Illustration of the Long Short-Term Memory cell. Here, f_t is the forget gate, i_t the input gate, g_t the gate gate, and o_t the output gate, each with input weights U, hidden-state weights W, and biases b. ⊗ is the multiplication operator and ⊕ the addition operator.
27 pages, 2913 KiB  
Article
Multi-Unit Serial Polynomial Multiplier to Accelerate NTRU-Based Cryptographic Schemes in IoT Embedded Systems
by Santiago Sánchez-Solano, Eros Camacho-Ruiz, Macarena C. Martínez-Rodríguez and Piedad Brox
Sensors 2022, 22(5), 2057; https://doi.org/10.3390/s22052057 - 7 Mar 2022
Cited by 7 | Viewed by 3422
Abstract
Concern for the security of embedded systems that implement IoT devices has become a crucial issue, as these devices today support an increasing number of applications and services that store and exchange information whose integrity, privacy, and authenticity must be adequately guaranteed. Modern lattice-based cryptographic schemes have proven to be a good alternative, both to face the security threats that arise as a consequence of the development of quantum computing and to allow efficient implementations of cryptographic primitives in resource-limited embedded systems, such as those used in consumer and industrial applications of the IoT. This article describes the hardware implementation of parameterized multi-unit serial polynomial multipliers to speed up time-consuming operations in NTRU-based cryptographic schemes. The flexibility in selecting the design parameters and the interconnection protocol with a general-purpose processor allows them to be applied both to the standardized variants of NTRU and to the new proposals being considered in the post-quantum contest currently held by the National Institute of Standards and Technology, as well as to obtain an adequate cost/performance/security-level trade-off for a target application. The designs are provided as AXI4-bus-compliant intellectual property modules that can be easily incorporated into embedded systems developed with the Vivado design tools. The work provides an extensive set of implementation and characterization results on devices of the Xilinx Zynq-7000 and Zynq UltraScale+ families for the different sets of parameters defined in the NTRUEncrypt standard. It also details their plug-and-play inclusion as hardware accelerators in the C implementation of this public-key encryption scheme in the LibNTRU library, showing that acceleration factors of up to 3.1 are achieved compared to pure software implementations running on the processing systems included in the programmable devices. Full article
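The time-consuming operation these multipliers accelerate is the cyclic convolution of truncated polynomials in Z_q[x]/(x^N − 1). A toy serial (schoolbook) version, with deliberately tiny, non-standard parameters, looks like this:

```python
def convmul(a, b, q):
    """Serial schoolbook multiplication in Z_q[x]/(x^N - 1): the cyclic
    convolution that the hardware multipliers accelerate (toy version)."""
    N = len(a)
    c = [0] * N
    for i in range(N):          # each (i, j) pair is one arithmetic-unit step
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % q
    return c

# Tiny illustrative parameters (not a standardized NTRU set): N = 7, q = 32.
a = [1, 0, -1, 1, 0, 0, 1]      # small ternary polynomial, reduced mod q
b = [3, 14, 27, 5, 20, 9, 11]   # public-key-like polynomial with coefficients mod q
print(convmul(a, b, 32))
```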
(This article belongs to the Special Issue Advances in Cybersecurity for the Internet of Things)
Figure 1: Evaluation of each term of the summation (a) and generation of indices i, j, and k (b) while sweeping the matrix by rows.
Figure 2: Evaluation of each term of the summation (a) and generation of indices i, j, and k (b) while sweeping the matrix by columns.
Figure 3: Simultaneous evaluation of two terms of the summation (a) and generation of indices i, j, and k (b) when the convolution matrix is swept by rows (colors indicate the terms evaluated in parallel).
Figure 4: Simultaneous evaluation of two terms of the summation (a) and generation of indices i, j, and k (b) when the convolution matrix is swept by columns (colors indicate the terms evaluated in parallel).
Figure 5: Block diagram for the serial implementation of the polynomial multiplier required in NTRU.
Figure 6: (a) Pseudocode used to generate the indices for multiple arithmetic units. (b) Memory block used to store the coefficients of the polynomial r(x).
Figure 7: Basic memory blocks used to store the coefficients of polynomials h(x) (a) and e(x) (b).
Figure 8: Memory structures used to store the coefficients of polynomials h(x) (a) and e(x) (b).
Figure 9: Block diagram of the arithmetic unit (a) and grouping of M arithmetic units to calculate M terms of the multiplier operation in parallel (b).
Figure 10: AXI4-Lite IP module input registers: (a) control register; (b) address register; (c) data input register.
Figure 11: AXI4-Lite IP module output registers: (a) data output register; (b) end-of-operation register.
Figure 12: Inclusion of input and output interfaces for connecting the multiplier IP module through AXI4-Stream buses.
Figure 13: LUTs (a) and block RAMs (b) required to implement the MS2XL and MS2XS hardware accelerators on Zynq-7000 and Zynq UltraScale+ devices for the parameter set EES541EP1 and different values of M.
Figure 14: Comparison of LUTs required to implement the different IEEE Std 1363.1 parameter sets for (a) MS2XL and MS2XS IPs with M = 8 on Zynq-7000 devices, and (b) MS2XS IPs with M = 6 on Zynq-7000 and Zynq UltraScale+ devices.
Figure 15: LUTs (a) and BRAMs (b) required to implement embedded systems incorporating MS2XL and MS2XS multipliers on Zynq-7000 and Zynq UltraScale+ devices for the parameter set EES541EP1 and different values of M.
Figure 16: LUTs required to implement embedded systems incorporating MS2XL-M8 (a) and MS2XS (b) multipliers on Zynq-7000 and Zynq UltraScale+ devices for the different IEEE Std 1363.1 parameter sets.
Figure 17: Clock cycles required to complete a polynomial multiplication for embedded systems incorporating MS2XS (a) and MS2XL (b) IPs, implementing the EES541EP1 parameter set with different values of M on a Zynq-7000 device.
Figure 18: Clock cycles required to complete a polynomial multiplication using the different IEEE Std 1363.1 parameter sets for embedded systems incorporating (a) MS2XS IPs with M = 6 implemented on a Zynq UltraScale+ device, and (b) MS2XL IPs with M = 8 implemented on a Zynq-7000 device.
Figure 19: Comparison of multiplier operating times vs. M for (a) MS2XL-based test systems implementing the parameter set EES541EP1 running at two clock frequencies on the Pynq-Z2 and Ultra-96 boards, and (b) MS2XS-based test systems implementing the same parameter set on the Ultra-96 board running at different clock frequencies.
Figure 20: Evolution versus the number of AUs of the time invested (a) and the acceleration factor reached (b) in the encryption operation using the parameter set EES541EP1, for SW and HW/SW embedded systems implemented on the Pynq-Z2 board (time in μs).
Figure 21: Time invested (a) and acceleration factors reached (b) in the encryption operation using the parameter sets defined in IEEE Std 1363.1, for SW and HW/SW embedded systems implemented on the Pynq-Z2 board (time in μs).
Figure 22: Evolution versus the number of arithmetic units of the time invested (a) and the acceleration factor reached (b) in the encryption operation using the parameter set EES541EP1, for SW and HW/SW embedded systems implemented on the Ultra-96 board using AXI4-Stream IPs (time in μs).
Figure 23: Evolution versus the number of arithmetic units of the time invested (a) and the acceleration factor reached (b) in the encryption operation using the parameter sets defined in IEEE Std 1363.1, for SW and HW/SW embedded systems implemented on the Ultra-96 board using AXI4-Stream IPs (time in μs).
14 pages, 4053 KiB  
Article
Highly Sensitive and Selective Detection of Hydrogen Using Pd-Coated SnO2 Nanorod Arrays for Breath-Analyzer Applications
by Hwaebong Jung, Junho Hwang, Yong-Sahm Choe, Hyun-Sook Lee and Wooyoung Lee
Sensors 2022, 22(5), 2056; https://doi.org/10.3390/s22052056 - 7 Mar 2022
Cited by 7 | Viewed by 3543
Abstract
We report a breath hydrogen analyzer based on a Pd-coated SnO2 nanorod (Pd-SnO2 NR) sensor integrated into a miniaturized gas chromatography (GC) column. The device can measure a wide range of hydrogen concentrations (1–100 ppm) within 100 s using a small volume of [...] Read more.
We report a breath hydrogen analyzer based on a Pd-coated SnO2 nanorod (Pd-SnO2 NR) sensor integrated into a miniaturized gas chromatography (GC) column. The device can measure a wide range of hydrogen concentrations (1–100 ppm) within 100 s using a small volume of human breath (1 mL) without pre-concentration. In particular, the mini-GC integrated with the Pd-SnO2 NRs can detect H2 down to a lower detection limit of 1 ppm at a low operating temperature of 152 °C. Furthermore, when the breath hydrogen analyzer was exposed to a mixture of interfering gases, such as carbon dioxide, nitrogen, methane, and acetone, it selectively detected only H2. We found that the Pd-SnO2 NRs were superior to other semiconducting metal oxides, which lack selectivity in H2 detection. Our study reveals that the Pd-SnO2 NRs integrated into the mini-GC device can be utilized in breath hydrogen analyzers to rapidly and accurately detect hydrogen owing to their high selectivity and sensitivity. Full article
(This article belongs to the Section Biosensors)
Figure 1. Schematic illustration of the hydrogen analyzer using a miniaturized gas chromatography (mini-GC) column.
Figure 2. Schematic image of the sensing reaction mechanism of the Pd-coated SnO2 NR arrays in air and in hydrogen.
Figure 3. (a) Top-view SEM image of the Pd-coated SnO2 NR arrays; (b) cross-sectional SEM image of the Pd-coated SnO2 NR arrays; (c) magnified view of (a); (d) SEM image depicting the analyzed region and the EDS elemental color mapping images for Al, Sn, and Pd.
Figure 4. Sensing properties of the Pd-coated SnO2 NR arrays in the mini-GC: (a) sensing response (Δ sensor signal) to 100 ppm hydrogen as a function of operating temperature; (b) sensor signals of the Pd-coated SnO2 NR arrays for various H2 concentrations (1–100 ppm) at 152 °C (inset: low H2 concentrations, 1–10 ppm); (c) sensing response as a function of H2 concentration at 152 °C.
Figure 5. (a) Sensor signals for H2, CO2, and CH3COCH3 in the mini-GC integrated with ZnO nanoparticles (inset: low H2 concentrations, 50–1000 ppm); (b) sensor signals for air, H2, N2, CO2, CH4, and CH3COCH3 in the mini-GC integrated with the Pd-coated SnO2 NR arrays.
Figure 6. Sensor signals of the Pd-coated SnO2 NR arrays for various H2 concentrations (5–100 ppm) at 90% RH and 152 °C; (inset) sensing response (Δ sensor signal) with increasing H2 concentration in dry and 90% RH air.
Figure 7. Real-time hydrogen gas sensing test with the manufactured hydrogen gas analyzer, consisting of the Pd-coated SnO2 NR sensor and the mini-GC, using 10 ppm of standard hydrogen gas. (a) Placing the mouthpiece in the mouth and pressing the start button; (b) injection of the 10 ppm hydrogen standard test gas; (c) gas sampling for 20 s; (d) start of the gas analysis; (e) after 90 s of gas analysis; (f) display of the gas analysis result.
Figure 8. Real-time hydrogen gas sensing test with the manufactured hydrogen gas analyzer, consisting of the Pd-coated SnO2 NR sensor and the mini-GC, using the tester's exhaled breath. (a) Pressing the start button; (b) blowing the exhaled breath for 12 s; (c) gas sampling for 20 s; (d) start of the gas analysis; (e) after 90 s of gas analysis; (f) display of the gas analysis result.
13 pages, 3523 KiB  
Article
Investigation of Red Blood Cells by Atomic Force Microscopy
by Viktoria Sergunova, Stanislav Leesment, Aleksandr Kozlov, Vladimir Inozemtsev, Polina Platitsina, Snezhanna Lyapunova, Alexander Onufrievich, Vyacheslav Polyakov and Ekaterina Sherstyukova
Sensors 2022, 22(5), 2055; https://doi.org/10.3390/s22052055 - 7 Mar 2022
Cited by 21 | Viewed by 5243
Abstract
Currently, much research is devoted to the study of biological objects using atomic force microscopy (AFM). This method's resolution is superior to that of other non-scanning techniques. Our study aims to further emphasize some of the advantages of using AFM as a clinical screening [...] Read more.
Currently, much research is devoted to the study of biological objects using atomic force microscopy (AFM). This method's resolution is superior to that of other non-scanning techniques. Our study aims to further emphasize some of the advantages of using AFM as a clinical screening tool. The study focused on red blood cells exposed to various physical and chemical factors, namely hemin, zinc ions, and long-term storage. AFM was used to investigate the morphological, nanostructural, cytoskeletal, and mechanical properties of red blood cells (RBCs). Based on the experimental data, a set of important biomarkers determining the status of blood cells was identified. Full article
(This article belongs to the Special Issue Medical and Biomedical Sensing and Imaging)
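The step from the force curves in Figure 1c to the Young's modulus values in Figure 2f is typically a contact-model fit. Below is a minimal Python sketch using the Hertz model for a conical indenter; the tip half-angle and Poisson ratio are illustrative assumptions, not values taken from the paper.

import numpy as np

def young_modulus_hertz_cone(force, indentation, half_angle_deg=18.0, poisson=0.5):
    """Estimate Young's modulus (Pa) from an AFM force-indentation curve
    using the Hertz model for a conical tip:
        F = (2/pi) * tan(alpha) * E / (1 - nu**2) * delta**2.
    The half-angle and Poisson ratio are illustrative assumptions."""
    F = np.asarray(force, dtype=float)        # force, N
    d = np.asarray(indentation, dtype=float)  # indentation depth, m
    k = np.sum(F * d**2) / np.sum(d**4)       # least-squares fit of F = k * d^2
    return k * np.pi * (1.0 - poisson**2) / (2.0 * np.tan(np.radians(half_angle_deg)))

# Self-check on a synthetic curve generated with E = 5 kPa.
E_true, nu, alpha = 5e3, 0.5, np.radians(18.0)
delta = np.linspace(0.0, 500e-9, 50)
F = (2.0 / np.pi) * np.tan(alpha) * E_true / (1.0 - nu**2) * delta**2
print(young_modulus_hertz_cone(F, delta))  # ~5000 Pa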
Figure 1. The principles of atomic force microscopy operation. (a) Block diagram of the AFM. (b) Measurement of blood cell membrane stiffness by AFM. I: the scanner is brought to the cell membrane; the laser spot is located at the center of the photodiode. II: the probe indents the cell membrane; the laser spot moves up relative to the center (located in the upper sections). III: the probe is detached from the cell membrane; the laser spot moves down relative to the center (located in the lower sections). IV: the piezo scanner is taken away from the cell; the laser spot is located at the center of the photodiode. (c) Empirical force curve; the forward motion is black, the reverse motion is red.
Figure 2. AFM images of red blood cell (RBC) morphology (10 × 10 µm²) in air and their profiles, respectively: (a) control; (b) RBC after exposure to hemin, with a 3D AFM image of a 2000 × 2400 nm² area showing granular structures after exposure to hemin and the domain structure profile; (c) RBC after exposure to zinc ions; (d) RBC on day 31 of storage; (e) force curves of RBCs in liquid on exposure to different agents, each curve averaged over all curves for a specific condition; (f) change in Young's modulus depending on pathogenic factors. Data are presented as box plots; one-way ANOVA followed by post hoc Tukey test: ** p < 0.01, *** p < 0.001 compared to control.
Figure 3. AFM images of the cytoskeleton: (a) monolayer of ghost control cells (35 × 35 μm²); (b) single RBC ghost (10 × 10 μm²); (c) cytoskeleton section (1.8 × 1.6 μm²); (d) 3D AFM image of a section of the RBC cytoskeleton (0.8 × 0.7 μm²); (e) model of the RBC cytoskeleton.
Figure 4. 2D AFM images of ghost cells: (a) day 4 of storage; (b) day 31 of storage; (c) after exposure to hemin. 3D AFM images of the cytoskeleton fragment: (d) day 4 of storage; (e) day 31 of storage; (f) after exposure to hemin. Histograms of the average size of ghost fragments: (g) day 4 of storage; (h) day 31 of storage; (i) after exposure to hemin.
17 pages, 2497 KiB  
Article
Influence of Engine Electronic Management Fault Simulation on Vehicle Operation
by Branislav Šarkan, Michal Loman, František Synák, Michal Richtář and Mirosław Gidlewski
Sensors 2022, 22(5), 2054; https://doi.org/10.3390/s22052054 - 7 Mar 2022
Cited by 4 | Viewed by 3430
Abstract
The preparation of the fuel mixture of a conventional internal combustion engine is currently controlled exclusively electronically. In order for the electrical management of an internal combustion engine to function properly, it is necessary that all its electronic components work flawlessly and fulfill [...] Read more.
The preparation of the fuel mixture of a conventional internal combustion engine is currently controlled exclusively electronically. In order for the electrical management of an internal combustion engine to function properly, it is necessary that all its electronic components work flawlessly and fulfill their role. Failure of these electronic components can cause incorrect fuel mixture preparation and can also compromise driving safety; such failures negatively affect road safety and other road users. The task of the research is to investigate the effect of the failure of electronic engine components on selected operating characteristics of a vehicle. The purpose of this article is to specify the extent to which a failure of an electronic engine component may affect the operation of a road vehicle. Eight failures of electronic systems (sensors and actuators) were simulated on a specific vehicle with a petrol internal combustion engine. Measurements were performed in laboratory conditions to quantify the change in the operating characteristics of the vehicle between the faulty and fault-free states. The vehicle performance parameters and the production of selected exhaust emission components were determined for the selected vehicle operating characteristics. The results show that in the normal operation of vehicles there are situations where a failure in the electronic system of the engine has a significant impact on its operating characteristics and, at the same time, some of these failures are not identifiable by the vehicle operator. The findings can be used in drafting legislation, in the production and operation of road vehicles, and in the mathematical modeling of the production of gaseous emissions by road transport. Full article
(This article belongs to the Special Issue Sensors and Systems for Automotive and Road Safety)
Figure 1. Research progress diagram (author).
Figure 2. Course of the power decrease in individual measurements (author).
Figure 3. Influence of failure of selected components on vehicle performance (author).
Figure 4. Change in engine power due to electronic component failure (author).
Figure 5. Graphical course of the monitored emissions (author).
13 pages, 6302 KiB  
Article
High-Accuracy Event Classification of Distributed Optical Fiber Vibration Sensing Based on Time–Space Analysis
by Zhao Ge, Hao Wu, Can Zhao and Ming Tang
Sensors 2022, 22(5), 2053; https://doi.org/10.3390/s22052053 - 7 Mar 2022
Cited by 12 | Viewed by 3109
Abstract
Distributed optical fiber vibration sensing (DVS) can measure vibration information along an optical fiber. Accurate classification of vibration events is a key issue in practical applications of DVS. In this paper, we propose a convolutional neural network (CNN) to analyze DVS data [...] Read more.
Distributed optical fiber vibration sensing (DVS) can measure vibration information along an optical fiber. Accurate classification of vibration events is a key issue in practical applications of DVS. In this paper, we propose a convolutional neural network (CNN) to analyze DVS data and achieve high-accuracy event recognition. We conducted experiments outdoors and collected more than 10,000 sets of vibration data. Through training, the CNN learned the features of the raw DVS data and achieved accurate classification of multiple vibration events. The recognition accuracy reached 99.9% on the time–space data, higher than that obtained with time-domain, frequency-domain, and time–frequency domain data. Moreover, considering that the performance of the DVS and the testing environment change over time, we repeated the experiment after one week to verify the method's generalization performance. The classification accuracy using the previously trained CNN was 99.2%, which is of great value in practical applications. Full article
(This article belongs to the Special Issue Distributed Optical Fiber Sensors: Applications and Technology)
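For readers wanting a concrete picture of the classifier described above, the following is a minimal PyTorch sketch of a 2D CNN with residual blocks operating on time–space patches; the channel count, block depth, input size, and three-class output are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block: two 3x3 convolutions with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + x)

class TimeSpaceCNN(nn.Module):
    """Classifies single-channel time-space DVS patches into n_classes events."""
    def __init__(self, n_classes=3, ch=16, n_blocks=3):
        super().__init__()
        layers = [nn.Conv2d(1, ch, 3, padding=1), nn.ReLU()]
        for _ in range(n_blocks):
            layers += [ResBlock(ch), nn.MaxPool2d(2)]
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(ch, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

model = TimeSpaceCNN()
logits = model(torch.randn(8, 1, 256, 64))  # a batch of 8 time-space patches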
Figure 1. Schematic diagram of the DVS system. SOA: semiconductor optical amplifier; AWG: arbitrary waveform generator; EDFA: erbium-doped fiber amplifier; FUT: fiber under test; APD: avalanche photodetector; DAQ: data acquisition card.
Figure 2. Field tests for collecting vibration signals: (a) hammer, (b) air pick, and (c) excavator.
Figure 3. Time–space domain diagrams of the three typical disturbance events: (a) hammer, (b) air pick, and (c) excavator.
Figure 4. Statistical analysis of the spatial fluctuation range caused by vibration.
Figure 5. (a–c) Time-domain diagrams of a hammer, air pick, and excavator; (d–f) frequency-domain diagrams of a hammer, air pick, and excavator; (g–i) time–frequency domain diagrams of a hammer, air pick, and excavator.
Figure 6. Other samples in the time-domain dataset: (a–c) hammer; (d–f) air pick; (g–i) excavator.
Figure 7. Other samples in the frequency-domain dataset: (a–c) hammer; (d–f) air pick; (g–i) excavator.
Figure 8. (a) Basic architecture of a 2D CNN; (b) structure of the ResBlock.
Figure 9. Accuracy for different numbers of ResBlocks on Testset1.
Figure 10. Structure of the deep 1D CNN and 2D CNN.
Figure 11. Accuracy curves on the validation dataset for each epoch.
18 pages, 2244 KiB  
Article
FT-NIR Spectroscopy for the Non-Invasive Study of Binders and Multi-Layered Structures in Ancient Paintings: Artworks of the Lombard Renaissance as Case Studies
by Margherita Longoni, Beatrice Genova, Alessia Marzanni, Daniela Melfi, Carlotta Beccaria and Silvia Bruni
Sensors 2022, 22(5), 2052; https://doi.org/10.3390/s22052052 - 6 Mar 2022
Cited by 8 | Viewed by 3283
Abstract
This work deals with the identification of natural binders and the study of the complex stratigraphy in paintings using reflection FT-IR spectroscopy, a common diagnostic tool for cultural heritage materials thanks to its non-invasiveness. In particular, the potential of the near-infrared (NIR) spectral [...] Read more.
This work deals with the identification of natural binders and the study of the complex stratigraphy in paintings using reflection FT-IR spectroscopy, a common diagnostic tool for cultural heritage materials thanks to its non-invasiveness. In particular, the potential of the near-infrared (NIR) spectral region, dominated by the absorption bands due to CH, CO, OH and NH functional groups, is successfully exploited to distinguish a lipid binder from a proteinaceous one, as well as to detect the coexistence of the two media in laboratory-made model samples that simulate the complex multi-layered structure of a painting. The combination with multivariate analysis methods or with the calculation of indicative ratios between the intensity values of characteristic absorption bands is proposed to facilitate the interpretation of the spectral data. Furthermore, the greater penetration depth of NIR radiation is exploited to obtain information about the inner layers of the paintings, focusing in particular on the preparatory coatings of the supports. Finally, as proof of concept, FT-NIR analyses were also carried out on six paintings by artists working in Lombardy at the end of the 15th century, which exemplify different pictorial techniques. Full article
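The indicative band-intensity ratio mentioned above (see Figures 7 and 8 below) is simple to compute from a reflection spectrum. A minimal sketch, assuming a nearest-point lookup of the apparent absorbance at the two band positions named in the captions:

import numpy as np

def band_ratio(wavenumbers, absorbance, oil_band=4694.0, yolk_band=5160.0):
    """Ratio of the apparent absorbance at the oil band (4694 cm^-1) to that
    at the egg-yolk band (5160 cm^-1); higher values point to a lipid-rich
    binder. The nearest-point lookup is an illustrative simplification."""
    wn = np.asarray(wavenumbers, dtype=float)
    ab = np.asarray(absorbance, dtype=float)
    return ab[np.argmin(np.abs(wn - oil_band))] / ab[np.argmin(np.abs(wn - yolk_band))]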
Figure 1. Mock-up samples prepared in the laboratory: painting layers on (a) a calcite ground layer; (b) white and pink priming layers; (c) a gypsum ground layer. The pigments were lead white, azurite, and yellow ochre for (a) and (b), and lead white and hematite for (c).
Figure 2. NIR (left) and MIR (right) spectra of: (a) linseed oil; (b) model painting sample of linseed oil on tempera; (c) model painting sample of tempera grassa; (d) model painting sample of tempera; (e) egg yolk; (f) model painting sample of tempera grassa with dammar varnish; (g) dammar resin. All model samples contained lead white as a pigment. All spectra of the reference materials and model painting samples were acquired in reflection mode, with the exception of the spectrum of dammar, which was acquired in transmission mode. The dashed lines indicate the two bands at 1740 and 1650 cm⁻¹, respectively characteristic of oil and yolk in the MIR region.
Figure 3. NIR (left) and MIR (right) spectra of: (a) linseed oil; (b) white priming layer; (c) pink priming layer; (d) lead white in linseed oil on the white priming layer; (e) lead white in linseed oil on the pink priming layer. All spectra of the reference materials and model painting samples were acquired in reflection mode.
Figure 4. Score plot of the first two principal components of the NIR spectra (6000–4000 cm⁻¹) of pure binders and of model painting samples on a calcite ground layer. In the model samples, lead white (LW) was used as pigment.
Figure 5. NIR spectra of: (a) gypsum preparatory layer; (b) one layer of oil paint on gypsum; (c) two layers of oil paint on gypsum; (d) linseed oil. The bands due to gypsum are highlighted. Cross-sections of the mock-up samples corresponding respectively to (b) (bottom) and (c) (top) are also shown.
Figure 6. Score plot of the first two principal components of the NIR spectra (6000–4000 cm⁻¹) of model painting samples on a gypsum ground layer. The binders are tempera or linseed oil, used pure, in mixture, or in superimposed layers. The pigments are lead white (LW) and hematite (H).
Figure 7. Highlighting of the absorption bands at 4694 and 5160 cm⁻¹, typical of oil and egg yolk, respectively, chosen for the calculation of the intensity ratio used to distinguish the two binders and their possible mixture.
Figure 8. Histogram summarizing the values of the ratio between the absorbance values at 4694 and 5160 cm⁻¹, calculated for the reference samples and for the ancient paintings examined.
Figure 9. FT-NIR spectra from some details of the paintings (a) "Madonna and Child" by G. A. Boltraffio; (b) "Madonna and Child" by F. Galli; (c) "Madonna nursing the Child" by a Lombard school painter; (d) "St. Augustine and St. Jerome" by Bergognone; (e) "St. John the Baptist" by B. Zenale. The red dashed lines mark the weak bands associated with egg tempera. Bands due to paper in spectrum (f) are marked with "x".
16 pages, 6542 KiB  
Article
Control Method of the Dual-Winding Motor for Online High-Frequency Resistance Measurement in Fuel Cell Vehicle
by Cheng Chang, Yafu Zhou, Jing Lian and Jicai Liang
Sensors 2022, 22(5), 2051; https://doi.org/10.3390/s22052051 - 6 Mar 2022
Cited by 1 | Viewed by 2758
Abstract
The dual-winding motor drive has recently been proposed in the field of fuel cell vehicles due to its performance and high robustness. Many researchers have devoted efforts to this new topology. However, the high-frequency resistance measurement of a proton exchange [...] Read more.
The dual-winding motor drive has recently been proposed in the field of fuel cell vehicles due to its performance and high robustness. Many researchers have devoted efforts to this new topology. However, the high-frequency resistance measurement of a proton exchange membrane fuel cell based on the dual-winding motor drive architecture, which is important for water management and for optimizing fuel cell lifespan, was not addressed in earlier works. In this paper, a new control method for the dual-winding motor is proposed that introduces dc input current control to realize high-frequency resistance measurement and normal drive control simultaneously, without an extra dc-dc converter. On the basis of the revealed energy exchange principles among the electrical ports and the mechanical port of the dual-winding motor, the load ripple caused by the high-frequency current perturbation is minimized through the q-axis current distribution between the two winding sets. A decoupling control algorithm for the coupling effects within and across windings is also discussed to improve the dynamic response during high-frequency resistance measurement. Finally, simulation results verify the effectiveness and improvement of the proposed method. Fast Fourier transform results indicate that the total harmonic distortion of the dc input current was reduced from 22.53% to 4.47% of the fundamental, and the torque ripple was suppressed from about ±4.5 Nm to ±0.5 Nm at the given operating points. Full article
(This article belongs to the Section Vehicular Sensing)
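The reported THD reduction can be checked on a simulated input-current trace with a plain FFT. A minimal sketch, assuming an unwindowed FFT with exact harmonic bins (the paper's own post-processing is not specified here):

import numpy as np

def thd_percent(signal, fs, f0, n_harmonics=20):
    """Total harmonic distortion of a waveform, in percent of the
    fundamental at f0 (Hz), from a plain FFT magnitude spectrum."""
    x = np.asarray(signal, dtype=float)
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    mag = lambda f: spec[np.argmin(np.abs(freqs - f))]
    harmonics = np.sqrt(sum(mag(k * f0)**2 for k in range(2, n_harmonics + 1)))
    return 100.0 * harmonics / mag(f0)

# Example: 100 Hz fundamental with a 10% third harmonic -> THD ~ 10%.
fs = 10_000
t = np.arange(0, 1, 1 / fs)
i_dc = np.sin(2 * np.pi * 100 * t) + 0.1 * np.sin(2 * np.pi * 300 * t)
print(thd_percent(i_dc, fs, 100.0))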
Figure 1. Schematic of HFR measurement based on the dc-dc converter in the conventional drive system.
Figure 2. Schematic of the new drivetrain based on the dual-winding motor.
Figure 3. System architecture of the DWM drive system for a fuel cell vehicle.
Figure 4. Randles equivalent circuit model: (a) presented by the individual parameters of each fuel cell; (b) presented by the consolidated parameters of each fuel cell.
Figure 5. Nyquist plot of the ac impedance of the PEMFC stack at different frequencies.
Figure 6. Realization of the HFR perturbation current based on the conventional control method.
Figure 7. Realization of the HFR perturbation current based on the proposed control method.
Figure 8. q-axis equivalent circuit of the dual-winding motor.
Figure 9. Block diagram of the inner current control loop with the decoupling network.
Figure 10. Overview of the system simulation model.
Figure 11. Shaping of the PEMFC current under the conventional phase current control and the proposed input current control.
Figure 12. FFT of the PEMFC current: (a) under the conventional phase current control; (b) under the proposed input current control.
Figure 13. Suppression of the torque ripple under the HFR measurement condition.
Figure 14. Dynamic response of the current and voltage on the d/q-axis under HFR measurement and transient load change conditions in three stages: (a) reference and sampled currents on the d1-axis and q1-axis; (b) d-axis voltage produced by the PID controller and the N1 network, and their sum u_d1.
Figure 15. Dynamic response of the current and voltage on the q-axis under HFR measurement and torque ripple compensation conditions in two stages: (a) reference and sampled currents on the q1-axis; (b) q1-axis voltage produced by the PID controller.
20 pages, 14311 KiB  
Article
An Observation Scheduling Approach Based on Task Clustering for High-Altitude Airship
by Jiawei Chen, Qizhang Luo and Guohua Wu
Sensors 2022, 22(5), 2050; https://doi.org/10.3390/s22052050 - 6 Mar 2022
Cited by 1 | Viewed by 2500
Abstract
Airship-based Earth observation is of great significance in many fields such as disaster rescue and environmental monitoring. To facilitate efficient observation by high-altitude airships (HAAs), a high-quality observation scheduling approach is crucial. This paper considers the scheduling of the imaging sensor and proposes [...] Read more.
Airship-based Earth observation is of great significance in many fields such as disaster rescue and environmental monitoring. To facilitate efficient observation by high-altitude airships (HAAs), a high-quality observation scheduling approach is crucial. This paper considers the scheduling of the imaging sensor and proposes a hierarchical observation scheduling approach based on task clustering (SA-TC). The original observation scheduling problem of the HAA is decomposed into three sub-problems (i.e., task clustering, sensor scheduling, and cruise path planning), which are solved by the three stages of the proposed SA-TC, respectively. Specifically, a novel heuristic algorithm integrating an improved ant colony optimization and a backtracking strategy is proposed to address the task clustering problem. A 2-opt local search is embedded into a heuristic algorithm to solve the sensor scheduling problem, and the improved ant colony optimization is also applied to the cruise path planning problem. Finally, extensive simulation experiments are conducted to verify the superiority of the proposed approach. In addition, the performance of the three algorithms for solving the three sub-problems is further analyzed on instances of different scales. Full article
(This article belongs to the Special Issue Parallel and Distributed Computing in Wireless Sensor Networks)
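The 2-opt local search embedded in the sensor-scheduling stage is a standard tour-improvement move: remove two edges and reconnect the endpoints, keeping the change only when it shortens the path (see Figure 4 below). A minimal sketch; the index-route/distance-matrix interface is an assumption, not the authors' data structures.

import numpy as np

def two_opt(route, dist):
    """Repeatedly reverse the segment route[i..j] whenever doing so
    shortens the tour, until no improving move remains."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(route) - 2):
            for j in range(i + 1, len(route) - 1):
                a, b = route[i - 1], route[i]
                c, d = route[j], route[j + 1]
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    route[i:j + 1] = reversed(route[i:j + 1])
                    improved = True
    return route

pts = np.random.rand(10, 2)                               # random targets
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # distance matrix
tour = two_opt(list(range(10)) + [0], D)                  # closed tour from node 0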
Figure 1. An example of the HAA observation process.
Figure 2. The framework of the proposed observation scheduling approach.
Figure 3. An illustration of the task clustering process. (a) A path formed by all ground targets; (b) a path with a stagnation node; (c) a path with two stagnation nodes.
Figure 4. An illustration of the 2-opt operation. (a) A path before the 2-opt operation; (b) a path after the 2-opt operation.
Figure 5. Comparison results of TSPA and SA-TC with different numbers of targets.
Figure 6. Scheduling results of TSPA and SA-TC. (a) 100 targets; (b) 300 targets; (c) 500 targets; (d) 700 targets; (e) 900 targets; (f) 1200 targets.
Figure 7. Scheduling results of HATC with different numbers of targets. (a) 30 targets; (b) 50 targets; (c) 100 targets; (d) 200 targets; (e) 300 targets; (f) 400 targets.
Figure 8. Scheduling results of HASS with different numbers of targets. (a) 50 targets; (b) 100 targets; (c) 150 targets; (d) 200 targets; (e) 250 targets; (f) 300 targets.
Figure 9. Cruise time of the airship in six cases with different numbers of targets, obtained by HASS.
18 pages, 26542 KiB  
Article
Design and Implementation of a UAV-Based Airborne Computing Platform for Computer Vision and Machine Learning Applications
by Athanasios Douklias, Lazaros Karagiannidis, Fay Misichroni and Angelos Amditis
Sensors 2022, 22(5), 2049; https://doi.org/10.3390/s22052049 - 6 Mar 2022
Cited by 21 | Viewed by 9250
Abstract
Visual sensing of the environment is crucial for flying an unmanned aerial vehicle (UAV) and is a centerpiece of many related applications. The ability to run computer vision and machine learning algorithms onboard an unmanned aerial system (UAS) is becoming more of a [...] Read more.
Visual sensing of the environment is crucial for flying an unmanned aerial vehicle (UAV) and is a centerpiece of many related applications. The ability to run computer vision and machine learning algorithms onboard an unmanned aerial system (UAS) is becoming more of a necessity in an effort to alleviate the communication burden of high-resolution video streaming, to provide flying aids, such as obstacle avoidance and automated landing, and to create autonomous machines. Thus, there is a growing interest on the part of many researchers in developing and validating solutions that are suitable for deployment on a UAV system by following the general trend of edge processing and airborne computing, which transforms UAVs from moving sensors into intelligent nodes that are capable of local processing. In this paper, we present, in a rigorous way, the design and implementation of a 12.85 kg UAV system equipped with the necessary computational power and sensors to serve as a testbed for image processing and machine learning applications, explain the rationale behind our decisions, highlight selected implementation details, and showcase the usefulness of our system by providing an example of how a sample computer vision application can be deployed on our platform. Full article
(This article belongs to the Special Issue UAV Imaging and Sensing)
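The power-versus-weight curve of Figure 7 and the flight-time curve of Figure 8 (below) are linked by a simple energy budget. The sketch below assumes the textbook momentum-theory scaling P ∝ W^1.5 for hover power and uses made-up fit data; the paper's own Equation (2) may differ.

import numpy as np

def fit_power_model(weights_kg, powers_w):
    """Least-squares fit of P = a * W**1.5 to power-vs-weight measurements
    (the 3/2 exponent is a textbook hover-power assumption)."""
    W = np.asarray(weights_kg, dtype=float)
    P = np.asarray(powers_w, dtype=float)
    return np.sum(P * W**1.5) / np.sum(W**3)

def flight_time_min(frame_kg, batt_kg, batt_wh_per_kg, a):
    """Estimated hover endurance in minutes: battery energy / hover power."""
    power_w = a * (frame_kg + batt_kg) ** 1.5
    return 60.0 * batt_kg * batt_wh_per_kg / power_w

a = fit_power_model([10, 11, 12, 13], [1500, 1720, 1950, 2200])  # made-up data
print(flight_time_min(frame_kg=9.0, batt_kg=3.85, batt_wh_per_kg=180.0, a=a))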
Figure 1. Block diagram of the flight controller and its peripherals.
Figure 2. (a) The power supply system of the UAV up close. (b) PCB view of the ideal diode OR circuit; note the copper bars used due to the large currents involved. (c) Assembled module with cover and heat sinks.
Figure 3. Block diagram of the companion computer and its peripherals.
Figure 4. (a) The onboard processing system installed on the UAV; the two front-facing cameras can be seen on the bottom right. (b) Jetson AGX Xavier (center) with its battery (left) and power supply module (right). (c) Custom PCB adapter connecting the analog video decoder to the PCIe bus of the onboard processor.
Figure 5. (a) The interior of the camera unit: (1) thermal camera, (2) RGB camera, (3) HDMI wireless transmitter, (4) fan for heat dissipation. (b) Gimbal with the camera unit under the UAV.
Figure 6. Architecture of the software abstraction layer for communication with the flight controller.
Figure 7. (a) The UAV with a box attached underneath for the power consumption vs. weight measurement. (b) Equation (2) plotted against the power–weight measurements, which can be found in Table A1 in Appendix A.
Figure 8. Flight time of the UAV with respect to the installed battery weight.
Figure 9. (a) Front view of the UAV while flying. (b) Side view of the UAV.
Figure 10. (a) View of the university campus from the RGB camera onboard the UAV. (b) View of the same area with the thermal camera; the thermal camera's color space is black (hot).
Figure 11. (a) View from the RGB camera without any magnification; the red arrow marks the point of observation. (b) Ten-times magnification. (c) Thirty-times magnification. (d) The 13 km distance between the UAV and the point of observation plotted on a map; the UAV is located at the top right end of the red line and the port at the bottom left end.
Figure 12. Mission Planner's user interface as configured during the flight tests.
Figure 13. Block diagram of the sample tracking application.
Figure 14. (a–c) Tracked vehicle, marked with the pink bounding box, as it approaches the UAV. (d) The UAV flying with the tracked vehicle in the background.
Figure A1. The schematic of the ideal diode OR circuit.
10 pages, 2075 KiB  
Communication
Improvement of Schottky Contacts of Gallium Oxide (Ga2O3) Nanowires for UV Applications
by Badriyah Alhalaili, Ahmad Al-Duweesh, Ileana Nicoleta Popescu, Ruxandra Vidu, Luige Vladareanu and M. Saif Islam
Sensors 2022, 22(5), 2048; https://doi.org/10.3390/s22052048 - 6 Mar 2022
Cited by 7 | Viewed by 3406
Abstract
Interest in the synthesis and fabrication of gallium oxide (Ga2O3) nanostructures as wide bandgap semiconductor-based ultraviolet (UV) photodetectors has recently increased due to their importance in cases of deep-UV photodetectors operating in high power/temperature conditions. Due to their unique [...] Read more.
Interest in the synthesis and fabrication of gallium oxide (Ga2O3) nanostructures as wide bandgap semiconductor-based ultraviolet (UV) photodetectors has recently increased due to their importance in cases of deep-UV photodetectors operating in high power/temperature conditions. Due to their unique properties, i.e., a higher surface-to-volume ratio and quantum effects, these nanostructures can significantly enhance the sensitivity of detection. In this work, two Ga2O3 nanostructured films with different nanowire densities and sizes, obtained by thermal oxidation of Ga on quartz in the presence and absence of an Ag catalyst, were investigated. The electrical properties influenced by the density of the Ga2O3 nanowires (NWs) were analyzed to define the configuration for UV detection. The electrical measurements were performed on two different contact configurations, with electrodes located 1 and 3 mm apart. Factors affecting the detection performance of the Ga2O3 NW films, such as the distance between the metal contacts (1 and 3 mm), the applied voltage (5–20 V), and the transient photocurrents, are discussed in relation to the composition and nanostructure of the Ga2O3 NW films. Full article
(This article belongs to the Section Chemical Sensors)
Figure 1. Schematic illustration of the fabrication process of Ga2O3 nanowires by thermal growth at 1000 °C in the presence of a silver catalyst and liquid Ga, which was placed at the bottom of the quartz crucible.
Figure 2. Schematic illustration of the different shadow masks (a,b) used for sputtering 5 nm Cr and 50 nm gold (Au) contacts of the Au/β-Ga2O3/Au metal–semiconductor–metal (MSM) photoconductor on quartz. (a) The distance between the lines is <1 mm. (b) The distance between the circular probes is <3 mm. (c) Schematic illustration of the Au/β-Ga2O3/Au MSM photoconductor on quartz.
Figure 3. Side-view SEM images of Ga2O3 nanowires grown at 1000 °C. (a) Ag-free Ga2O3 nanowires on a quartz substrate. (b) Ga2O3 nanowires on quartz catalyzed by 5 nm Ag. Longer and denser Ga2O3 nanowires were obtained in the presence of Ag.
Figure 4. Semi-logarithmic plots of current density versus applied voltage for the Au/β-Ga2O3/Au MSM without and with the Ag catalyst, without and with UV illumination. The right column is for a probe-line spacing of <1 mm; the left column is for a spacing of <3 mm. (a,b) 5 V; (c,d) 10 V; (e,f) 20 V; (g,h) 50 V.
Figure 5. Transient response of the UV photodetector fabricated based on the Au/β-Ga2O3/Au MSM at 5 V, 10 V, and 20 V. (a) The distance between the lines is <1 mm. (b) The distance between the circular probes is <3 mm.
13 pages, 4424 KiB  
Article
Long Range Raman-Amplified Distributed Acoustic Sensor Based on Spontaneous Brillouin Scattering for Large Strain Sensing
by Shahab Bakhtiari Gorajoobi, Ali Masoudi and Gilberto Brambilla
Sensors 2022, 22(5), 2047; https://doi.org/10.3390/s22052047 - 6 Mar 2022
Cited by 2 | Viewed by 3780
Abstract
A Brillouin distributed acoustic sensor (DAS) based on optical time-domain reflectometry, exhibiting a maximum detectable strain of 8.7 mε and low signal fading, is developed. Strain waves with frequencies of up to 120 Hz are measured with an accuracy of 12 [...] Read more.
A Brillouin distributed acoustic sensor (DAS) based on optical time-domain reflectometry, exhibiting a maximum detectable strain of 8.7 mε and low signal fading, is developed. Strain waves with frequencies of up to 120 Hz are measured with an accuracy of 12 με at a sampling rate of 1.2 kHz and a spatial resolution of 4 m over a sensing range of 8.5 km. The sensing range is further extended by using a modified inline Raman amplifier configuration. Using 80 ns Raman pump pulses, the signal-to-noise ratio is improved by 3.5 dB, while the accuracy of the measurement is enhanced by a factor of 2.5, to 62 με, at the far end of a 20 km fiber. Full article
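The effect compared in Figure 1 below, in which a slow strain wave produces a far faster intensity fluctuation at the detector because the accumulated phase sweeps across many interference fringes, can be reproduced numerically. The strain-to-phase factor used here (4πn/λ, photoelastic coefficient ignored) is a generic textbook assumption, not the paper's exact Equations (3) and (4).

import numpy as np

n_eff, lam = 1.468, 1550e-9   # effective index, wavelength (m) -- assumed values
f, epsL = 10.0, 5e-5          # strain frequency (Hz) and strain-length product (m)

t = np.linspace(0.0, 0.2, 200_000)
phase = (4 * np.pi * n_eff / lam) * epsL * np.sin(2 * np.pi * f * t)
intensity = 0.5 * (1 + np.cos(phase))   # normalized interferometer output

# Zero crossings of the mean-removed intensity give its apparent frequency,
# which is far above the 10 Hz strain frequency.
x = intensity - intensity.mean()
crossings = np.sum(np.signbit(x[:-1]) != np.signbit(x[1:]))
print(f"apparent fluctuation rate ~ {crossings / (2 * t[-1]):.0f} Hz vs f = {f} Hz")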
Figure 1. A comparison of the fluctuation frequency of the strain wave on the vibrating fiber, Equation (3), and of the optical intensity seen at the detector, Equation (4), for f = 10 Hz and ε·L = 5 × 10⁻⁵ m.
Figure 2. Maximum measurable strain frequency as a function of the strain amplitude for fiber lengths of 0.1, 1, 10, and 20 km. The spatial resolution is assumed to be L = 1 m.
Figure 3. (a) Schematic of the MZI filter with a 3 × 3 coupler as the output coupler; (b) transfer function of the MZI in terms of the output intensity of the PDs for ΔL = 65 cm and a center wavelength of λ = 1550 nm, with an example of the sensor operating point.
Figure 4. Experimental setup consisting of five sections: probe pulse shaping, fiber under test, scattered Brillouin signal collection, detection/data acquisition, and Raman amplification (EDFA: erbium-doped fiber amplifier; FBG: fiber Bragg grating; MZI: Mach–Zehnder interferometer; PD: photodetector; EOM: electro-optic modulator; WDM: wavelength division multiplexer).
Figure 5. (a) Response of the B-DAS system as a function of distance and time for an input sinusoidal strain wave of f ≈ 10 Hz with a peak-to-peak displacement of 17.2 mm over 4 m of SMF; (b) 2D plot of the same response at the middle of the 4 m fiber; (c) amplitude of the sensor output and its linear fit versus the applied peak-to-peak strain at f ≈ 10 Hz.
Figure 6. Frequency spectra of the sensor excited with a strain wave of ~300 με amplitude at f = 21, 76, 103, and 120 Hz (FFT: fast Fourier transform). Signals are averaged 10 times.
Figure 7. Optical intensity of the BOTDR signal collected before the MZI (a) at the end of the 20 km fiber as a function of the Raman pump pulse width (inset: detailed view of the plot for very small pulse widths), and (b) with no Raman amplification, with amplification by an 80 ns-wide pulse, and with a CW pump.
Figure 8. Performance of the DAS as a function of distance and time when a sinusoidal strain of 1 mε amplitude at ~11 Hz is applied under three conditions: (a) no amplification, (b) 80 ns pulsed, and (c) CW Raman amplification. (d) Frequency content of the obtained signals in cases (a–c), demonstrating the noise level.
20 pages, 6719 KiB  
Article
Intelligent Diagnosis of Rolling Element Bearing Based on Refined Composite Multiscale Reverse Dispersion Entropy and Random Forest
by Aiqiang Liu, Zuye Yang, Hongkun Li, Chaoge Wang and Xuejun Liu
Sensors 2022, 22(5), 2046; https://doi.org/10.3390/s22052046 - 6 Mar 2022
Cited by 15 | Viewed by 2973
Abstract
Rolling bearings are the vital components of large electromechanical equipment, thus it is of great significance to develop intelligent fault diagnoses for them to improve equipment operation reliability. In this paper, a fault diagnosis method based on refined composite multiscale reverse dispersion entropy [...] Read more.
Rolling bearings are the vital components of large electromechanical equipment, thus it is of great significance to develop intelligent fault diagnoses for them to improve equipment operation reliability. In this paper, a fault diagnosis method based on refined composite multiscale reverse dispersion entropy (RCMRDE) and random forest is developed. Firstly, rolling bearing vibration signals are adaptively decomposed by variational mode decomposition (VMD), and the RCMRDE values at 25 scales are calculated for the original signal and each decomposed component as the initial feature set. Secondly, based on the joint mutual information maximization (JMIM) algorithm, the top 15 sensitive features are selected as a new feature set and fed into a random forest model to identify the bearing health status. Finally, to verify the effectiveness and superiority of the presented method, actual data acquisition and analysis are performed on a bearing fault diagnosis experimental platform. These results indicate that the presented method can precisely diagnose bearing fault types and damage degrees, with an average identification accuracy of 97.33%. Compared with refined composite multiscale dispersion entropy (RCMDE) and multiscale dispersion entropy (MDE), the fault diagnosis accuracy is improved by 2.67% and 8.67%, respectively. Furthermore, compared with the RCMRDE method without VMD decomposition, the fault diagnosis accuracy is improved by 3.67%. These results show that the proposed feature extraction technique effectively overcomes the deficiencies of existing entropy measures and significantly enhances fault identification. Full article
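The single-scale building block of RCMRDE, reverse dispersion entropy, measures how far the dispersion-pattern distribution is from uniform. Below is a minimal sketch with common parameter choices (embedding dimension m = 3, c = 6 classes, delay d = 1); the coarse-graining and refined-composite averaging of the full RCMRDE, as well as the VMD and JMIM stages, are omitted.

import numpy as np
from collections import Counter
from scipy.stats import norm

def reverse_dispersion_entropy(x, m=3, c=6, d=1):
    """Reverse dispersion entropy: sum_k (p_k - 1/c**m)**2 over all c**m
    dispersion patterns, i.e., the squared distance of the pattern
    distribution from the uniform one. A sketch, not the authors' code."""
    x = np.asarray(x, dtype=float)
    y = norm.cdf(x, loc=x.mean(), scale=x.std())           # map to (0, 1) via NCDF
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)   # assign c classes
    N = len(z) - (m - 1) * d
    counts = Counter(tuple(z[i:i + m * d:d]) for i in range(N))
    p = np.array(list(counts.values())) / N
    unseen = c**m - len(counts)                            # patterns with p_k = 0
    return float(np.sum((p - 1.0 / c**m)**2) + unseen * (1.0 / c**m)**2)

print(reverse_dispersion_entropy(np.random.randn(4096)))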
Figure 1. The mean curves and error values of MDE, RCMDE, and RCMRDE for white noise with different N: (a) N = 2048; (b) N = 3072; (c) N = 4096; (d) N = 5120.
Figure 2. The MDE, RCMDE, and RCMRDE of WGN under different data lengths: (a) MDE; (b) RCMDE; (c) RCMRDE.
Figure 3. MDE, RCMDE, and RCMRDE of WGN under different numbers of classes c: (a) MDE of WGN; (b) RCMDE of WGN; (c) RCMRDE of WGN.
Figure 4. The MDE, RCMDE, and RCMRDE of the synthetic signal with different SNRs.
Figure 5. Simulation signals and the corresponding entropy values: (a) four bearing outer-race fault signals with different damage degrees; (b) MDE value curves; (c) RCMDE value curves; (d) RCMRDE value curves.
Figure 6. The flowchart of the proposed method.
Figure 7. Rolling bearing fault diagnosis test bench and faulty parts: (a) test bench; (b) different types of faulty parts.
Figure 8. Bearing vibration signals under 10 different health conditions.
Figure 9. The frequency spectrum of the normal bearing signal: (a) spectrum of the original signal; (b) spectrum of each component.
Figure 10. The VMD decomposition results for each fault type signal: (a) outer race 0.2 mm crack; (b) outer race 0.4 mm crack; (c) outer race 0.6 mm crack; (d) inner race 0.2 mm wear; (e) inner race 0.4 mm wear; (f) inner race 0.6 mm wear; (g) rolling element 0.2 mm wear; (h) rolling element 0.4 mm wear; (i) rolling element 0.6 mm wear.
Figure 11. The RCMRDE values obtained over 25 scales: (a) RCMRDE values of the original signal; (b) RCMRDE values of the first IMF components.
Figure 12. Multi-class confusion matrix of the proposed method.
Figure 13. The diagnosis accuracy of the three methods.
Figure 14. The diagnosis accuracy of the JMIM, Fisher, and LS feature selection methods.
Figure 15. Comparison of the diagnostic accuracy of different methods.
15 pages, 2077 KiB  
Article
Vehicle-Assisted UAV Delivery Scheme Considering Energy Consumption for Instant Delivery
by Xudong Deng, Mingke Guan, Yunfeng Ma, Xijie Yang and Ting Xiang
Sensors 2022, 22(5), 2045; https://doi.org/10.3390/s22052045 - 5 Mar 2022
Cited by 28 | Viewed by 3572
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used in instant delivery scenarios. The combined delivery of vehicles and UAVs has many advantages over their separate use and can greatly improve delivery efficiency. Although a few studies in the literature have explored the [...] Read more.
Unmanned aerial vehicles (UAVs) are increasingly used in instant delivery scenarios. The combined delivery of vehicles and UAVs has many advantages over their separate use and can greatly improve delivery efficiency. Although a few studies in the literature have explored the issue of vehicle-assisted UAV delivery, we did not find any studies on the scenario of a UAV serving several customers. This study aims to design a new vehicle-assisted UAV delivery scheme that allows UAVs to serve multiple customers in a single take-off and takes energy consumption into account. A multi-UAV task allocation model and a vehicle path planning model were established to determine the task allocation of the UAVs and the paths of the UAVs and the vehicle, respectively. The model also considers the impact of the UAV's changing payload on energy consumption, bringing the results closer to reality. Finally, a hybrid heuristic algorithm based on an improved K-means algorithm and ant colony optimization (ACO) was proposed to solve the problem, and the effectiveness of the scheme was demonstrated on multi-scale experimental instances and in comparative experiments. Full article
(This article belongs to the Section Vehicular Sensing)
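The payload-dependent energy accounting can be sketched per flight leg: the carried payload shrinks after each drop-off, so later legs cost less energy (compare Figure 2 below). The linear power-versus-payload model and all constants in this sketch are illustrative assumptions, not the paper's energy model.

def sortie_energy_wh(legs_km, payloads_kg, base_power_w=300.0,
                     power_per_kg_w=40.0, speed_kmh=40.0):
    """Energy (Wh) of one UAV sortie serving several customers.
    legs_km[i] is the length of leg i; payloads_kg[i] is the payload
    carried on that leg."""
    energy = 0.0
    for leg, payload in zip(legs_km, payloads_kg):
        hours = leg / speed_kmh
        energy += (base_power_w + power_per_kg_w * payload) * hours
    return energy

# Three customers: payload drops from 3 kg to 0 kg over four legs
# (launch -> c1 -> c2 -> c3 -> recovery at the vehicle).
print(sortie_energy_wh([2.0, 1.5, 1.0, 2.5], [3.0, 2.0, 1.0, 0.0]))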
Figure 1. Description of vehicle-assisted UAV delivery.
Figure 2. Energy-consumption process of a UAV carrying three packages for delivery.
Figure 3. Process for solving vehicle-assisted UAV delivery problems.
Figure 4. Improved K-means process.
Figure 5. Improved ACO process.
Figure 6. Illustration of the example C1.
Figure 7. Efficiency of the algorithm.