
 
 
Sensors, Volume 21, Issue 24 (December-2 2021) – 347 articles

Cover Story: Self-healing sensors have the potential to increase the lifespan of existing sensing technologies. This paper presents the design of a self-healing sensor that can be used for continuous damage detection and localization. The soft sensor recovers full functionality almost instantaneously at room temperature, making the healing process fully autonomous. The working principle of the sensor is based on measuring the air pressure inside enclosed chambers, which keeps both the fabrication and the modelling of the sensor simple. We characterize the force-sensing abilities of the proposed sensor and perform damage detection and localization over one-dimensional and two-dimensional surfaces using multilateration techniques. The proposed solution is highly scalable, easy to build, inexpensive, and even applicable to multi-damage detection. View this paper.
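The cover story mentions localizing damage over a surface with multilateration. As a hedged illustration only (the anchor positions, ranges, and least-squares formulation below are our own assumptions, not taken from the paper), a 2-D location can be recovered from distance estimates to known reference points:

```python
import numpy as np

def multilaterate(anchors, ranges):
    """Least-squares position estimate from >= 3 anchors and ranges.

    Linearizes the range equations by subtracting the first one:
    ||x - a_i||^2 - ||x - a_0||^2 = r_i^2 - r_0^2, which is linear in x.
    """
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)                       # 2 (a_i - a_0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Hypothetical sensing chambers at the corners of a unit square
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_pos = np.array([0.3, 0.6])                        # simulated damage site
ranges = np.linalg.norm(anchors - true_pos, axis=1)    # noiseless ranges
print(multilaterate(anchors, ranges))                  # ≈ [0.3, 0.6]
```

With noisy range estimates the same least-squares solve simply returns the best linear fit instead of the exact point.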
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, and PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
4 pages, 195 KiB  
Editorial
Special Issue “Wearable and BAN Sensors for Physical Rehabilitation and eHealth Architectures”
by Maria de Fátima Domingues, Andrea Sciarrone and Ayman Radwan
Sensors 2021, 21(24), 8509; https://doi.org/10.3390/s21248509 - 20 Dec 2021
Viewed by 2730
Abstract
The demographic shift toward an older population, together with the sedentary lifestyle we are adopting, is reflected in the increasingly debilitated physical health of the population [...] Full article
27 pages, 8458 KiB  
Article
Research Active Posterior Rhinomanometry Tomography Method for Nasal Breathing Determining Violations
by Oleg G. Avrunin, Yana V. Nosova, Ibrahim Younouss Abdelhamid, Sergii V. Pavlov, Natalia O. Shushliapina, Natalia A. Bouhlal, Ainur Ormanbekova, Aigul Iskakova and Damian Harasim
Sensors 2021, 21(24), 8508; https://doi.org/10.3390/s21248508 - 20 Dec 2021
Cited by 24 | Viewed by 4662
Abstract
This study analyzes the existing methods for studying nasal breathing. The aspects of verifying the results of rhinomanometric diagnostics against spiral computed tomography data are considered, and the methodological features of dynamic posterior active rhinomanometry and the main indicators of respiration are also analyzed. The possibilities of testing respiratory olfactory disorders are considered, and an analysis of errors in rhinomanometric measurements is carried out. The conclusions give practical recommendations developed for the design and operation of tools for the functional diagnostics of nasal breathing disorders. According to dynamic rhinomanometry data, it is advisable to assess the functioning of the nasal valve from the shape of the air flow rate signals during forced breathing, and the structures of the soft palate from the residual nasopharyngeal pressure drop. It is imperative to take into account not only the maximum coefficient of aerodynamic nasal drag, but also the values of the pressure drop and air flow rate in the region of transition to the turbulent quadratic flow regime. From the standpoint of nasal breathing physiology, the dynamic change of the flow regime during forced breathing must also be taken into account, as it determines the maximum achievable flow velocity. When planning functional rhinosurgical operations, a calculation method based on computed tomography should be applied, which makes it possible to predict the functional result of surgery. Full article
(This article belongs to the Special Issue Application and Technology Trends in Optoelectronic Sensors)
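The abstract above relates the pressure drop Δp, the air flow rate Q, and the coefficient of aerodynamic nasal resistance, including the transition to a turbulent quadratic flow regime. As an illustrative sketch only (the model form, units, and all coefficients below are assumptions, not values from the paper), these quantities can be related as follows:

```python
import numpy as np

def resistance(dp, q):
    """Aerodynamic resistance Δp/Q at a given operating point."""
    return dp / q

def fit_quadratic_flow(q, dp):
    """Fit Δp = a·Q + b·Q² by least squares.

    The linear term dominates in laminar flow; a growing quadratic
    coefficient b reflects the transition to the turbulent regime.
    """
    A = np.column_stack([q, q**2])
    (a, b), *_ = np.linalg.lstsq(A, dp, rcond=None)
    return a, b

q = np.linspace(50.0, 500.0, 10)      # synthetic air flow, cm³/s
dp = 0.2 * q + 0.001 * q**2           # synthetic pressure drop, Pa
a, b = fit_quadratic_flow(q, dp)
print(round(a, 3), round(b, 5))       # ≈ 0.2 0.001
```

The fitted coefficients then give the resistance at any flow rate, e.g. `resistance(dp[-1], q[-1])` for the highest tested flow.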
Figures:
Figure 1. Illustration of measurements of pressure drop Δp and air flow applying methods of active rhinomanometry: (a) front, (b) back (the dotted line shows the limit of measurement of pressure drop at front active rhinomanometry at the level of the choanas).
Figure 2. Computer rhinomanometer TNDA.
Figure 3. Combined scheme of a computer rhinomanometer TNDA (PSP—pressure control point, BFV—check valve adapter, Th1 and Th2—parallel adjustable throttles (resistances), SP1–SP4—pressure transducers, PS1–PS4—flexible hoses).
Figure 4. Respiratory cycle diagram according to dynamic PARM data.
Figure 5. Typical rhinomanometric dependences of pressure difference on air flow in the norm (1) and with disturbed nasal breathing (2).
Figure 6. Graphs of the air flow signal in two respiratory cycles when inhaling according to forced dynamic PARM (a) and the derivative of the air flow over time (b).
Figure 7. Respiratory cycle diagram showing the time shift Δt between the amplitudes of the pressure transducer signals p1 and p2.
Figure 8. Respiratory cycle diagram according to dynamic RAP data when the oral cavity communicates with the nasopharynx in the breath-holding phase.
Figure 9. Respiratory cycle diagrams according to dynamic AAPM data with hermetic separation of the oral cavity from the nasopharynx by the structures of the soft palate in the breath-holding phase.
Figure 10. Static image of a frame of high-speed dynamic radiography of the nasopharynx in sagittal projection (1—airway of the nasopharynx, 2—posterior pharyngeal wall, 3—soft palate).
Figure 11. Illustration of airway segmentation by a three-dimensional voxel model of the nasal cavity.
Figure 12. Computed tomography of a patient at the conditional norm: (a) axial tomographic section; (b) multiplanar reconstruction in frontal projection.
Figure 13. Computed tomography of a patient with curvature of the nasal septum to the left in the middle section: (a) axial tomographic section; (b) multiplanar reconstruction in frontal projection.
Figure 14. Computed tomography of a patient with curvature of the nasal septum to the left in the posterior part: (a) axial tomographic section; (b) multiplanar reconstruction in frontal projection.
Figure 15. Computed tomography of a patient with the consequences of chronic rhinosinusitis: (a) axial tomographic section; (b) multiplanar reconstruction in frontal projection.
Figure 16. Computed tomography of a patient with adenoid vegetation: (a) axial tomographic section; (b) multiplanar reconstruction in sagittal projection.
Figure 17. Computed tomography of a patient with empty nose syndrome after conchotomy (bilateral removal of the inferior turbinates): (a) axial tomographic section; (b) multiplanar reconstruction in frontal projection.
Figure 18. Graphs of the growth of the coefficient of aerodynamic nasal resistance over tomographic sections of the nasal cavity at different states of the nasal cavity: 1—conditional norm; 2—curvature of the nasal septum in the middle sections; 3—curvature of the nasal septum in the posterior sections; 4—chronic rhinosinusitis; 5—adenoid vegetation; 6—empty nose syndrome after conchotomy.
Figure 19. Graphs of the dependence of the pressure drop on the air flow according to forced posterior active rhinomanometry at different states of the nasal cavity: 1—conditional norm; 2—curvature of the nasal septum in the middle sections; 3—curvature of the nasal septum in the posterior sections; 4—chronic rhinosinusitis; 5—adenoid vegetation; 6—empty nose syndrome after conchotomy.
Figure 20. Graphs of the dependence of the pressure drop on the air flow according to forced posterior active rhinomanometry at different states of the nasal cavity: same enumeration of states as in Figure 19.
Figure 21. Cyclograms of pneumatic power during nasal breathing: (a) at the conditional norm; (b) with disturbed olfactory sensitivity owing to rhinosinusitis.
Figure 22. Options for respiratory cycles: 1, 2—calm breathing (normal); 3—forced breathing (rigidity of the nasal valve); 4, 5—forced, stepped breathing (normally functioning mobility of the nasal valve); 6—stepped breath ("sniffing").
Figure 23. Cyclogram of air flow during nasal breathing (T—threshold of sensation).
Figure 24. Illustration of registration of incorrect breathing maneuvers in the presence of a large amount of secretion in the nasal cavity.
Figure 25. Illustration of registration of incorrect breathing maneuvers when the mask does not fit tightly to the patient's face.
Figure 26. Illustration of registration of incorrect breathing maneuvers when the mouthpiece of the apparatus is insufficiently gripped.
Figure 27. Illustration of registration of incorrect breathing maneuvers in case of perspiration in the oropharynx.
Figure 28. Graph of the decrease in the error in making a diagnostic decision when comparing a violation of nasal breathing with the conditional norm while adding data from various diagnostic methods: optical endoscopy, computed tomography, rhinomanometry (j = 3 is the dimension of the space of informative parameters).
Full article
35 pages, 3069 KiB  
Review
A Review on Computer Aided Diagnosis of Acute Brain Stroke
by Mahesh Anil Inamdar, Udupi Raghavendra, Anjan Gudigar, Yashas Chakole, Ajay Hegde, Girish R. Menon, Prabal Barua, Elizabeth Emma Palmer, Kang Hao Cheong, Wai Yee Chan, Edward J. Ciaccio and U. Rajendra Acharya
Sensors 2021, 21(24), 8507; https://doi.org/10.3390/s21248507 - 20 Dec 2021
Cited by 34 | Viewed by 9544
Abstract
Amongst the most common causes of death globally, stroke is one of the top three, affecting over 100 million people worldwide annually. There are two classes of stroke, namely ischemic stroke (due to impairment of blood supply, accounting for ~70% of all strokes) and hemorrhagic stroke (due to bleeding), both of which can result, if untreated, in permanently damaged brain tissue. The discovery that the affected brain tissue (i.e., the 'ischemic penumbra') can be salvaged from permanent damage, together with the burgeoning growth of computer aided diagnosis, has led to major advances in stroke management. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we surveyed a total of 177 research papers published between 2010 and 2021 to highlight the current status of, and challenges faced by, computer aided diagnosis (CAD), machine learning (ML) and deep learning (DL) based techniques for CT and MRI as the prime modalities for stroke detection and lesion region segmentation. This work concludes by showcasing the current requirements of this domain, the preferred modality, and prospective research areas. Full article
(This article belongs to the Special Issue Innovations in Biomedical Imaging)
Figures:
Figure 1. Ischemic and hemorrhagic brain stroke.
Figure 2. Age-specific incidence rate of strokes by gender in India, 2019.
Figure 3. Schematic to showcase applications of AI in stroke management.
Figure 4. Article selection process based on the PRISMA guidelines.
Figure 5. Structure of the review process.
Figure 6. Various neuroimaging modalities. (a) CT Angiography, (b) CT Perfusion, (c) T1-weighted imaging, (d) T2-weighted imaging, (e) FLAIR (fluid attenuated inversion recovery), (f) DWI (diffusion weighted imaging).
Figure 7. General block diagram of a typical ML-based CAD system.
Figure 8. Generalized stages for lesion segmentation, identification, and classification of stroke regions.
Figure 9. Generalized stages for lesion segmentation, identification, and classification of stroke regions.
Figure 10. Prototype model for remote patient monitoring with a cloud-based AI model.
Full article
34 pages, 8448 KiB  
Article
Development of a Microwave Sensor for Solid and Liquid Substances Based on Closed Loop Resonator
by Aiswarya S, Sreedevi K. Menon, Massimo Donelli and Meenu L
Sensors 2021, 21(24), 8506; https://doi.org/10.3390/s21248506 - 20 Dec 2021
Cited by 13 | Viewed by 4630
Abstract
In this work, a compact dielectric sensor for the detection of adulteration in solid and liquid samples using planar resonators is presented. Six filter prototypes operating at 2.4 GHz are presented, optimized, numerically assessed, fabricated and experimentally validated. The experimental results show an error of less than 6% with respect to the simulated results. Moreover, compared to standard sensors realized using open/short-circuited stub microstrip lines, a size reduction of about 69% was achieved for the band stop filter and of 75% for the band pass filter. From the designed filters, the miniaturised filter with a Q of 95 at 2.4 GHz and a size of 35 mm × 35 mm is formulated as a sensor and validated theoretically and experimentally. The designed sensor shows good sensitivity, which depends upon the dielectric properties of the sample under test. Simulation and experimental validation of the designed sensor are carried out by loading different samples onto the sensor. Adulteration detection in various food samples using the designed sensor is experimentally validated and shows excellent sensitivity to the addition of adulterants to the original sample. The sensitivity of the sensor is analyzed by studying the variations in resonant frequency, scattering parameters, phase and Q factor with variation in the dielectric properties of the sample loaded onto the sensor. Full article
(This article belongs to the Special Issue Antenna and Microwave Sensors)
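The abstract reports a miniaturised resonator with a Q of 95 at 2.4 GHz. As a hedged sketch (the Lorentzian response and all numbers below are synthetic assumptions, not the paper's measurements), the loaded Q of such a resonance is commonly estimated from the −3 dB bandwidth of the transmission peak |S21|:

```python
import numpy as np

def q_factor(freq, s21_db):
    """Estimate Q = f0 / Δf from the −3 dB bandwidth of |S21| (dB)."""
    peak = np.argmax(s21_db)
    f0 = freq[peak]
    above = s21_db >= s21_db[peak] - 3.0     # points within −3 dB of the peak
    bw = freq[above][-1] - freq[above][0]    # −3 dB bandwidth
    return f0 / bw

# Synthetic band-pass resonance: f0 = 2.4 GHz, Q = 95 (assumed values)
f0, Q = 2.4e9, 95.0
freq = np.linspace(2.3e9, 2.5e9, 20001)
s21 = 1.0 / np.sqrt(1.0 + (2.0 * Q * (freq - f0) / f0) ** 2)  # Lorentzian
s21_db = 20.0 * np.log10(s21)
print(round(q_factor(freq, s21_db)))         # ≈ 95
```

The same routine applied to a sample-loaded sweep would show the frequency shift and Q degradation the abstract uses as sensing quantities.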
Figures:
Figure 1. Second order standard stop band filter. Microstrip implementation and simulated scattering parameters S11 and S21.
Figure 2. Second order standard band pass filter. Microstrip implementation and simulated scattering parameters S11 and S21.
Figure 3. Geometry of the first two sensors realized with closed loop resonators and stub loading: (a) perpendicular; (b) parallel to the microstrip feeding line.
Figure 4. Dependence of the main frequency peak on variations of the geometrical parameters: (a) length of the main square loop side L, (b) thickness of the square loop microstrip W, (c) gap between the two stubs g, (d) thickness of the stub microstrip Ws.
Figure 5. Geometry of the other two sensors realized by introducing a T-shape on the stub heads: (a) perpendicular, (b) parallel to the microstrip feeding line.
Figure 6. Dependence of the main frequency peak on variations of the geometrical parameters, with reference to Figure 5: (a) gap between the stubs g, (b) length of the stub tips d and (c) width of the stub tips Wt.
Figure 7. Geometry of the last two sensors realized with closed loop resonators and stub tips of the same square resonator side length: (a) perpendicular, (b) parallel to the microstrip feeding line.
Figure 8. Dependence of the main frequency peak on variations in the stub gap g, with reference to Figure 7.
Figure 9. Validation of the empirical relations (2)–(4). Main resonance peak vs. substrate dielectric permittivity εr: comparisons between semi-analytical formulas and numerical simulations, (a) configuration in Figure 3, (b) configuration in Figure 5 and (c) configuration in Figure 7.
Figure 10. Photo of the considered experimental setup.
Figure 11. Photo of the first two prototypes reported in Figure 3, equipped with two SMA coaxial connectors: (a) stubs perpendicular and (b) parallel to the microstrip feeding line.
Figure 12. Experimental assessment, prototype of Figure 11a. Comparisons between numerical and measured scattering parameters.
Figure 13. Experimental assessment, prototype of Figure 11b. Comparisons between numerical and measured scattering parameters.
Figure 14. Photos of the prototypes with T-shaped stubs (a) parallel and (b) perpendicular to the microstrip feeding line.
Figure 15. Experimental assessment, T-shaped stubs, prototype of Figure 5a. Comparisons between numerical and measured scattering parameters.
Figure 16. Experimental assessment, T-shaped stubs, prototype of Figure 5b. Comparisons between numerical and measured scattering parameters.
Figure 17. Photos of the prototypes with T-shaped stubs: (a) T-stubs of length equal to L perpendicular and (b) parallel to the microstrip feeding line.
Figure 18. Experimental assessment, T-shaped stubs of length equal to L perpendicular to the feeding line, prototype of Figure 7a. Comparisons between numerical and measured scattering parameters.
Figure 19. Experimental assessment, T-shaped stubs of length equal to L perpendicular to the feeding line, prototype of Figure 7b. Comparisons between numerical and measured scattering parameters.
Figure 20. Schema of the two considered sensors with T-shaped stubs: (a) parallel stub (PS) and (b) parallel stub loop (PSL).
Figure 21. Sensing applications, T-shaped stubs and T-shaped stub of length equal to L parallel to the feeding line. Frequency shift versus dielectric permittivity of liquid substances placed on the resonator top side.
Figure 22. Sensor testing on solid samples.
Figure 23. Sensor testing on solid samples: (a) variation in scattering parameters with frequency; (b) variation in transmission coefficient parameters with frequency.
Figure 24. Sensor testing on liquid samples.
Figure 25. Sensor testing on liquid samples: (a) variation in scattering parameters with frequency; (b) variation in transmission coefficient parameters with frequency.
Figure 26. Experimental assessment, measurement setup for adulteration testing.
Figure 27. Adulteration sensing in turmeric powder: (a) experimental setup, (b) variation in scattering parameters with frequency and (c) variation in transmission coefficient parameters with frequency.
Figure 28. Adulteration testing in ghee: (a) experimental setup, (b) variation of scattering parameters with frequency and (c) variation of transmission coefficient parameters with frequency.
Figure 29. Adulteration testing in honey: (a) experimental setup, (b) variation in the scattering parameters with frequency and (c) variation in transmission coefficient parameters with frequency.
Figure 30. Adulteration testing in milk: (a) experimental setup, (b) variation in scattering parameters with frequency and (c) variation in transmission coefficient parameters with frequency.
Figure 31. Adulteration testing in sesame oil: (a) experimental setup, (b) variation of scattering parameters with frequency and (c) variation of transmission coefficient parameters with frequency.
Figure 32. Adulteration testing in sesame oil: (a) variation in scattering parameters with frequency and (b) variation in the transmission coefficient parameters with frequency.
Figure 33. Variation in resonant frequency and transmission coefficient on adding adulterant to (a) turmeric powder, (b) ghee, (c) honey, (d) milk and (e) sesame oil.
Full article
19 pages, 20377 KiB  
Article
Autonomous System for Lake Ice Monitoring
by Ilya Aslamov, Georgiy Kirillin, Mikhail Makarov, Konstantin Kucher, Ruslan Gnatovsky and Nikolay Granin
Sensors 2021, 21(24), 8505; https://doi.org/10.3390/s21248505 - 20 Dec 2021
Cited by 6 | Viewed by 4106
Abstract
Continuous monitoring of ice cover belongs to the key tasks of modern climate research, providing up-to-date information on climate change in cold regions. While the recent development of remote sensing methods has strongly advanced ice monitoring worldwide, quantification of the seasonal ice cover is impossible without on-site autonomous measurements of the mass and heat budget. In the present study, we propose an autonomous monitoring system for continuous in situ measurement of the vertical temperature distribution in the near-ice air, the ice strata and the under-ice water layer over several months, with simultaneous records of the solar radiation incoming at the lake surface and passing through the snow and ice covers, as well as of the snow and ice thicknesses. The use of modern miniature analog and digital sensors made it possible to build a compact, energy-efficient measurement system with high precision and spatial resolution that is easy to deploy and transport. In particular, the 0.05 mm resolution of the ice thickness probe allows one to resolve the fine-scale processes occurring in low-flow environments, such as freshwater lakes. Several systems were tested in numerous studies on Lake Baikal and demonstrated high reliability in deriving the ice heat balance components during ice-covered periods. Full article
(This article belongs to the Special Issue Marine Sensors: Recent Advances and Challenges)
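The abstract describes a hydroacoustic ice-thickness probe, and the paper's Figure 5 shows the cross-correlation of the reflected signal with a reference. The sketch below is a minimal, hedged version of that delay-estimation step: the signals are entirely synthetic and the sound speed is an assumed round number, not a value from the paper.

```python
import numpy as np

def echo_delay(received, reference, fs):
    """Delay (s) that maximizes cross-correlation of an echo with a reference."""
    corr = np.correlate(received, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)   # convert index to lag
    return lag / fs

fs = 1.0e6                                   # sample rate, Hz (assumed)
c = 1410.0                                   # sound speed in cold fresh water, m/s (assumed)
t = np.arange(200) / fs
ref = np.sin(2 * np.pi * 100e3 * t) * np.hanning(200)   # windowed reference ping
delay_samples = 697                          # simulated round-trip delay
rx = np.zeros(2000)
rx[delay_samples:delay_samples + 200] = 0.3 * ref        # attenuated echo
d = echo_delay(rx, ref, fs)
print(c * d / 2)                             # one-way range to the reflector, m
```

The actual instrument tracks changes of this range at sub-millimetre resolution; with noise added, the correlation peak (rather than a simple threshold) keeps the estimate robust.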
Figures:
Figure 1. An example of a webpage of the server with a preview of the dataset from one of the experiments.
Figure 2. (a) Measuring system of the ASLIM: 1—light sensors, 2—temperature sensors, 3—snow thickness sensor; (b,c) photos of the above-water and underwater parts of the measuring system, respectively.
Figure 3. Temperature and light sensors.
Figure 4. (a) Operating principle of the ice thickness meter: 1—suspension system, 2—ice, 3—emitted signal, 4—reflected signal, 5—hydroacoustic transducers, 6—device body; (b) photo of the device body.
Figure 5. (a) Reflected signal waveform; (b) cross-correlation function of the reflected signal with the reference signal; (c) zoomed-in fragment of the reflected signal (blue line) with the superimposed reference signal (red line).
Figure 6. Snow thickness meter with integrated temperature and humidity sensor.
Figure 7. (a) Geographical location of the study site; (b) ice conditions at the end of the experiment (on 31 March 2014) with the pattern of mean currents according to [34] in the southern part of Lake Baikal and the locations of the autonomous measurement stations. The satellite image (Terra MODIS true color band composition) was obtained from the Irkutsk Center of Remote Sensing [35]. Note the stronger ice melt in the area of the jet current around Station 1, visible as a dark area in panel (b).
Figure 8. Temperatures in the ice, water and air at Station 1 (a) and Station 2 (b). The legends show the distance from the sensors to the ice surface, with the positive direction downward. Intersections of the temperature curves with the abscissa (0 °C) correspond to the times of sensor freezing into the ice. Note the different scales for positive and negative temperatures.
Figure 9. Daily averaged data on the Lake Baikal ice regime during the observations for Station 1 (blue lines) and Station 2 (red lines): (a) air temperatures at 1.5 m above the ice, (b) temperatures of the ice surface, (c) ice thickness, (d) water temperatures at 5 m depth and calculated linear trends, (e) incident solar radiation and radiation penetrating the water (at 1.5 m depth) and (f) current velocities and directions at 1.5 m depth.
Figure 10. Daily averaged heat balance at (a) Station 1 and (b) Station 2.
Figure 11. Fine-scale temperature profiles at Station 1 (a) and Station 2 (b) obtained during the freezing of miniature analog sensors into the ice, with one-hour averaging. H—distance from the ice-water interface.
Figure 12. Daily averaged heat fluxes at the ice-water interface (solid lines) calculated from heat balance Equations (2), (4) and (5), and current velocities (dashed lines) at Station 1 (blue lines) and Station 2 (red lines); the circles are instantaneous values of heat fluxes determined from the temperature gradient in the thin under-ice water layer when the thermal sensors froze into the ice (Equation (3)).
Full article
13 pages, 4091 KiB  
Perspective
Combining Action Observation Treatment with a Brain–Computer Interface System: Perspectives on Neurorehabilitation
by Fabio Rossi, Federica Savi, Andrea Prestia, Andrea Mongardi, Danilo Demarchi and Giovanni Buccino
Sensors 2021, 21(24), 8504; https://doi.org/10.3390/s21248504 - 20 Dec 2021
Cited by 8 | Viewed by 4527
Abstract
Action observation treatment (AOT) exploits a neurophysiological mechanism that matches an observed action onto the neural substrates where that action is motorically represented. This mechanism is also known as the mirror mechanism. A typical AOT session comprises an observation phase and an execution phase. During the observation phase, the patient observes a daily action and soon after, during the execution phase, he/she is asked to perform the observed action to the best of his/her ability. Indeed, the execution phase may sometimes be difficult for patients whose motor impairment is severe. Although in current practice the physiotherapist does not intervene in the quality of the execution phase, here we propose a stimulation system based on neurophysiological parameters. This perspective article focuses on the possibility of combining AOT with a brain–computer interface (BCI) system that stimulates upper limb muscles, thus facilitating the execution of actions during a rehabilitation session. Combining a rehabilitation tool that is well-grounded in neurophysiology with a stimulation system such as the one proposed may improve the efficacy of AOT in the treatment of severely impaired neurological patients, including stroke patients, Parkinson's disease patients, and children with cerebral palsy. Full article
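The article's Figure 1 describes a BCI acquisition chain: band-limit the EEG, estimate signal power, extract features, and classify whether FES should be applied. The sketch below is a minimal illustration under strong simplifying assumptions (a fixed threshold stands in for the trained classifier, and the mu-band desynchronization criterion, sampling rate, and all numbers are invented, not taken from the paper):

```python
import numpy as np

def band_power(signal, fs, band):
    """Power of `signal` within `band` (Hz) via an FFT periodogram."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[mask].sum()

def trigger_fes(eeg, fs, band=(8.0, 12.0), threshold=10.0):
    """True if mu-band (8-12 Hz) power drops below the threshold,
    treating event-related desynchronization as a movement-intent proxy."""
    return band_power(eeg, fs, band) < threshold

fs = 250.0                                     # EEG sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
rest = np.sin(2 * np.pi * 10.0 * t)            # strong 10 Hz mu rhythm at rest
intent = 0.1 * np.sin(2 * np.pi * 10.0 * t)    # desynchronized mu rhythm
print(trigger_fes(rest, fs), trigger_fes(intent, fs))  # False True
```

A real system would fuse this EEG feature with the sEMG and IMU signals described in Figure 3 before deciding to stimulate.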
Figure 1. Typical BCI acquisition chain. After acquiring the EEG signal, the frequency components outside the band of interest are filtered out. Then, the power of the signal is estimated, followed by the extraction of its features, which are used to classify whether FES needs to be applied.
Figure 2. Overview of the system. The patient is typically instructed to reach for an object. EEG electrodes are placed on the motor areas and stabilized by a comfortable helmet. sEMG and inertial sensors are placed on the limb of interest, next to the stimulation electrodes. A central unit processes the acquired data to activate the FES when the subject needs help to reach the target. A monitor provides feedback encouraging the user.
Figure 3. Activity flow of an execution session. After the indication of the task to perform, the subject is required to execute the action. The four sensors are continuously monitored, and for each of them a proper feature is extracted and recorded (e.g., signal power from the EEG, limb trajectory from the IMUs). The information obtained from the EEG, EMG, and IMUs is combined and processed to evaluate how the subject's body is reacting and to decide whether FES must be applied to assist the execution. FES parameters are then tuned according to the decision of the processing stage, stabilizing the movement by stimulating one or more muscles if necessary. Lastly, if the task consists of reaching for an object, the subject is encouraged until the BCC sensors detect the touch of the target.
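The acquisition chain this perspective describes (band filtering, power estimation, then a decision on whether FES should fire) can be illustrated with a minimal sketch. This is not the authors' implementation: the sampling rate, mu band, fixed threshold, and the simple FFT band-power rule are all illustrative assumptions.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power of `signal` within [f_lo, f_hi] Hz via the FFT periodogram."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[mask].sum()

def should_stimulate(eeg, fs=250.0, mu_band=(8.0, 12.0), threshold=0.5):
    """Toy decision rule: trigger FES when mu-band power falls below
    `threshold` (event-related desynchronization hinting at motor intent).
    All constants here are illustrative assumptions, not the paper's."""
    return bool(band_power(eeg, fs, *mu_band) < threshold)

# Synthetic check: a strong 10 Hz oscillation (rest) vs. low-power noise.
fs = 250.0
t = np.arange(0, 2, 1 / fs)
rest = np.sin(2 * np.pi * 10 * t)                         # high mu power
intent = 0.05 * np.random.default_rng(0).standard_normal(t.size)
```

A real BCI would replace the fixed threshold with calibrated, per-subject features and a trained classifier.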
22 pages, 1285 KiB  
Article
Predictive Machine Learning Models and Survival Analysis for COVID-19 Prognosis Based on Hematochemical Parameters
by Nicola Altini, Antonio Brunetti, Stefano Mazzoleni, Fabrizio Moncelli, Ilenia Zagaria, Berardino Prencipe, Erika Lorusso, Enrico Buonamico, Giovanna Elisiana Carpagnano, Davide Fiore Bavaro, Mariacristina Poliseno, Annalisa Saracino, Annalisa Schirinzi, Riccardo Laterza, Francesca Di Serio, Alessia D’Introno, Francesco Pesce and Vitoantonio Bevilacqua
Sensors 2021, 21(24), 8503; https://doi.org/10.3390/s21248503 - 20 Dec 2021
Cited by 11 | Viewed by 4488
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has affected hundreds of millions of individuals and caused millions of deaths worldwide. Predicting the clinical course of the disease is of pivotal importance for managing patients. Several studies have found hematochemical alterations in COVID-19 patients, such as elevated inflammatory markers. We retrospectively analyzed the anamnestic data and laboratory parameters of 303 patients diagnosed with COVID-19 who were admitted to the Polyclinic Hospital of Bari during the first phase of the COVID-19 global pandemic. After the pre-processing phase, we performed a survival analysis with Kaplan–Meier curves and Cox regression, with the aim of discovering the most unfavorable predictors. The target outcomes were mortality and admission to the intensive care unit (ICU). Different machine learning models were also compared to build a robust classifier relying on a small number of strongly significant factors to estimate the risk of death or admission to the ICU. The survival analysis showed that the most significant laboratory parameter for both outcomes was C-reactive protein min: HR = 17.963 (95% CI 6.548–49.277, p < 0.001) for death, and HR = 1.789 (95% CI 1.000–3.200, p = 0.050) for admission to the ICU. The second most important parameter was Erythrocytes max: HR = 1.765 (95% CI 1.141–2.729, p < 0.05) for death, and HR = 1.481 (95% CI 0.895–2.452, p = 0.127) for admission to the ICU. The best model for predicting the risk of death was the decision tree, with a ROC-AUC of 89.66%, whereas the best model for predicting admission to the ICU was the support vector machine, with a ROC-AUC of 95.07%. The hematochemical predictors identified in this study can serve as a strong prognostic signature to characterize the severity of the disease in COVID-19 patients. Full article
Figure 1. Data processing workflow. The figure shows the study workflow, from the data collection step to the development and assessment of the different predictive models. ML stands for machine learning. The ML classifiers considered include decision trees, random forests, support vector machines, Gaussian naive Bayes, AdaBoost, and K-nearest neighbors.
Figure 2. Kaplan–Meier survival curves. (A) Kaplan–Meier curves for death as a function of hospitalization days, stratified by sex. (B) Kaplan–Meier curves for admission to the ICU as a function of hospitalization days before admission, stratified by sex. (C) Kaplan–Meier curves for death as a function of hospitalization days, stratified by age. (D) Kaplan–Meier curves for admission to the ICU as a function of hospitalization days before admission, stratified by age.
Figure 3. Scatter plot of low-dimensionality feature embeddings (death outcome). A 2D visualization of hematochemical parameters with PCA and t-SNE; colors distinguish survived and deceased patients. (Top left) PCA on the selected features; (top right) t-SNE on the selected features; (bottom left) PCA on all features; (bottom right) t-SNE on all features.
Figure 4. Scatter plot of low-dimensionality feature embeddings (admission-to-ICU outcome). A 2D visualization of hematochemical parameters with PCA and t-SNE; colors distinguish patients who were (or were not) transferred to the ICU. (Top left) PCA on the selected features; (top right) t-SNE on the selected features; (bottom left) PCA on all features; (bottom right) t-SNE on all features.
Figure 5. Violin plots of the distribution of the selected laboratory features considering mortality as the outcome. C-reactive protein (CRP) mean, CRP min, Total bilirubin min, Erythrocyte max, and AST min proved statistically significant according to the Mann–Whitney U test.
Figure 6. Violin plots of the distribution of the selected laboratory features considering admission to the ICU as the outcome. Ionized calcium max, CRP mean, CRP min, and Total bilirubin min proved statistically significant according to the Mann–Whitney U test.
Figure 7. Cox regression coefficients for mortality risk (top) and risk of admission to the ICU (bottom). The hazard ratio (HR) is plotted with its 95% confidence interval (CI).
Figure 8. Predictive model performances for mortality prediction, displayed as bar plots for accuracy, precision, recall, and ROC-AUC.
Figure 9. Predictive model performances for ICU admission prediction, displayed as bar plots for accuracy, precision, recall, and ROC-AUC.
Figure 10. ROC curve of the decision tree for mortality prediction.
Figure 11. ROC curve of the support vector machine for ICU admission prediction.
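The survival analysis in this abstract rests on the Kaplan–Meier product-limit estimator, which is small enough to compute from scratch. The sketch below is the generic estimator, not the study's code; the toy durations and event flags are invented for illustration.

```python
def kaplan_meier(durations, events):
    """Kaplan-Meier survival estimate.
    durations: follow-up time per patient; events: 1 = event observed
    (e.g. death), 0 = censored. Returns (times, survival) evaluated at
    each distinct time where at least one event occurred."""
    order = sorted(range(len(durations)), key=lambda i: durations[i])
    at_risk = len(durations)
    surv, times, probs = 1.0, [], []
    i = 0
    while i < len(order):
        t = durations[order[i]]
        deaths = n_at_t = 0
        while i < len(order) and durations[order[i]] == t:
            deaths += events[order[i]]      # events at this time
            n_at_t += 1                     # leave risk set afterwards
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk  # product-limit step
            times.append(t)
            probs.append(surv)
        at_risk -= n_at_t
    return times, probs

# Toy cohort: events at t=1, 2, 4; censoring at t=3 and t=5.
times, probs = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
```

Note how the censored patient at t=3 still shrinks the risk set for the t=4 step, which is exactly what distinguishes Kaplan–Meier from a naive survival fraction.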
18 pages, 994 KiB  
Article
Subjective and Objective User Behavior Disparity: Towards Balanced Visual Design and Color Adjustment
by Anna Lewandowska, Agnieszka Olejnik-Krugly, Jarosław Jankowski and Malwina Dziśko
Sensors 2021, 21(24), 8502; https://doi.org/10.3390/s21248502 - 20 Dec 2021
Cited by 7 | Viewed by 3924
Abstract
Interactive environments create endless possibilities for the design of websites, games, online platforms, and mobile applications. Their visual aspects and functional characteristics influence the user experience. Depending on the project, the purpose of the environment can be oriented toward marketing targets, user experience, or accessibility. Often, these conflicting aspects must be integrated within a single project, and a search for trade-offs is needed. One of these conflicts involves a disparity between users' declared preferences and their real observed activity in terms of visual attention. Taking accessibility guidelines (WCAG) into account further complicates the problem. In our study, we focused on the analysis of color combinations and their contrast in terms of user-friendliness; visual intensity, which is important for attracting user attention; and the recommendations of the Web Content Accessibility Guidelines (WCAG). We took up the challenge of reducing the disparity between user preferences and WCAG contrast, on the one hand, and users' natural behavior registered with an eye-tracker, on the other. However, we left the choice of what is more important (human conscious reaction or objective user behavior results) to the designer. The former corresponds to user-friendliness, while the latter, visual intensity, is consistent with marketing expectations. The results show that the ranking of visual objects characterized by different levels of contrast differs across the perspectives of user experience, commercial goals, and objective recording. We also propose an interactive tool that assigns weights to each criterion to generate a ranking of objects. Full article
(This article belongs to the Section Biosensors)
Figure 1. A designer must take numerous factors into account. On one hand, their work implements their creative vision; on the other, business requirements, standards, and technological limitations must be considered. The fulfillment of business targets is measured by the attainment of a high click-through rate (CTR) [11] through the use of visual design to attract the user's attention, especially in the color domain (Case 1). Trends and standards such as the WCAG impose visual design adjustments for different social groups, e.g., by limiting the color palette to colors with a high contrast factor (Case 2), whereas the user follows their preferences and expects a friendly, aesthetic visual message that is simultaneously legible and comprehensible (Case 3). Thus, a solution is sought that balances user preferences, the attraction of user attention, and the fulfillment of WCAG standards. The dimensions used are subjective data, objective measurements, and color contrast. Data collected during the experiment are stored in a knowledge database called ColoUR DB (Color User Response Database). The designer decides which criteria are relevant for a given visual design and adjusts their weights appropriately. Based on the designer's settings, a color ranking is generated; it can be obtained for each separate criterion or as an integrated ranking covering all of the designer's settings. The results were used to develop and implement a tool for designers, which we call the ColoUR Balance Tool.
Figure 2. Overview of the forced-choice procedure used in the experiment. The observer's task was to select the most friendly and visible colors.
Figure 3. Example test images used in the experiment. The images were composed with gray as the primary background color and the remaining colors (blue, grey, green, orange, red, violet, white, and yellow) as secondary colors.
Figure 4. The AOIs defined to filter the ET signal for the displayed images.
Figure 5. The research procedure. The experiment with user participation allowed us to collect data on the time to first fixation for color pairs. The collected data were subjected to statistical analysis, resulting in the ColoUR DB, which contains data on users' natural eye fixation on color pairs and a contrast calculation for the same pairs. The color ranking is based on parameters entered by the designer, i.e., criteria and their weights, and can be generated per criterion or in an integrated form. The research result is a tool for designers: the ColoUR Balance Tool.
Figure 6. Color rankings for primary and secondary colors, developed from the ColoUR DB. The figure depicts the results for the 9 tested colors (A–I), arranged according to the level of user eye attraction. Each graph applies to a single primary color, e.g., (A) primary color black. The X axis lists the secondary colors; the Y axis represents normalized values obtained in the experiment. Each graph shows three plots: objective data recorded by the eye-tracker (magenta); the contrast calculated for the same color pairs according to the WCAG standard presented in the Methods section (black); and subjective data from users' responses obtained during the experiment, as presented in the Results section (blue).
Figure 7. Attracting eyesight to the displayed stimulus. (A) First slide; (B) empty (middle-gray) screen (2 s); (C) next slide. Top: the eyes remain on the picture following the previous task. Middle and Bottom: the eyes naturally rest on the image after the stimulus is displayed. To avoid situations where the user's sight remains on the stimulus of the previous task while changes between successively displayed images go unnoticed, an empty gray screen was shown for two seconds.
Figure 8. Kendall correlation between (Top) the subjective data from the experiment (user preferences) and the objective data (time to first fixation recorded by the eye-tracker); (Middle) the objective data and WCAG contrast; (Bottom) the subjective data and WCAG contrast.
Figure 9. The research results made available with the ColoUR Balance Tool.
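The WCAG contrast factor this study balances against user preference is a precisely defined quantity: the ratio (L1 + 0.05) / (L2 + 0.05) of the relative luminances of the lighter and darker colors, per the WCAG 2.x specification. A sketch of that standard computation for 8-bit sRGB colors:

```python
def _channel(c8):
    """Linearize one 8-bit sRGB channel (WCAG 2.x relative-luminance formula)."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance L of an (R, G, B) color, each channel 0-255."""
    r, g, b = (_channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical) up to 21:1 (white on black)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

WCAG 2.1 level AA requires a ratio of at least 4.5:1 for normal-size text; white on black attains the maximum of 21:1.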
21 pages, 2905 KiB  
Article
LightAnomalyNet: A Lightweight Framework for Efficient Abnormal Behavior Detection
by Abid Mehmood
Sensors 2021, 21(24), 8501; https://doi.org/10.3390/s21248501 - 20 Dec 2021
Cited by 14 | Viewed by 4131
Abstract
The continuous development of intelligent video surveillance systems has increased the demand for enhanced vision-based methods for the automated detection of anomalies in the various behaviors found in video scenes. Several methods have appeared in the literature that detect different anomalies by using the details of motion features associated with different actions. To enable the efficient detection of anomalies while characterizing the specific features of each behavior, the model complexity, and the computational expense it entails, must be reduced. This paper provides a lightweight framework (LightAnomalyNet) comprising a convolutional neural network (CNN) that is trained on input frames obtained by a computationally cost-effective method. The proposed framework effectively represents and differentiates between normal and abnormal events. In particular, this work defines human falls, some kinds of suspicious behavior, and violent acts as abnormal activities, and discriminates them from other (normal) activities in surveillance videos. Experiments on public datasets show that LightAnomalyNet outperforms existing methods in terms of classification accuracy and the cost of input frame generation. Full article
(This article belongs to the Section Sensor Networks)
Figure 1. The overall architecture of the proposed LightAnomalyNet framework.
Figure 2. (a) Process of generating SG3I images from sequential video frames; (b) example of SG3I generation for the URFD dataset.
Figure 3. Sample SG3I images generated for the Avenue (row 1), Mini-Drone Video (row 2), and Hockey Fights (row 3) datasets.
Figure 4. Proposed architecture of the lightweight CNN and the analysis of learnable parameters at each layer of the network. Note that the total of all learnable parameters for the proposed structure of the CNN is 7154.
Figure 5. Confusion matrix of the proposed framework on: (a) UR Fall dataset; (b) Avenue dataset; (c) Mini-Drone Video dataset; (d) Hockey Fights dataset.
Figure 6. ROC curve and AUC values for: (a) UR Fall dataset; (b) Avenue dataset; (c) Mini-Drone Video dataset; (d) Hockey Fights dataset.
Figure 7. Comparison of classification results on different splits of: (a) UR Fall dataset; (b) Avenue dataset; (c) Mini-Drone Video dataset; (d) Hockey Fights dataset.
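Reading the figure captions, the SG3I input appears to combine short runs of sequential grayscale frames into single images. Assuming SG3I means stacking three consecutive frames into the channels of one 3-channel image (an assumption about this paper, not a confirmed detail), a hypothetical generator might look like:

```python
import numpy as np

def sg3i(f0, f1, f2):
    """Stack three consecutive grayscale frames into the channels of one
    3-channel image, so short-term motion shows up as inter-channel
    differences. This is one plausible reading of SG3I, not the paper's code."""
    return np.stack([f0, f1, f2], axis=-1)

def sg3i_sequence(frames, step=3):
    """Turn a clip (list of HxW grayscale frames) into SG3I images,
    taking non-overlapping frame triples when step=3."""
    return [sg3i(*frames[i:i + 3]) for i in range(0, len(frames) - 2, step)]

# Toy clip of nine constant 4x4 frames with increasing brightness.
frames = [np.full((4, 4), i, dtype=np.uint8) for i in range(9)]
images = sg3i_sequence(frames)
```

The appeal of such an input over optical flow is exactly the cost argument in the abstract: stacking frames is nearly free, while flow estimation is expensive.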
15 pages, 1820 KiB  
Article
Resource Prediction-Based Edge Collaboration Scheme for Improving QoE
by Jinho Park and Kwangsue Chung
Sensors 2021, 21(24), 8500; https://doi.org/10.3390/s21248500 - 20 Dec 2021
Cited by 3 | Viewed by 2502
Abstract
Recent years have witnessed growth in Internet of Things (IoT) applications and devices; however, these devices are unable to meet the increased computational resource needs of the applications they host. Edge servers can provide sufficient computing resources, but when the number of connected devices is large, task processing efficiency decreases due to limited computing resources. Therefore, edge collaboration schemes that utilize other computing nodes to increase the efficiency of task processing and improve the quality of experience (QoE) have been proposed. However, existing edge server collaboration schemes achieve low QoE because they do not consider other edge servers' computing resources or communication time. In this paper, we propose a resource prediction-based edge collaboration scheme for improving QoE. We estimate computing resource usage based on the tasks received from the devices, and according to the predicted computing resources, the edge server probabilistically collaborates with other edge servers. The proposed scheme is based on a delay model and uses a greedy algorithm that allocates computing resources to each task considering both computation and buffering time. Experimental results show that the proposed scheme achieves a higher QoE than existing schemes because of its high success rate and low completion time. Full article
(This article belongs to the Section Communications)
Figure 1. Edge collaborative network environment.
Figure 2. Operation of the proposed scheme.
Figure 3. Flow chart of the collaboration decision in the proposed scheme.
Figure 4. Network topology of edge servers in our simulation.
Figure 5. QoE of collaboration schemes for Task 1: (a) success rate; (b) completion time; (c) processing time; (d) communication time.
Figure 6. QoE of collaboration schemes for various tasks: (a) success rate; (b) completion time; (c) processing time; (d) communication time.
Figure 7. Success rate of Task 1, Task 2, and Task 3: (a) Task 1; (b) Task 2; (c) Task 3.
Figure 8. Completion time of Task 1, Task 2, and Task 3: (a) Task 1; (b) Task 2; (c) Task 3.
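A delay model plus greedy allocation, as described in the abstract, boils down to choosing per task the node that minimizes the estimated completion time. The sketch below caricatures that idea; the server fields, units, and the additive delay terms are invented for illustration and are not the paper's model.

```python
def pick_server(task_cycles, task_bits, servers):
    """Greedy choice: route the task to the server minimizing estimated
    completion time = transmission + buffering + computation.
    servers: dicts with hypothetical fields 'bandwidth' (bits/s),
    'cpu' (cycles/s), and 'queued' (backlog already waiting, in cycles)."""
    def completion(s):
        tx = task_bits / s['bandwidth']    # communication time
        buf = s['queued'] / s['cpu']       # wait for queued work to drain
        proc = task_cycles / s['cpu']      # this task's computation time
        return tx + buf + proc
    return min(range(len(servers)), key=lambda i: completion(servers[i]))

# Slow link but idle CPU (A) vs. fast link but deep backlog (B).
servers = [
    {'bandwidth': 1e6, 'cpu': 1e9, 'queued': 0.0},   # A: 1.0 + 0 + 0.1 s
    {'bandwidth': 1e8, 'cpu': 1e9, 'queued': 5e9},   # B: 0.01 + 5 + 0.1 s
]
best = pick_server(1e8, 1e6, servers)
```

The point of modeling buffering explicitly is visible in the toy numbers: the server with the hundredfold faster link still loses once its queue is counted.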
13 pages, 25164 KiB  
Article
Strength Development Monitoring of Cemented Paste Backfill Using Guided Waves
by Wen He, Changsong Zheng, Shenhai Li, Wenfang Shi and Kui Zhao
Sensors 2021, 21(24), 8499; https://doi.org/10.3390/s21248499 - 20 Dec 2021
Cited by 3 | Viewed by 2916
Abstract
The strength of cemented paste backfill (CPB) directly affects mining safety and progress. At present, in-situ backfill strength is obtained by conducting uniaxial compression tests on backfill core samples; however, this is time-consuming, and the integrity of the samples cannot be guaranteed. Therefore, the guided wave technique, a nondestructive inspection method, is proposed for monitoring the strength development of cemented paste backfill. In this paper, the acoustic parameters of guided wave propagation in CPBs with different cement-tailings ratios (1:4, 1:8) and different curing times (within 42 d) were measured. Combined with the uniaxial compression strength (UCS) of the CPB, relationships between CPB strength and the guided wave acoustic parameters were established. The results indicate that as curing time increases, the guided wave velocity first decreases sharply while the attenuation of the guided waves increases dramatically; eventually, both velocity and attenuation stabilize. As CPB strength increases with curing time, the guided wave velocity shows an exponentially decreasing trend, while the guided wave attenuation shows an exponentially increasing trend. Based on the relationship curves between CPB strength and guided wave velocity and attenuation, the guided wave technique proves feasible for monitoring the strength development of CPB. Full article
Figure 1. Main chemical properties of the tailings.
Figure 2. The dimensions of specimens for guided wave testing. (a) Side view; (b) top view; (c) photograph.
Figure 3. Diagram of the guided wave testing system.
Figure 4. Pickup at the starting point and the time difference between the excitation and reception waves.
Figure 5. The high-pressure triaxial testing system.
Figure 6. Effect of excitation wave parameter n on the receiving wave. The characteristics of the received wave are observed by changing the excitation wave parameter n. The amplitude decay, wave packet shape, and overlap of the received wave are compared to optimize the parameter n.
Figure 7. Velocities at different guided wave frequencies: (a) sample A; (b) sample B.
Figure 8. Attenuation at different guided wave frequencies: (a) sample A; (b) sample B.
Figure 9. Relationship between UCS and velocity data for different cement-tailings ratios: quantification of UCS by wave velocity.
Figure 10. Relationship between UCS and attenuation data for different cement-tailings ratios: quantification of UCS by wave attenuation.
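Exponential strength-velocity and strength-attenuation relationships like those reported here are typically obtained by a log-linear least-squares fit. A generic sketch follows; the synthetic coefficients are invented for a self-check and have nothing to do with the paper's measured data.

```python
import numpy as np

def fit_exponential(x, y):
    """Fit y = a * exp(b * x) by linear least squares on log(y).
    Requires y > 0; returns (a, b)."""
    b, log_a = np.polyfit(x, np.log(y), 1)  # slope, intercept of log-model
    return np.exp(log_a), b

# Synthetic check with known parameters a=3.0, b=-1.5 (illustrative only).
v = np.linspace(1.0, 2.0, 10)      # stand-in "velocity" axis
ucs = 3.0 * np.exp(-1.5 * v)       # stand-in "strength" values
a, b = fit_exponential(v, ucs)
```

Fitting in log space weights relative rather than absolute errors, which is usually what one wants when the response spans an order of magnitude over curing time.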
18 pages, 1513 KiB  
Article
Lateral and Longitudinal Driving Behavior Prediction Based on Improved Deep Belief Network
by Lei Yang, Chunqing Zhao, Chao Lu, Lianzhen Wei and Jianwei Gong
Sensors 2021, 21(24), 8498; https://doi.org/10.3390/s21248498 - 20 Dec 2021
Cited by 6 | Viewed by 3514
Abstract
Accurately predicting driving behavior can help to avoid potential improper maneuvers by human drivers, thus guaranteeing safe driving for intelligent vehicles. In this paper, we propose a novel deep belief network (DBN), called MSR-DBN, formed by integrating a multi-target sigmoid regression (MSR) layer with a DBN to predict the front wheel angle and speed of the ego vehicle. Specifically, the MSR-DBN consists of two sub-networks: one for the front wheel angle and the other for speed. This MSR-DBN model allows one to optimize lateral and longitudinal behavior predictions through a systematic testing method. In addition, we consider the historical states of the ego vehicle and surrounding vehicles, as well as the driver's operations, as inputs to predict driving behaviors in a real-world environment. Comparison of the prediction results of MSR-DBN with a general DBN model, a back propagation (BP) neural network, support vector regression (SVR), and a radial basis function (RBF) neural network demonstrates that the proposed MSR-DBN outperforms the others in terms of accuracy and robustness. Full article
(This article belongs to the Section Vehicular Sensing)
Figure 1. Proposed driving behavior prediction system.
Figure 2. The typical DBN driving behavior prediction architecture.
Figure 3. Improved MSR-DBN prediction model.
Figure 4. Schematic diagram of the RBM and the training process for pre-training.
Figure 5. Data acquisition route.
Figure 6. Prediction errors for surrounding vehicles based on different methods.
Figure 7. Prediction results for the front wheel angle based on different methods.
Figure 8. Prediction results for the speed based on different methods.
Figure 9. Prediction results on the highD dataset. (a) Prediction result for lateral speed; (b) prediction result for longitudinal speed.
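A multi-target sigmoid regression output layer, as named in the abstract, can be read as a dense layer with a sigmoid squashing several regression targets (e.g., normalized front wheel angle and speed) into (0, 1). The sketch below uses random placeholder weights purely to show the shape of the computation; in the paper the weights would come from DBN pre-training and fine-tuning.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MultiTargetSigmoidRegression:
    """Toy multi-target sigmoid regression layer: maps a feature vector
    from the DBN's top hidden layer to several targets in (0, 1).
    Weights here are small random placeholders (an assumption for
    illustration, not trained parameters)."""

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_in, n_out))
        self.b = np.zeros(n_out)

    def predict(self, x):
        """x: feature vector of length n_in -> n_out values in (0, 1)."""
        return sigmoid(x @ self.W + self.b)

layer = MultiTargetSigmoidRegression(n_in=6, n_out=2)
y = layer.predict(np.ones(6))   # e.g. [normalized angle, normalized speed]
```

Because both outputs live in (0, 1), the targets must be min-max normalized before training and denormalized after prediction.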
19 pages, 6182 KiB  
Article
Hyperspectral Estimation of Winter Wheat Leaf Area Index Based on Continuous Wavelet Transform and Fractional Order Differentiation
by Changchun Li, Yilin Wang, Chunyan Ma, Fan Ding, Yacong Li, Weinan Chen, Jingbo Li and Zhen Xiao
Sensors 2021, 21(24), 8497; https://doi.org/10.3390/s21248497 - 20 Dec 2021
Cited by 22 | Viewed by 3509
Abstract
Leaf area index (LAI) is highly related to crop growth, but traditional LAI measurement methods are destructive to the field and cannot provide large-scale, continuous, real-time observations. In this study, fractional order differentiation and the continuous wavelet transform were used to process the canopy hyperspectral reflectance data of winter wheat; the fractional order differential spectral bands and wavelet energy coefficients most sensitive to LAI changes were screened by correlation analysis, and optimal subset regression and support vector machines were used to construct LAI estimation models for different growth stages. The precision evaluation showed that the best-performing LAI estimation models were those constructed with wavelet energy coefficients combined with a support vector machine at the jointing stage, fractional order differentiation combined with a support vector machine at the booting stage, and wavelet energy coefficients combined with optimal subset regression at the flowering and filling stages. Among these, the flowering and filling stages served as the best growth stages for LAI estimation, with modeling and validation R2 values of 0.87 and 0.71 (flowering) and 0.84 and 0.77 (filling), respectively. This study can provide a technical reference for the remote sensing-based estimation of crop LAI. Full article
Figure 1. Schematic diagram of the study area and experimental design.
Figure 2. Correlation analysis of the original spectrum and LAI at different growth stages.
Figure 3. Correlation analysis of the fractional differentiation spectrum and leaf area index at different growth stages.
Figure 4. Correlation matrix diagram of the selected fractional differentiation spectrum and leaf area index at different growth stages.
Figure 5. Optimal subset analysis of the selected fractional differentiation spectrum for estimating LAI.
Figure 6. Correlation analysis of the wavelet energy coefficient and leaf area index at different growth stages.
Figure 7. Correlation analysis of the selected wavelet energy coefficient and leaf area index at different growth stages.
Figure 8. Optimal subset analysis of the wavelet energy coefficient for estimating leaf area index at different growth stages.
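Fractional order differentiation of a sampled spectrum is commonly implemented with the Grünwald–Letnikov difference, whose weights follow a simple recursion. The sketch below is that standard numerical scheme under the assumption of a uniform band spacing h; it is generic numerics, not the study's code.

```python
import numpy as np

def gl_fractional_diff(y, alpha, h=1.0):
    """Grunwald-Letnikov fractional-order difference of order `alpha`
    for samples `y` on a uniform grid with spacing `h`.
    Weights: w0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j)."""
    n = len(y)
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    out = np.empty(n)
    for k in range(n):
        # Convolve the weights with the samples looking backward from k.
        out[k] = np.dot(w[:k + 1], y[k::-1]) / h ** alpha
    return out
```

Two sanity checks make the scheme easy to trust: with alpha = 1 the weights collapse to [1, -1, 0, ...] (the ordinary backward difference), and with alpha = 0 the operator is the identity; non-integer orders interpolate smoothly between such cases.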
17 pages, 1518 KiB  
Article
Collision-Aware Routing Using Multi-Objective Seagull Optimization Algorithm for WSN-Based IoT
by Preetha Jagannathan, Sasikumar Gurumoorthy, Andrzej Stateczny, Parameshachari Bidare Divakarachar and Jewel Sengupta
Sensors 2021, 21(24), 8496; https://doi.org/10.3390/s21248496 - 20 Dec 2021
Cited by 71 | Viewed by 3503
Abstract
In recent years, wireless sensor networks (WSNs) have become popular because of their low cost, simple structure, reliability, and developments in the communication field. The Internet of Things (IoT) refers to the interconnection of everyday objects and the sharing of information through the Internet. Congestion in networks leads to transmission delays and packet loss and wastes time and energy on recovery. Routing protocols that adapt to the congestion status of the network can greatly improve network performance. In this research, collision-aware routing using the multi-objective seagull optimization algorithm (CAR-MOSOA) is designed to meet the efficiency requirements of a scalable WSN. The proposed protocol exploits a clustering process to choose cluster heads that transfer data from source to endpoint, thus forming a scalable network and improving the performance of the CAR-MOSOA protocol. The proposed CAR-MOSOA is simulated and examined using the NS-2.34 simulator owing to its modularity and inexpensiveness. The results of CAR-MOSOA are comprehensively compared with existing algorithms such as fully distributed energy-aware multi-level (FDEAM) routing, the energy-efficient optimal multi-path routing protocol (EOMR), tunicate swarm grey wolf optimization (TSGWO), and CoAP simple congestion control/advanced (CoCoA). The simulation results of the proposed CAR-MOSOA for 400 nodes are as follows: energy consumption, 33 J; end-to-end delay, 29 s; packet delivery ratio, 95%; and network lifetime, 973 s, all improved compared to FDEAM, EOMR, TSGWO, and CoCoA. Full article
(This article belongs to the Section Internet of Things)
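The abstract above balances four competing metrics: energy, end-to-end delay, packet delivery ratio, and network lifetime. As a rough illustration of how a multi-objective optimizer might score candidate routes on such metrics (the weights, bounds, and normalization below are assumptions for illustration, not the paper's CAR-MOSOA formulation):

```python
# Illustrative multi-objective route score for congestion-aware WSN routing.
# Weights and normalization bounds are assumptions, not the paper's values.

def route_score(energy_j, delay_s, pdr, lifetime_s,
                weights=(0.25, 0.25, 0.25, 0.25),
                max_energy=100.0, max_delay=60.0, max_lifetime=1200.0):
    """Composite cost, lower is better; each metric is normalized to [0, 1]."""
    w_e, w_d, w_p, w_l = weights
    cost = (w_e * (energy_j / max_energy)               # minimize energy
            + w_d * (delay_s / max_delay)               # minimize delay
            + w_p * (1.0 - pdr)                         # maximize delivery ratio
            + w_l * (1.0 - lifetime_s / max_lifetime))  # maximize lifetime
    return cost

# Metrics reported in the abstract for CAR-MOSOA at 400 nodes:
score = route_score(33, 29, 0.95, 973)
```

A seagull-style optimizer would then search for routes minimizing such a score; the paper's actual objective combination is not reproduced here.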
Figure 1: Proposed multi-tier framework.
Figure 2: Network model of WSN-IoT.
Figure 3: Overall block diagram of WSN-IoT.
Figure 4: Performance based on energy consumption.
Figure 5: Performance based on end-to-end delay.
Figure 6: Performance based on packet delivery ratio.
Figure 7: Performance based on network lifetime.
15 pages, 2705 KiB  
Article
Relative Pose Determination of Uncooperative Spacecraft Based on Circle Feature
by Yue Liu, Shijie Zhang and Xiangtian Zhao
Sensors 2021, 21(24), 8495; https://doi.org/10.3390/s21248495 - 20 Dec 2021
Cited by 5 | Viewed by 3082
Abstract
This paper investigates the problem of spacecraft relative navigation with respect to an unknown target during close-proximity operations in an on-orbit servicing system. The servicing spacecraft is equipped with a Time-of-Flight (ToF) camera for object recognition and feature detection. A fast and robust relative navigation strategy is presented that requires no prior information about the target, using its natural circle features. The architecture of the proposed strategy consists of three ingredients. First, a point cloud segmentation method based on an auxiliary gray image is developed for fast extraction of the target's circle-feature point cloud. Second, a new parameter-fitting method for circle features is proposed, in which the circle feature is computed by two different geometric models and the results are fused. Finally, a specific definition of the coordinate frame system is introduced to solve the relative pose with respect to the uncooperative target. To validate the efficiency of the segmentation, an experimental test is conducted on real-time image data acquired by the ToF camera; total time consumption is reduced by 94%. In addition, numerical simulations are carried out to evaluate the proposed navigation algorithm, which shows good robustness under different levels of noise. Full article
(This article belongs to the Special Issue Instrument and Measurement Based on Sensing Technology in China)
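The core geometry behind circle-feature pose estimation is that a 3-D circle's supporting plane gives the feature's axis (normal) and its centroid gives the center. A minimal SVD-based sketch of this idea follows; it is a generic geometric illustration, not the paper's two-model fitting-and-fusion method:

```python
# Sketch: recover a circle feature's center and axis from a segmented 3-D
# point cloud via least-squares plane fitting (SVD). Generic illustration only.
import numpy as np

def fit_circle_plane(points):
    """points: (N, 3) array sampled on/near a 3-D circle.
    Returns (center, unit normal, radius estimate)."""
    center = points.mean(axis=0)
    # The plane normal is the direction of least variance of the centered cloud.
    _, _, vt = np.linalg.svd(points - center)
    normal = vt[-1]
    radius = np.linalg.norm(points - center, axis=1).mean()
    return center, normal, radius

# Synthetic circle of radius 0.5 m lying in the plane z = 1.2 m:
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
pts = np.stack([0.5 * np.cos(t), 0.5 * np.sin(t), np.full_like(t, 1.2)], axis=1)
c, n, r = fit_circle_plane(pts)
```

The recovered normal fixes two attitude angles of the target (yaw and pitch relative to the camera), which is why the paper devotes a figure to solving Euler angles from the normal vector.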
Figure 1: Different types of ellipse-detection results.
Figure 2: Data from the Time-of-Flight (ToF) camera. (a) Captured gray image; (b) captured point cloud in the field of view (76,800 points); the circle feature accounts for only a small part of the point cloud.
Figure 3: Illustration of coordinate frames and related vectors.
Figure 4: Definition of the coordinate system.
Figure 5: Schematic diagram of using the normal vector to solve the Euler angles.
Figure 6: Data from the ToF camera. (a) Point cloud (76,800 points); (b) gray image; (c) depth image.
Figure 7: Flowchart of circle-feature point cloud extraction.
Figure 8: Examples of good ellipse-detection results based on arc support.
Figure 9: Gray image and depth image corresponding to the same area.
Figure 10: Axis fitting by the intersecting line.
Figure 11: Direct segmentation result of the point cloud. (a) Result after planar background segmentation; (b) result after clustering segmentation.
Figure 12: (a) Result of ellipse detection; (b) region of interest.
Figure 13: (a) Result of point cloud segmentation assisted by gray images; (b) result after clustering segmentation.
Figure 14: (a) Final results of direct segmentation; (b) final results of segmentation assisted by the gray image.
Figure 15: Standard model of the circle feature.
Figure 16: Errors of angles. (a) Yaw angle; (b) pitch angle; (c) included angle.
Figure 17: Standard model of the circle feature.
14 pages, 1108 KiB  
Article
The Compact Support Neural Network
by Adrian Barbu and Hongyu Mou
Sensors 2021, 21(24), 8494; https://doi.org/10.3390/s21248494 - 20 Dec 2021
Cited by 2 | Viewed by 2895
Abstract
Neural networks are popular and useful in many fields, but they have the problem of giving high-confidence responses for examples that are far from the training data. This makes neural networks very confident in their predictions even while making gross mistakes, limiting their reliability in safety-critical applications such as autonomous driving and space exploration. This paper introduces a novel neuron generalization that has the standard dot-product-based neuron and the radial basis function (RBF) neuron as two extreme cases of a shape parameter. Using a rectified linear unit (ReLU) as the activation function results in a novel neuron that has compact support, meaning its output is zero outside a bounded domain. To address the difficulties in training the proposed neural network, the paper introduces a novel training method that takes a pretrained standard neural network and fine-tunes it while gradually increasing the shape parameter to the desired value. The theoretical findings of the paper are a bound on the gradient of the proposed neuron and a proof that a neural network with such neurons has the universal approximation property, i.e., the network can approximate any continuous and integrable function with an arbitrary degree of accuracy. The experimental findings on standard benchmark datasets show that the proposed approach has smaller test errors than state-of-the-art competing methods and outperforms them in detecting out-of-distribution samples on two out of three datasets. Full article
(This article belongs to the Section Sensing and Imaging)
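The shape-parameter idea can be conveyed with a minimal sketch: blend a dot-product response with an RBF-type response and truncate with ReLU, so the output becomes exactly zero far from the training data as the shape parameter approaches 1. Note this is a plain convex blend for illustration, not the paper's exact construction (its Equation (2) recenters the RBF at w/α); the radius and blending scheme here are assumptions:

```python
# Illustrative compact-support neuron: a convex blend of a dot-product neuron
# and a truncated RBF neuron. NOT the paper's Equation (2); for intuition only.
import numpy as np

def compact_support_neuron(x, w, b=0.0, radius=1.0, alpha=0.5):
    """alpha = 0: standard ReLU dot-product neuron.
    alpha = 1: truncated RBF neuron, zero outside ||x - w|| <= radius."""
    dot = np.dot(w, x) + b                     # standard neuron response
    rbf = radius ** 2 - np.sum((x - w) ** 2)   # RBF-type response
    return max((1.0 - alpha) * dot + alpha * rbf, 0.0)  # ReLU truncation

w = np.array([0.0, 2.0])
standard = compact_support_neuron(np.array([1.0, 1.0]), w, alpha=0.0)
compact = compact_support_neuron(np.array([10.0, 10.0]), w, alpha=1.0)
```

At alpha = 1 the response is zero everywhere outside a ball around w, which is what prevents confident predictions far from the training data.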
Figure 1: (a) 1D example: comparison between the RBF neuron y = exp(−|x − 2|²) and the compact support neurons y = f_α(x, 2, 0, 1) of Equation (2) for α ∈ {0, 0.8, 1}. (b) 2D example: the construction of Equation (2) smoothly interpolates between a standard neuron (α = 0) and an RBF-type neuron (α = 1); shown are the decision boundaries of f_α(x, w, 0, 1) with x = (x₁, x₂) and w = (0, 2) for α ∈ {0, 0.1, 0.5, 0.8, 1}, with the corresponding centers w/α marked "*".
Figure 2: (a) A simple compact support neural network (CSNN), with the CSN layer described in Equation (4). (b) A CSNN-F with a LeNet backbone, where all layers are trainable.
Figure 3: Confidence map (white for 0.5, black for 1) of the trained CSNN on the moons dataset for different values of α ∈ [0, 1].
Figure 4: CSNN train and test errors, and AUROC for OOD detection, vs. α for the moons data.
Figure 5: Example of activation-pattern domains for a regular NN and a CSNN (α = 0.825), and the resulting confidence map (white for 0.5, black for 1) at α = 0.825 for a 32-neuron, 2-layer CSNN.
Figure 6: Train and test errors, and area under the ROC curve (AUROC) for OOD detection, vs. α for CSNN classifiers trained on three real datasets; plots obtained from one training run.
16 pages, 3400 KiB  
Article
Thermal Drift Correction for Laboratory Nano Computed Tomography via Outlier Elimination and Feature Point Adjustment
by Mengnan Liu, Yu Han, Xiaoqi Xi, Siyu Tan, Jian Chen, Lei Li and Bin Yan
Sensors 2021, 21(24), 8493; https://doi.org/10.3390/s21248493 - 20 Dec 2021
Cited by 4 | Viewed by 2830
Abstract
Thermal drift in nano-computed tomography (CT) adversely affects the accurate reconstruction of objects. However, feature-based reference-scan correction methods are sometimes unstable for images with similar texture and low contrast. In this study, a rough-to-refined rigid alignment method based on the geometric position of features and the structural similarity (SSIM) of projections is proposed to align the projections, reducing thermal drift artifacts in the reconstructed slices. First, initial features are obtained by speeded-up robust features (SURF). Then, outliers are roughly eliminated using the geometric position of global features. The features are refined by the SSIM between the main and reference projections. Subsequently, the SSIM between the neighborhood images of the features is used to relocate the features. Finally, the new features are used to align the projections. Two-dimensional (2D) transmission imaging experiments reveal that the proposed method provides more accurate and robust results than the random sample consensus (RANSAC) and locality preserving matching (LPM) methods. For three-dimensional (3D) imaging correction, the proposed method is compared with the commonly used enhanced correlation coefficient (ECC) method and the single-step discrete Fourier transform (DFT) algorithm. The results reveal that the proposed method retains details more faithfully. Full article
(This article belongs to the Section Sensing and Imaging)
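The outlier-elimination step can be illustrated under the simplifying assumption that thermal drift is a pure translation between the main and reference projections, so all correct feature matches share (nearly) the same displacement vector. The median-consensus filter below is a simplified stand-in for the paper's angle-based (ASIM) criterion; the tolerance and data are illustrative assumptions:

```python
# Sketch: reject mismatched features by their deviation from the consensus
# (median) displacement. Simplified stand-in for the paper's ASIM criterion.
import numpy as np

def eliminate_outliers(pts_main, pts_ref, tol=1.5):
    """pts_main, pts_ref: (N, 2) matched feature coordinates (pixels).
    Returns a boolean inlier mask and the estimated drift vector."""
    disp = pts_ref - pts_main                  # per-match displacement
    med = np.median(disp, axis=0)              # robust drift estimate
    dist = np.linalg.norm(disp - med, axis=1)  # deviation from consensus
    inliers = dist < tol
    drift = disp[inliers].mean(axis=0)
    return inliers, drift

rng = np.random.default_rng(0)
good = rng.uniform(0, 100, (20, 2))
pts_main = np.vstack([good, [[10, 10], [50, 50]]])
pts_ref = pts_main + np.array([2.0, -1.0])     # true drift of (2, -1) px
pts_ref[-2:] += [[15, 9], [-8, 20]]            # two gross mismatches
mask, drift = eliminate_outliers(pts_main, pts_ref)
```

The estimated drift can then be used to shift each projection before reconstruction; the paper additionally refines feature positions with SSIM before this alignment step.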
Figure 1: Workflow of the proposed method.
Figure 2: Schematic diagram of feature elimination and relocation based on ASIM. (a) Initial feature-matching relationship, where blue and red lines represent inliers and outliers, respectively. (b) Calculation of the feature angle A(t_main_i, t_main_j) between features t_main_i and t_main_j; only three dots are connected for clarity. (c) Feature position adjustment, where hollow circles represent original feature positions and solid circles the new optimal locations.
Figure 3: 2D transmission imaging results. Rows 1 through 4 show matching results for the wasp, cabbage seed, bamboo stick, and star card; columns 1 through 4 show the results obtained using SURF, RANSAC, LPM, and ASIM.
Figure 4: Evaluation of features eliminated by RANSAC, LPM, and ASIM. (a) Average error of projection alignment. (b) Precision of feature elimination (correct proportion of the refined feature points). (c) Accuracy of feature elimination (correct proportion of all predicted points).
Figure 5: Robustness test of the elimination methods. Each column corresponds to a sample listed in Table 3; within each column, error results for different initial numbers of features are consolidated into one bin. The mean error is marked "∘", the maximum and minimum values "*", and the top and bottom box borders represent the 0.75 and 0.25 quantiles.
Figure 6: RMSE at different noise levels (5%, 10%, and 15%) after correction by RANSAC, LPM, and ASIM. (a) Wasp sample; (b) cabbage seed sample. The proposed method (red curve) achieves the most stable results.
Figure 7: Drift calculation results. (a,b) Vertical and horizontal drifts in the electronic-component scanning experiment; (c,d) vertical and horizontal drifts in the cabbage seed scanning experiment.
Figure 8: Correction results for the electronic component. (a) Uncorrected; (b) ECC; (c) single-step DFT; (d) RANSAC; (e) LPM; (f) ASIM.
Figure 9: Correction results of different alignment methods for the cabbage seed. (a) Uncorrected; (b) ECC; (c) single-step DFT; (d) RANSAC; (e) LPM; (f) ASIM.
12 pages, 1421 KiB  
Article
Assessing Post-Driving Discomfort and Its Influence on Gait Patterns
by Marko M. Cvetkovic, Denise Soares and João Santos Baptista
Sensors 2021, 21(24), 8492; https://doi.org/10.3390/s21248492 - 20 Dec 2021
Cited by 1 | Viewed by 4788
Abstract
Professional drivers need constant attention during long driving periods and sometimes perform tasks outside the truck. Driving discomfort may explain inattention, but it does not explain post-driving accidents outside the vehicle. This study investigates the discomfort developed during driving by analysing modified preferred postures, pressure applied at the interface with the seat, and changes in pre- and post-driving gait patterns. Each of forty-four volunteers drove for two hours in a driving simulator. Based on the changes in walking speed between the two gait assessments, three homogeneous study groups were identified: two groups walked faster in the post-driving gait, while one walked slower. While driving, the pressure at the interface and the area covered over the seat increased throughout the sample. Preferred driving postures differed between groups. No statistical differences were found between the groups in the angles between segments (flexed and extended). Long-duration driving develops local or whole-body discomfort, increasing interface pressure over time; drivers try to compensate by modifying their posture. After long steering periods, a change in gait patterns can be observed. These behaviours may result from the impairment of blood circulation caused by increasing pressure at the seat interface. Full article
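Grouping participants into homogeneous clusters from their pre- vs. post-driving walking-speed change can be sketched with a small 1-D k-means (k = 3). The data and procedure below are illustrative assumptions, not the study's actual clustering method:

```python
# Sketch: form three groups from per-driver walking-speed changes with a
# tiny 1-D k-means. Data and method are illustrative, not the study's.

def kmeans_1d(values, k=3, iters=50):
    values = sorted(values)
    # Spread the initial centers across the sorted value range.
    centers = [values[round(i * (len(values) - 1) / (k - 1))] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# Hypothetical post-minus-pre walking-speed changes (m/s) for ten drivers:
speed_change = [0.12, 0.15, 0.10, 0.02, 0.01, -0.08, -0.11, 0.13, -0.09, 0.03]
centers, groups = kmeans_1d(speed_change)
```

Here negative values correspond to a group that slowed down after driving and positive values to groups that sped up, mirroring the three groups described above.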
Figure 1: Definition of the lower-limb areas.
Figure 2: Preferred driving postures of the subgroups at the initial and final recordings (C1–C3 = Clusters 1–3). (a) 5th minute of steering; (b) 120th minute of steering.
Figure 3: Interface-pressure variables of the total seat-pan area during prolonged driving (C1–C3 = Clusters 1–3; statistically significant differences indicated as ** p < 0.001). (a) Average interface pressure at minutes 5 and 120; (b) average contact area at minutes 5 and 120.
Figure 4: Applied average interface pressure by cluster group, with statistical differences indicated (* p < 0.05, ** p < 0.001). (a) Definition of the lower-limb zones; (b) Cluster 1; (c) Cluster 2; (d) Cluster 3.
24 pages, 5342 KiB  
Review
Recent Advances in Aptasensor for Cytokine Detection: A Review
by Jinmyeong Kim, Seungwoo Noh, Jeong Ah Park, Sang-Chan Park, Seong Jun Park, Jin-Ho Lee, Jae-Hyuk Ahn and Taek Lee
Sensors 2021, 21(24), 8491; https://doi.org/10.3390/s21248491 - 20 Dec 2021
Cited by 22 | Viewed by 5598
Abstract
Cytokines are proteins secreted by immune cells. They promote cell signal transduction and are involved in cell replication, death, and recovery. Cytokines are immune modulators, but their excessive secretion causes uncontrolled inflammation that attacks normal cells. Given these properties, monitoring the secretion of cytokines in vivo is of great value for medical and biological research. In this review, we report on recent studies of cytokine detection, especially aptasensors based on aptamers. Aptamers are single-stranded nucleic acids that fold into a stable three-dimensional structure; they have been receiving attention for characteristics such as simple production, low molecular weight, and ease of modification, while performing a physiological role similar to that of antibodies. Full article
(This article belongs to the Special Issue Field Effect Transistor (FET)-Based Biosensors)
Figure 1: Schematic diagram of an electrochemical sensor.
Figure 2: (a) Schematic illustration of a TNF-α electrochemical biosensor. (b) CV at different TNF-α concentrations diluted with PBS buffer in 10 mM HEPES and 5 mM [Fe(CN)6]3−/4−. (c) CV at different TNF-α concentrations diluted with 10% human serum in the same buffer. Reprinted with permission from [47], copyright 2021 Elsevier. (d) Schematic illustration of an IFN-γ electrochemical aptasensor. (e) DPV at different IFN-γ concentrations. (f) Linear regression curve for different IFN-γ concentrations. Reprinted with permission from [48], copyright 2012 Elsevier.
Figure 3: (a) Schematic illustration of an IL-6 electrochemical aptasensor. (b) EIS at different IL-6 concentrations. Reprinted with permission from [50], copyright 2019 Elsevier. (c) Schematic illustration of an IFN-γ impedance electrochemical aptasensor. (d) EIS at different IFN-γ concentrations. Reprinted with permission from [51], copyright 2019 Elsevier.
Figure 4: Schematic diagram of an optical sensor.
Figure 5: (a) Fluorescence emission spectra at different IFN-γ concentrations and the relationship between fluorescence intensity and IFN-γ concentration [58]. (b) Schematic illustration of ssAptamers and dsAptamers anchored on an rGO nanosheet [59]. (c) Fluorescence sensing mechanism for IFN-γ [60]. (d) Schematic representation of an IFN-γ optical aptasensor [61]. (e) Transmission spectra of the LSPR chip before (black solid line) and after (red dashed line) AuNRs were immobilized on the chip surface [62]. (f) SERS spectra using an aptamer-modified Au NP array substrate and the corresponding SERS intensity ratios (I660/I736) versus IL-6 concentration for the standard curve [63].
Figure 6: Schematics of (a) a capacitive biosensor and (b) an FET-based biosensor.
Figure 7: (a) Schematic of aptamer immobilization and protein association: (left) aptamer-immobilized surface through a phosphate-amino covalent linkage; (right) surface after incubation with PDGF-BB. (b) Concentration profile for impedance sensing of protein–aptamer interactions; anti-PDGF-BB aptamer-modified silica wafers incubated with increasing PDGF-BB concentrations (1, 2, 5, 10, and 50 μg/mL), with calibration plots in linear and logarithmic scale [83]. (c) Schematic diagram of an AAO-based capacitive sensor, assembled from bottom to top. (d) Calibration curve of the AAO-based capacitive sensor; the red dashed line indicates the cut-off concentration of 15 pg/mL IFN-γ for LTBI, with the LOD estimated assuming 3.3 × SD ≈ 0.46% [84]. (e,f) Relative percent changes in capacitance at different frequencies (65, 95, 120, and 212 MHz) for the primary complex (aptasensor, before sandwiching) and the secondary complex (apta-immunosensor, after sandwiching with MB-Abs); insets illustrate the aptamer–VEGF complex before and after sandwiching [85].
Figure 8: (a) Schematic of an aptamer-functionalized graphene FET biosensor. (b) Transfer characteristic curves at different TNF-α concentrations. (c) Normalized Dirac point shift ΔV_Dirac/ΔV_Dirac,max in response to TNF-α (0.1, 5, and 500 nM) and the control proteins (IFN-γ, IL-002, and BSA) [91]. (d) Schematic of protein capture with specific antibodies on a crumpled graphene channel. (e) Dirac voltage shift of the FET sensor detecting IL-6 [92]. (f) Diagram of PASE immobilization under an applied negative electric field: with the field applied through an inserted Ag/AgCl electrode, PASE molecules orient with their pyrenyl groups toward the graphene surface due to electrostatic repulsion, increasing the density of PASE anchored through π–π stacking. (g) Dirac point shift ΔV_Dirac/ΔV_Dirac-0 as a function of the applied field voltage, measured after graphene immersion in 5 mM PASE at ~25 °C for 3 h without and with the applied field. (h) EDS characterization of graphene surfaces without (top) and with (bottom) the applied field during PASE and aptamer immobilization; white dots represent phosphorus, a main constituent of the aptamer that is absent from PASE. Scale bar: 1 μm [93].
Figure 9: (a) Schematic diagram of a label-free electrochemical biosensor [52]. (b) Schematic of an LSPR device for cytokine detection [106].
Figure 10: (a) Schematic illustration of an optofluidic platform for single-cell analysis. (b) Schematic of the microfluidic system design and valve actuation mechanism. (c) Photograph of the integrated device. (d) EOT resonance peak produced by the nanohole array; analyte binding shifts the resonance wavelength through the change in refractive index at the plasmonic surface. (e) SEM image of an EL4 cell on the nanohole array [114].
Figure 11: (a) Schematic of an MIP-based biosensor for cytokine detection and (b) its fabrication steps [120]. (c) Schematic of an SWV-based IFN-γ detection biosensor and (d) evaluation of sensor regeneration through urea washing [121].
Figure 12: (a) Schematic diagram of a flexible GFET biosensor and (b) the fabricated biosensor stretched with the human body [123]. (c) Schematic diagram of a graphene-Nafion FET biosensor; (d) photograph of the fabricated biosensor mounted on the human body; (e) normalized Dirac point shift in undiluted human sweat; (f) photograph of the crumpling process; (g) sensing response at various cytokine concentrations over crumpling cycles [127].
11 pages, 14656 KiB  
Article
Limitations of Muscle Ultrasound Shear Wave Elastography for Clinical Routine—Positioning and Muscle Selection
by Alyssa Romano, Deborah Staber, Alexander Grimm, Cornelius Kronlage and Justus Marquetand
Sensors 2021, 21(24), 8490; https://doi.org/10.3390/s21248490 - 20 Dec 2021
Cited by 27 | Viewed by 5242
Abstract
Shear wave elastography (SWE) is a clinical ultrasound imaging modality that enables non-invasive estimation of tissue elasticity. However, various methodological factors—such as vendor-specific implementations of SWE, mechanical anisotropy of tissue, varying anatomical position of muscle and changes in elasticity due to passive muscle stretch—can confound muscle SWE measurements and increase their variability. A measurement protocol with a low variability of reference measurements in healthy subjects is desirable to facilitate diagnostic conclusions on an individual-patient level. Here, we present data from 52 healthy volunteers in the areas of: (1) Characterizing different limb and truncal muscles in terms of inter-subject variability of SWE measurements. Superficial muscles with little pennation, such as biceps brachii, exhibit the lowest variability whereas paravertebral muscles show the highest. (2) Comparing two protocols with different limb positioning in a trade-off between examination convenience and SWE measurement variability. Repositioning to achieve low passive extension of each muscle results in the lowest SWE variability. (3) Providing SWE shear wave velocity (SWV) reference values for a specific ultrasound machine/transducer setup (Canon Aplio i800, 18 MHz probe) for a number of muscles and two positioning protocols. We argue that methodological issues limit the current clinical applicability of muscle SWE. Full article
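Inter-subject variability per muscle, as characterized in point (1) above, is commonly summarized by the coefficient of variation (CV = standard deviation / mean). A minimal sketch follows; the muscle readings below are illustrative placeholders, not the paper's reference data:

```python
# Sketch: rank muscles by inter-subject variability of shear wave velocity
# (SWV) using the coefficient of variation. Values are hypothetical.
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation divided by the mean."""
    return statistics.stdev(values) / statistics.mean(values)

# Hypothetical per-subject SWV readings (m/s):
swv = {
    "biceps brachii":    [1.80, 1.90, 1.85, 1.95, 1.90],  # superficial, little pennation
    "erector spinae L3": [2.00, 2.90, 1.60, 3.40, 2.40],  # paravertebral
}
ranked = sorted(swv, key=lambda m: coefficient_of_variation(swv[m]))
```

Ranking muscles this way reproduces the qualitative pattern reported above: superficial, minimally pennate muscles show a lower CV than paravertebral muscles.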
Figure 1. Representation of the muscle measurement locations with the average SWV in m/s and the corresponding standard deviation (±) for Protocol 1 (A) and Protocol 2 (B), and a comparison of the variance between them (C): the variances of measurements acquired under Protocol 2 (optimized, rigid SWE protocol) were significantly lower (p &lt; 0.001) than under Protocol 1 (clinical feasibility). DE = deltoideus; BB = biceps brachii; ECR = extensor carpi radialis; FDP = flexor digitorum profundus; TR = triceps brachii; MU (C8) = multifidus (C8); ES (Th10) = erector spinae (Th10); ES (L3) = erector spinae (L3); VA = vastus lateralis; BF = biceps femoris (caput longum); TA = tibialis anterior; GCM = gastrocnemius (caput mediale).
Figure A1. SWE images from the examination of a 32-year-old male under Protocol 2 (optimized, rigid SWE protocol), showing the deltoid (DE), flexor digitorum profundus (FDP), tibialis anterior (TA), biceps brachii (BB), vastus lateralis (VA), triceps brachii (TR), biceps femoris caput longum (BF), extensor carpi radialis (ECR), and gastrocnemius caput mediale (GCM) muscles. On the left side of each image, the gray-scale B-mode image is overlaid with SWV data in color; cooler colors such as blue depict slower shear wave speeds, typically 0–6 m/s. As intended under Protocol 2, the muscles were positioned in optimally relaxed states to avoid strain, demonstrated by the consistent blue coloring. On the right side of each image, the shape of the shear waves is displayed with lines: blue lines mark the origin of the shear waves and red lines their change as they propagate through the muscle. The greater acquisition depth required for FDP and DE is visible, and the subcutaneous fat layer above the DE was typically thicker than for TA, BB, and VA. Because of its greater pennation angle, the muscle fibers of the DE could not be displayed optimally parallel in the longitudinal plane, whereas the fiber paths of FDP, TA, BB, and VA were displayed well.
19 pages, 12740 KiB  
Article
Design of a New Seismoelectric Logging Instrument
by Liangchen Zhang, Xiaodong Ju, Junqiang Lu, Baiyong Men and Weiliang He
Sensors 2021, 21(24), 8489; https://doi.org/10.3390/s21248489 - 20 Dec 2021
Cited by 1 | Viewed by 2298
Abstract
To increase the accuracy of reservoir evaluation, a new type of seismoelectric logging instrument was designed. The tool comprises a newly developed sonde-structured array complex and includes several modules (a signal excitation module, a data acquisition module, a phased array transmitting module, an impedance matching module, and a main system control circuit), interconnected through a high-speed tool bus to form a distributed architecture. UC/OS-II was used for real-time system control. After constructing a prototype of the seismoelectric logging detector, its performance was verified in the laboratory. The results showed that the consistency between the multi-channel received waveform amplitudes and the benchmark spectrum was more than 97%. The binary phased linear array transmitting module can realize 0° to 20° beam deflection and directional radiation. Finally, a field test was conducted to verify the tool's performance in downhole conditions; its results proved the effectiveness of the developed seismoelectric logging tool. Full article
(This article belongs to the Special Issue Sensors in Electronic Measurement Systems)
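The 0° to 20° beam deflection reported above follows the standard delay-steering relation for a uniform linear array, sin θ = cΔt/d. The sketch below illustrates this relation; the sound speed, element spacing, and operating frequency are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Delay steering of an n-element uniform linear array.
# All numerical values here are illustrative assumptions.
C = 1500.0   # m/s, sound speed in borehole fluid (assumed)
D = 0.05     # m, element spacing (assumed)

def deflection_angle(dt):
    """Steering angle in degrees for inter-element delay dt: sin(theta) = C*dt/D."""
    return np.degrees(np.arcsin(C * dt / D))

def array_factor(theta_deg, n, f, dt=0.0):
    """Normalized far-field array factor of n elements fired with delay steps dt."""
    k = 2 * np.pi * f / C                # wavenumber
    m = np.arange(n)
    # element phase: propagation path difference minus the applied delay
    phases = k * D * m * np.sin(np.radians(theta_deg)) - 2 * np.pi * f * dt * m
    return np.abs(np.exp(1j * phases).sum()) / n

theta = deflection_angle(10e-6)                    # ~17.5 degrees with these values
af = array_factor(theta, n=4, f=20e3, dt=10e-6)    # = 1: beam peak lies at theta
```

With Δt = 10, 20, and 30 µs this relation predicts increasing deflection, matching the trend of measured versus theoretical deflection angles reported in the paper; the absolute angles depend on the actual element spacing and sound speed.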
Figure 1. (a) Structure diagram of the seismoelectric logging detector; (b) the internal electronic system of the instrument.
Figure 2. The CAN communication workflow.
Figure 3. Schematic diagram of the acoustic excitation module circuit.
Figure 4. Schematic diagram of the electric excitation module circuit.
Figure 5. The preamplifier circuit of the tool.
Figure 6. (a) Equivalent circuit of the transformer excitation transducer; (b) test results of the transducer impedance.
Figure 7. Structure of the impedance matching network designed in ADS software.
Figure 8. Diagram of a linear array sound source.
Figure 9. Directivity graphs for different numbers of array elements: (a) n = 3; (b) n = 4; (c) n = 5; (d) n = 6.
Figure 10. Schematic diagram of the transmitting transducer performance test.
Figure 11. (a) Waveforms received by the hydrophone (0° to 120°); (b) horizontal directivity diagram of the transmitter.
Figure 12. (a) Waveform registered at a zero angle and (b) its Fourier spectrum.
Figure 13. (a) Waveform without delay and (b) vertical directivity of the transmitter.
Figure 14. Waveform (left) and vertical directivity (right) for a (a) 10 µs, (b) 20 µs, and (c) 30 µs delay.
Figure 15. (a) Normalized vertical directivities of linear phased arrays; (b) measured and theoretically computed deflection angles.
Figure 16. CPLD module diagram of the transformer excitation control program.
Figure 17. (a) Schematic diagram of the transformer excitation circuit; (b) driving waveform of the transformer excitation.
Figure 18. Waveforms received by transducers (a) R1, (b) R2, and (c) R3; (d) normalized horizontal directivities of the three transducers.
Figure 19. (a) Waveform and (b) spectrum curve of the receiver.
Figure 20. Data processing results of seismoelectric logging signals in some well sections.
17 pages, 9511 KiB  
Article
Autonomous UAV System for Cleaning Insulators in Power Line Inspection and Maintenance
by Ricardo Lopez Lopez, Manuel Jesus Batista Sanchez, Manuel Perez Jimenez, Begoña C. Arrue and Anibal Ollero
Sensors 2021, 21(24), 8488; https://doi.org/10.3390/s21248488 - 20 Dec 2021
Cited by 28 | Viewed by 6092
Abstract
The inspection and maintenance of electrical installations are very demanding tasks. Nowadays, insulator cleaning is carried out manually by operators using scaffolds, ropes, or even helicopters; however, these operations involve potential risks for humans and for the electrical structure. The use of Unmanned Aerial Vehicles (UAVs) to reduce the risk of these tasks is rising. This paper presents a UAV that autonomously cleans insulators on power lines. First, an insulator detection and tracking algorithm was implemented to control the UAV during operation. Second, a cleaning tool was designed consisting of a pump, a tank, and an arm to direct the flow of cleaning liquid. Third, a vision system was developed that detects soiled areas using a semantic segmentation neural network, calculates the cleaning trajectory in the image plane, and generates arm trajectories to clean the insulator efficiently. Fourth, an autonomous system was developed to land on a charging pad to charge the batteries and potentially refill the tank with cleaning liquid. Finally, the autonomous system was validated in a controlled outdoor environment. Full article
(This article belongs to the Special Issue Advanced Sensors Technologies Applied in Mobile Robot)
Figure 1. Conceptual design of the operation. (1) The UAV is sent to the GPS position of an insulator. (2) Visual local control is performed while locating and cleaning the areas of the insulator that need maintenance. (3) When the operation is finished or the batteries need recharging, the UAV returns to the GPS position of a charging station. (4) On reaching that position, an autonomous vision-based descent is performed.
Figure 2. Conceptual scheme of the systems involved in the application.
Figure 3. Nozzle with a variable cross-section designed to increase range and dispersion.
Figure 4. Design of the two-DOF cleaning tool.
Figure 5. Schematic used to calculate the kinematics of the system, along with the reference axes employed.
Figure 6. Diagram of the targeting system for choosing the optimal joint variables to hit the target point of the insulator.
Figure 7. Insulator detection using YOLO v4 Tiny with TensorRT implementation (green bounding box) and tracker result (blue bounding box).
Figure 8. Soiled-area segmentation and the cleaning trajectory generated by the algorithm.
Figure 9. Two-phase detection algorithm: (a) Phase 1; (b) Phase 2.
Figure 10. State machine used for the descent and landing maneuver.
Figure 11. UAV hardware setup.
Figure 12. Path followed by the UAV.
Figure 13. Visual control of the UAV with the control signal (blue line), position estimate of the insulator (black line), and reference (dotted red line). Once the insulator is detected, the UAV performs the approach phase (green area). When the UAV has stabilized on all three axes below a threshold for three seconds, the cleaning phase (yellow area) begins.
Figure 14. Evolution of soiled areas on the insulator.
Figure 15. Joint variables and percentage of soiled area cleaned over time by the cleaning tool.
Figure 16. Trajectory followed by the UAV during landing, with an example of the cone used for a safer descent.
Figure 17. Control signals sent to the autopilot during landing (blue line) and charging pad position (black line) with the reference (dotted red line). The UAV aligns with the platform in the horizontal plane while descending at constant speed. The second detection phase begins at 1.8 m, slowing the descent and waiting for optimal horizontal alignment to achieve a safe landing.
27 pages, 3549 KiB  
Article
Divergence-Based Segmentation Algorithm for Heavy-Tailed Acoustic Signals with Time-Varying Characteristics
by Aleksandra Grzesiek, Karolina Gąsior, Agnieszka Wyłomańska and Radosław Zimroz
Sensors 2021, 21(24), 8487; https://doi.org/10.3390/s21248487 - 20 Dec 2021
Cited by 5 | Viewed by 2685
Abstract
Many real-world systems change their parameters during operation. Thus, before analyzing the data, the raw signal must be divided into parts that can be considered homogeneous segments. In this paper, we propose a segmentation procedure that can be applied to signals with time-varying characteristics. Moreover, we assume that the examined signal exhibits impulsive behavior and thus corresponds to the so-called heavy-tailed class of distributions. Due to this specific behavior of the data, classical algorithms known from the literature cannot be used directly in the segmentation procedure. In the considered case, the transition between homogeneous segments is smooth and non-linear, which makes the segmentation algorithm more complex than in the classical case. We propose to apply divergence measures based on the distance between the probability density functions of the two examined distributions. The novel segmentation algorithm is applied to real acoustic signals acquired during coffee grinding. The methodology is justified experimentally and with Monte Carlo simulations of data from a model with a heavy-tailed (here, stable) distribution with time-varying parameters. Although the methodology is demonstrated for a specific case, it can be extended to any process with time-changing characteristics. Full article
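The core statistic here, a Jeffreys (symmetrized Kullback–Leibler) distance between window-wise density estimates compared against the last window, can be sketched generically. The Gaussian kernel density estimate, window length, and step below are illustrative stand-ins, not the authors' stable-distribution-based implementation:

```python
import numpy as np
from scipy.stats import gaussian_kde

def jeffreys_distance(x, y, grid_size=512, eps=1e-12):
    """Symmetrized KL (Jeffreys) distance between the empirical densities
    of two 1-D samples, using Gaussian KDE evaluated on a common grid."""
    lo = min(x.min(), y.min())
    hi = max(x.max(), y.max())
    grid = np.linspace(lo, hi, grid_size)
    dx = grid[1] - grid[0]
    p = gaussian_kde(x)(grid) + eps
    q = gaussian_kde(y)(grid) + eps
    p /= p.sum() * dx                    # renormalize on the grid
    q /= q.sum() * dx
    return float(np.sum(p * np.log(p / q) + q * np.log(q / p)) * dx)

def divergence_profile(signal, win, step):
    """Compare each moving window's density to the last window's density,
    mirroring the paper's scan toward the final regime."""
    ref = signal[-win:]
    starts = range(0, len(signal) - win + 1, step)
    return np.array([jeffreys_distance(signal[s:s + win], ref) for s in starts])

# Synthetic two-regime signal: the scale changes halfway through
rng = np.random.default_rng(0)
sig = np.concatenate([rng.normal(0, 1, 5000), rng.normal(0, 3, 5000)])
prof = divergence_profile(sig, win=1000, step=500)
# prof is large in the first regime and drops toward 0 near the end
```

A change point can then be flagged where the profile (or its first difference) crosses a threshold, which is the role of the detected regime change points in the paper's figures.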
Figure 1. Acoustic signal measurements from coffee bean grinding using a mobile phone.
Figure 2. Trajectories of eight raw signals. Panels (a–h) correspond to Signals 1–8, respectively.
Figure 3. Examples of signals after the pre-processing step. Panel (a) corresponds to Signal 1 and panel (b) to Signal 2.
Figure 4. Density maps for Signal 1 (panel (a)) and Signal 2 (panel (b)).
Figure 5. Estimated parameters of the stable distribution for Signal 1.
Figure 6. Estimated parameters of the stable distribution for Signal 2.
Figure 7. Jeffreys distance (panel (a)) and differences of the Jeffreys distance (panel (b)) comparing the pdfs in a moving window of length 2500 with step 250 to the pdf of the last window, for Signal 1. Detected regime change points are marked in purple (dotted line), red (solid line), and yellow (dashed line).
Figure 8. Jeffreys distance (panel (a)) and differences of the Jeffreys distance (panel (b)) comparing the pdfs in a moving window of length 2500 with step 250 to the pdf of the last window, for Signal 2. Detected regime change points are marked in purple (dotted line), red (solid line), and yellow (dashed line).
Figure 9. Signals with marked regime changes: the first change is shown as a dotted purple line, the second as a solid red line, and the last as a dashed yellow line. Panel (a) corresponds to Signal 1 and panel (b) to Signal 2.
Figure 10. Estimated values of α (panel (a)) and σ (panel (b)) in subsequent segments of length 2500 for Signal 1. Fitted deterministic functions are marked in red.
Figure 11. Values of α (panel (a)) and σ (panel (b)) in subsequent segments of length 2500, the simulated signal (panel (c)), and the corresponding probability density map (panel (d)).
Figure 12. Estimated parameters of the stable distribution for the sample simulated signal shown in panel (c) of Figure 11. Panels (a), (b), (c), and (d) correspond to α, σ, β, and μ, respectively.
Figure 13. Jeffreys distance (panel (a)) and differences of the Jeffreys distance (panel (b)) comparing the pdfs in a moving window of length 2500 with step 250 to the pdf of the last window. Detected regime change points are marked in purple (dotted line), red (solid line), and yellow (dashed line), and the theoretical moments of σ and α stabilization are marked with black dots.
Figure 14. Simulated signal with regime change points marked in purple (dotted line), red (solid line), and yellow (dashed line) and the theoretical moments of σ and α stabilization marked with black dots.
Figure 15. Boxplots of the Monte Carlo simulation results, i.e., the identified moments of the first (purple), second (red), and third (yellow) regime changes for 100 simulated signals. Panel (a) corresponds to the procedure applied to non-overlapping windows of length 2500 (step 2500); panels (b) and (c) correspond to overlapping windows of length 2500 with steps of 500 and 250, respectively. Black dashed lines for the second and third boxplots indicate the theoretical moments of σ and α stabilization, respectively.
Figure 16. Coffee beans ground for about 5 s.
Figure 17. Coffee beans ground for about 10 s.
Figure 18. Coffee beans ground for about 15 s.
Figure 19. Coffee beans ground for about 20 s.
Figure 20. Coffee beans ground for about 25 s.
Figure 21. Coffee beans ground for about 30 s.
8 pages, 3286 KiB  
Communication
Plasma Generator with Dielectric Rim and FSS Electrode for Enhanced RCS Reduction Effect
by Taejoo Oh, Changseok Cho, Wookhyun Ahn, Jong-Gwan Yook, Jangjae Lee, Shinjae You, Jinwoo Yim, Jungje Ha, Gihun Bae, Heung-Cheol You and Yongshik Lee
Sensors 2021, 21(24), 8486; https://doi.org/10.3390/s21248486 - 20 Dec 2021
Cited by 10 | Viewed by 2708
Abstract
In this study, a method was experimentally verified for further reducing the radar cross-section (RCS) of a two-dimensional planar target by adding a dielectric rim to a dielectric barrier discharge (DBD) plasma generator that uses a frequency selective surface (FSS) as an electrode. By designing the FSS so that its passband matches the radar signal, the effect of the conductor electrode can be minimized, maximizing the RCS reduction due to the plasma. By designing the FSS to be independent of polarization, the RCS reduction becomes insensitive to the polarization of the incoming wave. Furthermore, introducing a dielectric rim between the FSS electrode and the target yields an additional RCS reduction. With the fabricated plasma generator, an RCS reduction of up to 6.4 dB in X-band was experimentally verified. Full article
(This article belongs to the Section Remote Sensors)
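When such plasma RCS reduction is simulated, the plasma is commonly modeled as a Drude medium, as in the paper's simulated comparison curves. A minimal sketch of the resulting relative permittivity follows; the electron density and collision frequency are assumed placeholder values, not parameters from the paper:

```python
import numpy as np

E_CHARGE = 1.602176634e-19    # elementary charge, C
E_MASS = 9.1093837015e-31     # electron mass, kg
EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m

def plasma_freq(n_e):
    """Angular plasma frequency (rad/s) for electron density n_e in m^-3."""
    return np.sqrt(n_e * E_CHARGE**2 / (EPS0 * E_MASS))

def drude_permittivity(f, n_e, nu):
    """Cold-plasma (Drude) relative permittivity at frequency f (Hz) with
    collision frequency nu (1/s), using the e^{+j*w*t} sign convention."""
    w = 2 * np.pi * f
    wp = plasma_freq(n_e)
    return 1 - wp**2 / (w * (w - 1j * nu))

# X-band example with assumed parameters: n_e = 1e17 m^-3, nu = 1e9 s^-1
eps = drude_permittivity(10e9, n_e=1e17, nu=1e9)
# Re(eps) < 1 and Im(eps) < 0 (in this convention) reflect the dispersion
# and absorption that the plasma contributes to the RCS reduction
```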
Figure 1. Components of the proposed dielectric barrier discharge (DBD) plasma generator: (a) frequency selective surface (FSS) electrode; (b) dielectric rim; (c) three-dimensional (3D) layer structure.
Figure 2. Fabricated DBD plasma generator.
Figure 3. Monostatic radar cross-section (RCS) experimental environment: (a) block diagram; (b) photograph.
Figure 4. Plasma generation form of the fabricated generator.
Figure 5. Voltage and current values of the optimal plasma.
Figure 6. Measured (thick) and simulated (thin) RCS results: target (20 × 20 cm² copper plate), target with DBD (plasma off), and target with DBD (plasma on, with and without dielectric rim). Simulated results are those of the target.
Figure 7. Comparison of the RCS reduction effect of the proposed DBD with and without the dielectric rim: measured (thick) and simulated (thin) results based on the Drude model using CST [14].
34 pages, 2666 KiB  
Review
Machine Learning-Based Epileptic Seizure Detection Methods Using Wavelet and EMD-Based Decomposition Techniques: A Review
by Rabindra Gandhi Thangarajoo, Mamun Bin Ibne Reaz, Geetika Srivastava, Fahmida Haque, Sawal Hamid Md Ali, Ahmad Ashrif A. Bakar and Mohammad Arif Sobhan Bhuiyan
Sensors 2021, 21(24), 8485; https://doi.org/10.3390/s21248485 - 20 Dec 2021
Cited by 23 | Viewed by 4298
Abstract
Epileptic seizures are temporary episodes of convulsions; approximately 70 percent of the diagnosed population can successfully manage their condition with proper medication and lead a normal life. Over 50 million people worldwide are affected by some form of epileptic seizure, and accurate detection can help millions manage the condition properly. Increasing research in machine learning has made a great impact on biomedical signal processing, especially on electroencephalogram (EEG) data analysis. The availability of various feature extraction techniques and classification methods makes it difficult to choose the most suitable combination for resource-efficient and correct detection. This paper reviews the relevant studies of wavelet- and empirical-mode-decomposition-based feature extraction techniques used for seizure detection in epileptic EEG data. The articles were chosen for review based on their Journal Citation Report, feature selection methods, and classifiers used. High-dimensional EEG data falls under the category of '3N' biosignals: nonstationary, nonlinear, and noisy. Hence, two popular classifiers, random forest and support vector machine, were chosen for review, as they are capable of handling high-dimensional data and have a low risk of over-fitting. The main metrics used are sensitivity, specificity, and accuracy; papers with insufficient metrics were excluded. To evaluate the overall performance of the reviewed papers, a simple mean of all metrics was used. This review indicates that a system using a Stockwell-transform wavelet variant as the feature extractor with an SVM classifier achieved a potentially better result. Full article
(This article belongs to the Section Biomedical Sensors)
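The pipeline the reviewed systems share (decompose the EEG into sub-bands, extract features, classify) can be sketched with a hand-rolled single-filter Haar DWT. The feature choices and the toy signal below are illustrative and not taken from any reviewed paper:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation (low-pass, g(n)) and
    detail (high-pass, h(n)) coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = np.append(x, x[-1])          # pad odd-length input
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_features(x, levels=5):
    """Relative detail-band energies plus a log-energy entropy,
    a typical feature vector for seizure classifiers."""
    energies = []
    a = x
    for _ in range(levels):
        a, d = haar_dwt(a)
        energies.append(np.sum(d ** 2))
    rel = np.array(energies) / np.sum(energies)
    entropy = -np.sum(rel * np.log(rel + 1e-12))
    return np.concatenate([rel, [entropy]])

# Toy "EEG" at 256 Hz: slow background plus a 60 Hz component
t = np.arange(1024) / 256.0
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
features = wavelet_features(x)
# features would then be fed to an SVM or random forest classifier
```

In practice the decomposition would use a library wavelet family (e.g. Daubechies) and the feature vector would be computed per EEG channel and epoch before classification.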
Figure 1. The international 10–20 electrode system [4], seen from (A) the left and (B) above the head. A: earlobe; C: central; Pg: nasopharyngeal; P: parietal; F: frontal; Fp: frontal polar; O: occipital.
Figure 2. The process of epileptic seizure data classification.
Figure 3. Seizure decomposition method.
Figure 4. The wavelet decomposition method used to process a signal x(n). (A) Decomposition of a signal into its approximate (g(n)) and detail (h(n)) coefficients. (B) The signal decomposed into its five-level detail coefficients. (C) Example of a fifth-level decomposition.
Figure 5. The process of signal decomposition using empirical mode decomposition. (A) EEG signal of one electrode decomposed into 14 separate intrinsic mode functions; only the first and last four are shown for clarity. (B) The Hilbert transform used to obtain each IMF's instantaneous frequency information, shown here only for IMF1 and IMF4. The signal was obtained from the CHB-MIT database and was single-channel processed using Matlab 2015a.
Figure 6. Ensemble decision tree structure that makes up a random forest classifier. The tally was four 1's and two 0's, resulting in prediction = 1.
Figure 7. Hyperplane example for the support vector machine.
Figure 8. The 3RF classifier-based state machine. The th_roc value is determined at each stage of the state machine.
Figure 9. TQWT-based N-level decomposition, adapted from [6].
Figure A1. EEG pre-processing and artifact/data-cleaning techniques used in this review. (a) The first five processes of epileptic seizure data classification, as in Figure 2. (b) The epileptic filter used to bandpass 0.5–60 Hz in study [12]. (c) The logarithmic operation applied to each BLIMF after decomposition in study [16]. (d) The fourth-order Butterworth low-pass filter embedded in the decomposition phase in study [17].
15 pages, 1408 KiB  
Article
Sagittal and Vertical Growth of the Maxillo–Mandibular Complex in Untreated Children: A Longitudinal Study on Lateral Cephalograms Derived from Cone Beam Computed Tomography
by Leah Yi, Hyeran Helen Jeon, Chenshuang Li, Normand Boucher and Chun-Hsi Chung
Sensors 2021, 21(24), 8484; https://doi.org/10.3390/s21248484 - 20 Dec 2021
Cited by 5 | Viewed by 2995
Abstract
The aim of this longitudinal study was to evaluate the sagittal and vertical growth of the maxillo–mandibular complex in untreated children using orthogonal lateral cephalograms compressed from cone beam computed tomography (CBCT). Two sets of scans, on 12 males (mean age 8.75 years at T1 and 11.52 years at T2) and 18 females (mean age 9.09 years at T1 and 10.80 years at T2), were analyzed using Dolphin 3D imaging. The displacements of the landmarks and the rotations of both jaws relative to the cranial base were measured using the cranial base, maxillary, and mandibular core lines. From T1 to T2, relative to the cranial base, the nasion, orbitale, A-point, and B-point moved anteriorly and inferiorly, while the porion moved posteriorly and inferiorly. The ANB and mandibular plane angles decreased. All but one subject showed forward rotation with reference to the cranial base. The maxillary and mandibular superimpositions showed no sagittal change at the A-point and B-point. The U6 and U1 erupted at 0.94 and 1.01 mm/year in males and 0.82 and 0.95 mm/year in females, respectively. The L6 and L1 erupted at 0.66 and 0.88 mm/year in males, and both erupted at 0.41 mm/year in females. Full article
Figure 1. CBCT orientation: (A) roll and yaw; (B) pitch, as previously published in [25].
Figure 2. The cranial base, maxillary, and mandibular core lines. (A) Landmarks placed on the CBCT image to facilitate identification of certain structures. (B) The cranial base core line drawn on the T1 lateral cephalogram and transferred onto the T2 tracing on a cranial base superimposition. (C) The maxillary core lines. (D) The mandibular core lines. Black: T1; red: T2.
Figure 3. Angular measurements: (A) CB line-A, CB line-B, and A-CB point-B; (B) CB line-GoGn; (C) CB line-Mx point, CB line-Md point, and Mx point-CB line-Md point; (D) CB line-Mx line and CB line-Md line; (E) Co-Md line; (F) Md line-Md border.
Figure 4. Sagittal (x) and vertical (y) movements of cephalometric points in males (mm/year): the average annual movements on cranial base, maxillary, and mandibular superimpositions of the T1 and T2 lateral cephalograms.
Figure 5. Sagittal (x) and vertical (y) movements of cephalometric points in females (mm/year): the average annual movements on cranial base, maxillary, and mandibular superimpositions of the T1 and T2 lateral cephalograms.
Figure 6. Annual average changes in the linear and angular measurements in males (gray) and females (white).
9 pages, 3414 KiB  
Article
Estimation of Fluor Emission Spectrum through Digital Photo Image Analysis with a Water-Based Liquid Scintillator
by Ji-Won Choi, Ji-Young Choi and Kyung-Kwang Joo
Sensors 2021, 21(24), 8483; https://doi.org/10.3390/s21248483 - 20 Dec 2021
Cited by 2 | Viewed by 2224
Abstract
In this paper, we performed a feasibility study of using a water-based liquid scintillator (WbLS) for imaging analysis with a digital camera. A liquid scintillator (LS) emits light from a scintillating fluor dissolved in an organic base solvent; here, we synthesized a liquid scintillator using water as the solvent. In a WbLS, a suitable surfactant is needed to mix the water and oil phases. As an application of the WbLS, we introduce a digital photo image analysis in color space, briefly describing the demosaicing process used to reconstruct and decode color. By analyzing the pixel information stored in the digital image, we were able to estimate the emission spectrum of the fluor dissolved in the WbLS. This technique offers the potential to estimate fluor components in the visible region without an expensive spectrophotometer. In addition, sinogram analysis was performed with the Radon transform to reconstruct transverse images from longitudinal photo images of the WbLS sample. Full article
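The demosaicing step mentioned in the abstract can be illustrated with plain bilinear interpolation over an RGGB Bayer pattern. This is a generic textbook method, not necessarily the one used by the camera firmware or the authors' analysis chain:

```python
import numpy as np
from scipy.signal import convolve2d

def bilinear_demosaic(raw):
    """Bilinear demosaicing of a single-channel RGGB Bayer mosaic.
    Each channel is recovered by interpolating its sampled positions."""
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1   # R at even rows/cols
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1   # B at odd rows/cols
    g_mask = 1 - r_mask - b_mask                        # G elsewhere

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # bilinear kernel
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0

    def interp(mask, kernel):
        # weighted average of the sampled pixels in each 3x3 neighborhood
        num = convolve2d(raw * mask, kernel, mode="same")
        den = convolve2d(mask, kernel, mode="same")
        return num / den

    return np.dstack([interp(r_mask, k_rb),
                      interp(g_mask, k_g),
                      interp(b_mask, k_rb)])

# A uniform gray mosaic should demosaic back to a uniform gray RGB image
raw = np.full((8, 8), 0.5)
rgb = bilinear_demosaic(raw)
```

The interpolated RGB values per pixel are what a color-space analysis (such as the HSV-based region selection and hue extraction described for this work) would then operate on.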
Show Figures

Figure 1
<p>A workflow example of CFA arrangement and the demosaicing process for an interpolated full color image. (<b>a</b>) Repetitive patterns consisting of R, G, and B are called mosaics. (<b>b</b>) Demosaicing according to each RGB color component. Input raw (mosaiced) image from CFA. (<b>c</b>) Spatial arrangement (lowercase of rgb) of each pixel is assigned based on neighboring pixel color information. The result of red, green and blue channel interpolation at white locations. (<b>d</b>) The final result of RGB interpolation of each pixel. The final result of red, green, and blue channel interpolation at red, green and blue locations.</p>
Figure 2">
Full article
Figure 2
<p>(<b>a</b>) Experimental setting for taking digital image photographs. The camera was placed approximately 50 cm in front of the WbLS sample. Because there was a wall between the camera and the WbLS sample, the UV lamp light did not directly enter the camera; only the desired light reached the camera. A cylindrical quartz container with a diameter of 4 cm and a height of 7 cm was filled with WbLS using IGEPAL CO-630 surfactant. It was placed on top of the rotating disk. UV light illuminated the sample from the top of the container. The camera was remotely controlled. (<b>b</b>) Rectangular boxes represent regions of interest. Only those pixel regions whose V value in the HSV model was greater than 60% were selected, and their boundary lines are displayed as rectangular boxes. The fourth box was selected for background rejection.</p>
Full article
Figure 3
<p>(<b>a</b>) Red, (<b>b</b>) Green, and (<b>c</b>) Blue components extracted from the photographed images in <a href="#sensors-21-08483-f002" class="html-fig">Figure 2</a>b taken by a CMOS digital camera (Canon EOS 450D). RGB values of PPO, PPO+POPOP, and PPO+bis-MSB as a function of pixel intensity are listed. Among the R, G, and B values, blue is dominant for each fluor. (<b>d</b>) Extracted emission spectrum of PPO, POPOP, and bis-MSB from the hue value after background subtraction as a function of wavelength.</p>
Full article
Figure 4
<p>(<b>a</b>) UV light illuminated the WbLS sample from above. An initial front-view digital image of the WbLS sample using HCO-60 surfactant placed on a rotating disk before rotation. The disk then rotated at a constant speed. (<b>b</b>) A horizontal dashed line is shown passing through the air bubble in the container; its z-axis pixel number is 542. The tomographic image was extracted and reconstructed based on this line. The camera was focused on the center of the container, and a picture was taken after rotating the disk every 0.25°. Only a portion of the 1,400 projected images is shown.</p>
Full article
Figure 5
<p>(<b>a</b>) A 360° sinogram reconstructed from the image of the small air bubble created in the WbLS sample. The x-axis runs from 1 to 360° (pixel labels 1 to 1400), depending on how finely the cylindrical circle is divided. The y-axis is the pixel label of the y-axis (height) of the yz-plane image. The color represents the brightness of the surroundings, including the bottle, when reconstructed with an arbitrary color scale. Each pixel-number band on the y-axis corresponds to: 700–800 (outer wall), 800–900 (air layer between sample container and outer wall), 900–1000 (inside the WbLS sample container), 1000–1100 (air layer between sample container and outer wall, same as pixel numbers 800–900), and 1100–1200 (outer wall, same as pixel numbers 700–800). (<b>b</b>) Tomographic top-view image after inverse Radon transformation from (<b>a</b>). The z-axis pixel number is 542. The x(y)-axis is the pixel label of the x(y) coordinate seen from the top. Two air bubbles in the center of the WbLS sample container can be seen. The ring shape surrounding the air bubbles represents the container. The outermost circle-shaped image represents a virtual image created by light reflected from the outside of the WbLS sample container.</p>
Full article
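">
The sinusoidal traces that the air bubble leaves in the sinogram follow directly from parallel-projection geometry: a point at polar position (r, φ) projects to detector coordinate s(θ) = r·cos(θ − φ) as the sample rotates. A small sketch of this identity (the point coordinates are illustrative, not values from the paper):

```python
import numpy as np

def sinogram_of_point(x0, y0, angles):
    """Detector coordinate of a point (x0, y0) under parallel-beam
    projection at each view angle theta: s = x0*cos(theta) + y0*sin(theta).
    """
    return x0 * np.cos(angles) + y0 * np.sin(angles)

# One projection per view angle; a point object traces a sinusoid whose
# amplitude is its distance r from the rotation axis and whose phase is
# its azimuth phi -- which is why bubbles appear as sine curves in (a).
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
trace = sinogram_of_point(3.0, 4.0, angles)
```

Inverse Radon transformation recovers the point's (x, y) position from exactly this amplitude-and-phase information, which is how the bubble is localized in the top-view image of Figure 5b.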
19 pages, 7344 KiB  
Article
Highly Configurable 100 Channel Recording and Stimulating Integrated Circuit for Biomedical Experiments
by Piotr Kmon
Sensors 2021, 21(24), 8482; https://doi.org/10.3390/s21248482 - 20 Dec 2021
Viewed by 3017
Abstract
This paper presents the design results of a 100-channel integrated circuit dedicated to various biomedical experiments requiring both electrical stimulation and recording ability. The main design motivation was to develop an architecture comprising not only the recording and stimulation blocks, but also circuitry allowing different experimental requirements to be met. Therefore, controllability and programmability were prime concerns, as was the uniformity of the main chip parameters. The recording stages allow their parameters to be set independently from channel to channel, i.e., the frequency bandwidth can be controlled in the (0.3 Hz–1 kHz)–(20 Hz–3 kHz) (slow signal path) or (0.3 Hz–1 kHz)–4.7 kHz (fast signal path) range, while the voltage gain can be set individually to either 43.5 dB or 52 dB. Importantly, thanks to in-pixel circuitry, the main system parameters may be controlled individually, mitigating the spread of circuit components: the lower corner frequency can be tuned over a 54 dB range with approximately 5% precision, the upper corner frequency spread is only 4.2%, and the voltage gain spread is only 0.62%. The current stimulator may also be controlled over a broad range (69 dB) with a current-setting precision no worse than 2.6%. The recording channels’ input-referred noise is 8.5 µVRMS in the 10 Hz–4.7 kHz bandwidth. A single pixel occupies 0.16 mm2 and consumes 12 µW (recording part) and 22 µW (stimulation blocks). Full article
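The decibel figures quoted above translate into linear ratios via G = 10^(dB/20), assuming the usual 20·log10 convention for voltage quantities: the selectable gains of 43.5 dB and 52 dB correspond to roughly 150× and 398×, and the 54 dB lower-corner-frequency tuning range spans about a 500:1 ratio. A one-line helper makes the arithmetic explicit:

```python
def db_to_ratio(db):
    """Convert a value quoted in dB to a linear ratio, assuming the
    20*log10 convention used for voltage gains and tuning ranges."""
    return 10 ** (db / 20)

# 43.5 dB -> ~150x gain, 52 dB -> ~398x gain, 54 dB -> ~500:1 tuning span.
```

The 69 dB stimulator control range similarly corresponds to roughly a 2800:1 current span under the same convention (whether the authors quote it as 20·log10 or 10·log10 is an assumption here).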
(This article belongs to the Section Biomedical Sensors)
Show Figures

Figure 1
<p>The conceptual idea of the implantable multichannel system for neurobiological experiments.</p>
Figure 2">
Full article
Figure 2
<p>The NRS100 block idea, photograph of the PCB-mounted IC, and the layout mask view of a single pixel.</p>
Full article
Figure 3
<p>Recording channels’ schematic idea.</p>
Full article
Figure 4
<p>The schematic idea of the core amplifiers used in (<b>a</b>) front-end amplifier and AMP2, (<b>b</b>) AMP1 amplifier.</p>
Full article
Figure 5
<p>The conceptual idea of the current stimulator.</p>
Full article
Figure 6
<p>Current stimulator schematic idea.</p>
Full article
Figure 7
<p>The schematic idea of the current stimulators’ core amplifier (<b>a</b>) and schematic idea of the programmable current stimulator (<b>b</b>).</p>
Full article
Figure 8
<p>Current stimulators’ exemplary signals.</p>
Full article
Figure 9
<p>LCF control range (<b>a</b>) and exemplary frequency responses for different DAC settings (<b>b</b>).</p>
Full article
Figure 10
<p>UCF control ranges (<b>a</b>) and exemplary frequency responses for different DAC settings (<b>b</b>).</p>
Full article
Figure 11
<p>LCF spread from channel to channel before and after correction for a given LCF setting (<b>a</b>) and voltage gain, LCF, and UCF histograms (<b>b</b>).</p>
Full article
Figure 12
<p>Noise spectral density for one of the selected recording channel settings.</p>
Full article
Figure 13
<p>Control range of the current stimulators’ particular current ranges, with inset histograms of the particular current corrections (all 100 stimulation channels are shown).</p>
Full article
Figure 14
<p>Photo of the 256-channel in-vitro recording platform, with inset pictures of the specially developed multielectrode arrays used in the experiment and a figurative description of the experiment.</p>
Full article
Figure 15
<p>Typical neurobiological signals recorded by the chip presented. Top: signal recorded by one electrode located inside the microchannel. Bottom: enlarged single action potentials (<b>left</b> and <b>right</b>) and burst activity (<b>center</b>). In the burst, several signals from different axons grown through the microchannel can be distinguished due to different signal amplitudes and signal shapes.</p>
Full article
21 pages, 5726 KiB  
Article
Efficient Online Object Tracking Scheme for Challenging Scenarios
by Khizer Mehmood, Ahmad Ali, Abdul Jalil, Baber Khan, Khalid Mehmood Cheema, Maria Murad and Ahmad H. Milyani
Sensors 2021, 21(24), 8481; https://doi.org/10.3390/s21248481 - 20 Dec 2021
Cited by 10 | Viewed by 3670
Abstract
Visual object tracking (VOT) is a vital part of various domains of computer vision applications such as surveillance, unmanned aerial vehicles (UAV), and medical diagnostics. In recent years, substantial improvements have been made in addressing challenges of VOT techniques such as scale change, occlusions, motion blur, and illumination variations. This paper proposes a tracking algorithm in a spatiotemporal context (STC) framework. To overcome the limitations of STC with respect to scale variation, a max-pooling-based scale scheme is incorporated by maximizing over the posterior probability. To prevent the target model from drifting, an efficient occlusion-handling mechanism is proposed. Occlusion is detected with an average peak-to-correlation energy (APCE)-based measure of the response map between consecutive frames. On successful occlusion detection, a fractional-gain Kalman filter is incorporated to handle the occlusion. An additional extension to the model applies the APCE criterion to adapt the target model under motion blur and other challenging factors. Extensive evaluation indicates that the proposed algorithm achieves significant results against various tracking methods. Full article
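The APCE criterion mentioned in the abstract has a standard closed form: APCE = |F_max − F_min|² / mean((F − F_min)²) over the response map F, which is large for a single sharp peak and collapses when the peak flattens under occlusion. A minimal sketch (the occlusion threshold β and history rule are illustrative assumptions, not the paper's exact values):

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy of a correlation response map:
    large for a single sharp peak, small for a flat or multi-peaked map."""
    fmin = response.min()
    return (response.max() - fmin) ** 2 / np.mean((response - fmin) ** 2)

def occluded(apce_now, apce_history, beta=0.5):
    """Flag occlusion when the current APCE falls below a fraction beta
    of its recent average (beta is a tunable, illustrative threshold)."""
    return apce_now < beta * np.mean(apce_history)
```

A tracker following this scheme would suspend model updates (and hand prediction to the Kalman filter) whenever `occluded` fires, resuming adaptation once the APCE recovers.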
(This article belongs to the Special Issue Sensors for Object Detection, Classification and Tracking)
Show Figures

Figure 1
<p>Challenging scenarios in visual object tracking (VOT). The first row shows motion blur in an image sequence. The second row shows the scale variation of the target. The third row shows heavy occlusion of the target. Pictures in the figure are part of OTB-100 dataset [<a href="#B26-sensors-21-08481" class="html-bibr">26</a>].</p>
Figure 2">
Full article
Figure 2
<p>The spatial relation between object and its context. Picture in the figure is part of OTB-100 dataset [<a href="#B26-sensors-21-08481" class="html-bibr">26</a>].</p>
Full article
Figure 3
<p>Flowchart of proposed tracking method.</p>
Full article
Figure 4
<p>Occlusion detection mechanism. Pictures in the figure are part of OTB-100 dataset [<a href="#B26-sensors-21-08481" class="html-bibr">26</a>].</p>
Full article
Figure 5
<p>Learning rate mechanism. Pictures in the figure are part of OTB-100 dataset [<a href="#B26-sensors-21-08481" class="html-bibr">26</a>].</p>
Full article
Figure 6
<p>Precision plot comparison for the OTB-100 dataset [<a href="#B26-sensors-21-08481" class="html-bibr">26</a>].</p>
Full article
Figure 7
<p>Center location error (in pixels) comparison for the OTB-100 dataset [<a href="#B26-sensors-21-08481" class="html-bibr">26</a>].</p>
Full article
Figure 8
<p>Qualitative comparison for the OTB-100 dataset [<a href="#B26-sensors-21-08481" class="html-bibr">26</a>].</p>
Full article
11 pages, 3912 KiB  
Article
Detecting Teeth Defects on Automotive Gears Using Deep Learning
by Abdelrahman Allam, Medhat Moussa, Cole Tarry and Matthew Veres
Sensors 2021, 21(24), 8480; https://doi.org/10.3390/s21248480 - 19 Dec 2021
Cited by 11 | Viewed by 5171
Abstract
Gears are a vital component in many complex mechanical systems. In automotive systems, and in particular vehicle transmissions, we rely on them to function properly in different types of challenging environments and conditions. However, when a gear is manufactured with a defect, the gear’s integrity can be compromised, leading to catastrophic failure. The current inspection process used by an automotive gear manufacturer in Guelph, Ontario, requires human operators to visually inspect all gears produced. Yet, due to the quantity of gears manufactured, the diverse array of defects that can arise, the time required for inspection, and the reliance on the operator’s inspection ability, the system suffers from poor scalability, and defects can be missed during inspection. In this work, we propose a machine vision system for automating the inspection process for gears with damaged-teeth defects. The implemented inspection system uses a Faster R-CNN network to identify the defects, and combines domain knowledge to reduce the manual inspection of non-defective gears by 66%. Full article
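The "domain knowledge" combined with the detector exploits the fact that a real damaged-tooth defect stays visible across consecutive views as the gear rotates (Figure 3), while a spurious detection appears and vanishes (Figure 7). The paper does not give the exact rule, so the persistence filter below is a hypothetical sketch: the frame count `k` and the per-image set-of-tooth-indices detection format are illustrative assumptions.

```python
def persistent_defects(detections, k=3):
    """Confirm a candidate tooth defect only if it is detected in at
    least k consecutive images of the rotating gear.

    detections: list of sets, one per consecutive image, each holding
    the tooth indices where the network reported a defect.
    """
    runs = {}            # tooth index -> current consecutive-hit count
    confirmed = set()
    for frame in detections:
        # A tooth absent from this frame has its run reset to zero.
        runs = {tooth: runs.get(tooth, 0) + 1 for tooth in frame}
        confirmed.update(t for t, n in runs.items() if n >= k)
    return confirmed
```

Under this rule, the false positive of Figure 7 (seen in the first and second images but not the third) would be rejected, while a genuine defect that persists through the rotation would be confirmed.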
(This article belongs to the Section Sensing and Imaging)
Show Figures

Figure 1
<p>Damaged Teeth defect on the Tooth Edge (<b>top</b> row), and Top Land (<b>bottom</b> row).</p>
Figure 2">
Full article
Figure 2
<p>Faster R-CNN deep learning network for defect detection.</p>
Full article
Figure 3
<p>The same damaged teeth defect remains visible to the camera as the gear is rotated during inspection, starting from the leftmost image to the rightmost image.</p>
Full article
Figure 4
<p>Inspection cell and sample camera images. <b>Left</b>: inspection cell, <b>Top Right</b>: image from the first camera, <b>Bottom Right</b>: image from the second camera.</p>
Full article
Figure 5
<p>Average precision and recall of the 10-folds for the damaged teeth defects.</p>
Full article
Figure 6
<p>Precision and recall values for 306 images of damaged teeth defects.</p>
Full article
Figure 7
<p>A false positive was not considered a defect since it was detected by the algorithm only on the first and second images (surrounded by blue and yellow bounding boxes), and was not detected on the third image (red circle).</p>
Full article