Sensors, Volume 20, Issue 10 (May-2 2020) – 261 articles

Cover Story: Diffuse optical tomography is a non-invasive photonics-based imaging technology suited to functional brain imaging. Recent developments have proven the possibility of integrating time-resolved detectors inside the probe, in direct contact with the tissue under investigation, maximizing light harvesting and reducing system complexity. We have developed a system based on 8 probe-hosted SiPM detectors and 6 light-injection fibers. Using 2 wavelengths permits quantifying the evolution of both oxy- and deoxy-hemoglobin concentrations in the tissue. An automatic switch enables a complete tomographic acquisition in less than one second. The system was challenged with two in vivo tests: arm cuff occlusion and motor cortex brain activation. The results show that the tomographic system makes it possible to follow the evolution of brain activation over time with a 1 s resolution. View this paper.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for published papers, which are made available in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open it.
14 pages, 3689 KiB  
Article
Modeling and Analysis of Capacitive Relaxation Quenching in a Single Photon Avalanche Diode (SPAD) Applied to a CMOS Image Sensor
by Akito Inoue, Toru Okino, Shinzo Koyama and Yutaka Hirose
Sensors 2020, 20(10), 3007; https://doi.org/10.3390/s20103007 - 25 May 2020
Cited by 12 | Viewed by 6061
Abstract
We present an analysis of the carrier dynamics of the single-photon detection process, i.e., from Geiger-mode pulse generation to its quenching, in a single-photon avalanche diode (SPAD). The device is modeled as a parallel circuit of a SPAD and a capacitance representing both space-charge accumulation inside the SPAD and parasitic components. The carrier dynamics inside the SPAD are described by time-dependent bipolar-coupled continuity equations (BCE). Numerical solutions of the BCE show that the entire process completes within a few hundred picoseconds. More importantly, we find that the total charge stored on the series capacitance gives rise to a voltage swing of the internal bias of the SPAD equal to twice the excess bias voltage with respect to the breakdown voltage. This, in turn, yields a design methodology to precisely control the generated charge and enables SPADs to be used as conventional photodiodes (PDs) in a four-transistor pixel of a complementary metal-oxide-semiconductor (CMOS) image sensor (CIS) with short exposure time and without carrier overflow. Such operation is demonstrated by experiments with a 400 × 400-pixel SPAD-based CIS with a 6 µm pixel size designed with this methodology. Full article
(This article belongs to the Special Issue Photon Counting Image Sensors)
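The abstract's central quantitative result, that the internal SPAD bias swings by roughly twice the excess bias, fixes the charge delivered per detection through Q = C·ΔV. A minimal back-of-envelope sketch of that arithmetic, using the 6 fF capacitance quoted in the Figure 9 caption and the 1.2 V excess bias from Figure 6 as illustrative inputs (this is not the authors' model, only the stated relation):

```python
# Back-of-envelope estimate for capacitive relaxation quenching.
# Assumes only the relation stated in the abstract: the internal
# bias of the SPAD swings by ~2x the excess bias V_ex.

E_CHARGE = 1.602e-19  # elementary charge [C]

def quench_charge(c_total_farads, v_ex_volts):
    """Charge stored on the series capacitance for a voltage swing
    of 2*V_ex, returned as (coulombs, equivalent electron count)."""
    q = c_total_farads * 2.0 * v_ex_volts
    return q, q / E_CHARGE

# 6 fF capacitance (Figure 9 caption) and 1.2 V excess bias (Figure 6).
q_c, n_e = quench_charge(6e-15, 1.2)
print(f"stored charge ~ {q_c:.2e} C, ~{n_e:.0f} electrons")
```

Keeping this charge packet small and well defined is what lets the SPAD be read out like a conventional photodiode in a four-transistor pixel without carrier overflow.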
Show Figures

Figure 1
<p>(<b>a</b>) Circuit diagram of a four-transistor pixel circuit. The abbreviations TRN, RST, FD, SF, and SEL denote the transfer transistor, reset transistor, floating diffusion, source follower transistor, and select transistor, respectively. (<b>b</b>–<b>d</b>) Simplified equivalent circuit models and band diagrams of the single-photon avalanche diode (SPAD), (<b>b</b>) after the reset process, (<b>c</b>) during avalanche multiplication, and (<b>d</b>) after RQ. It is noted that the series capacitances in (<b>b</b>–<b>d</b>) are the sum of the diode capacitance and the stray components. (<b>e</b>) A typical timing chart of the pixel circuit. The notations "H" and "L" mean that high and low voltage, respectively, is applied to the gates of the transistors. The arrow denoted <span class="html-italic">h</span>ν indicates the arrival of a photon during an exposure period, resulting in a voltage drop of the node SPAD.</p>
Figure 2
<p>Calculated parameters in time domain. The time increment is 0.1 ps. (<b>a</b>) Time evolutions of the electron number in the <span class="html-italic">i</span>-region, (<b>b</b>) the number of accumulated electrons in the series capacitance, and (<b>c</b>) reverse bias of SPAD. The red, green, purple, and blue lines denote, respectively, the results with initial biases 29 V, 30 V, 31 V, and 32 V. A horizontal dashed line in (<b>c</b>) indicates <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mrow> <msub> <mi>V</mi> <mrow> <mi>B</mi> <mi>D</mi> </mrow> </msub> </mrow> <mo>|</mo> </mrow> </mrow> </semantics></math> and vertical dashed lines show the times when d<span class="html-italic">n</span>/d<span class="html-italic">t</span> = 0 or <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mrow> <mi>V</mi> <mrow> <mo>(</mo> <mi>t</mi> <mo>)</mo> </mrow> <mrow> <mo>|</mo> <mo>=</mo> <mo>|</mo> </mrow> <msub> <mi>V</mi> <mrow> <mi>B</mi> <mi>D</mi> </mrow> </msub> </mrow> <mo>|</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 3
<p>The voltage drop after the quenching (<math display="inline"><semantics> <mrow> <mo>=</mo> <mi mathvariant="sans-serif">Δ</mi> <msub> <mi>V</mi> <mi>Q</mi> </msub> </mrow> </semantics></math>) with respect to the initial bias (<math display="inline"><semantics> <mrow> <mo>=</mo> <msub> <mi>V</mi> <mn>0</mn> </msub> </mrow> </semantics></math>). The blue dots connected by a line, the black dashed line, and the red circles connected by a dashed line denote, respectively, the calculated results of <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <msub> <mi>V</mi> <mi>Q</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <msub> <mi>V</mi> <mi>Q</mi> </msub> <mo>=</mo> <msub> <mi>V</mi> <mrow> <mi>e</mi> <mi>x</mi> </mrow> </msub> </mrow> </semantics></math>, and the experimental result. The red hatched area is the region between the calculated curves <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <msub> <mi>V</mi> <mi>Q</mi> </msub> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <msub> <mi>V</mi> <mi>Q</mi> </msub> <mo>=</mo> <msub> <mi>V</mi> <mrow> <mi>e</mi> <mi>x</mi> </mrow> </msub> </mrow> </semantics></math>, within which the experimental results fall. It is noted that the measured results of <math display="inline"><semantics> <mrow> <mi mathvariant="sans-serif">Δ</mi> <msub> <mi>V</mi> <mi>Q</mi> </msub> </mrow> </semantics></math> are converted from the actually measured voltage of the sensing node, or floating diffusion (FD), by taking account of the capacitance values of the SPAD and FD.</p>
Figure 4
<p>A cross-sectional view of a SPAD with a vertical avalanche photodiode structure (VAPD) and the designed potential profiles in the horizontal (A-A’) and vertical (B-B’) directions. The transistor shown on the cross section represents the reset transistor (RST) in <a href="#sensors-20-03007-f001" class="html-fig">Figure 1</a>a.</p>
Figure 5
<p>A block diagram of the developed CMOS image sensor (CIS).</p>
Figure 6
<p>(<b>a</b>–<b>c</b>) Oscilloscope waveforms of output signals (yellow: SF output, blue: light pulse) of pixels measured at <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mrow> <msub> <mi>V</mi> <mrow> <mi>e</mi> <mi>x</mi> </mrow> </msub> </mrow> <mo>|</mo> </mrow> </mrow> </semantics></math> = (<b>a</b>) N.A. (non-avalanche region), (<b>b</b>) 0.7 V, (<b>c</b>) 1.2 V. (<b>d</b>–<b>f</b>) Histograms of output signals of pixels measured at <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mrow> <msub> <mi>V</mi> <mrow> <mi>e</mi> <mi>x</mi> </mrow> </msub> </mrow> <mo>|</mo> </mrow> </mrow> </semantics></math> = (<b>d</b>) N.A. (non-avalanche region), (<b>e</b>) 0.7 V, (<b>f</b>) 1.2 V. A red line in the graph indicates the SF saturation voltage (1.3 V). (<b>g</b>–<b>i</b>) Pictures of a zebra taken at <math display="inline"><semantics> <mrow> <mo>|</mo> <msub> <mi>V</mi> <mrow> <mi>e</mi> <mi>x</mi> </mrow> </msub> <mo>|</mo> </mrow> </semantics></math> = (<b>g</b>) N.A. (non-avalanche region), (<b>h</b>) 0.7 V, (<b>i</b>) 1.2 V. It is noted that the output voltage is slightly less than <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mrow> <msub> <mi>V</mi> <mrow> <mi>e</mi> <mi>x</mi> </mrow> </msub> </mrow> <mo>|</mo> </mrow> </mrow> </semantics></math>, e.g., 1.1 V with <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mrow> <msub> <mi>V</mi> <mrow> <mi>e</mi> <mi>x</mi> </mrow> </msub> </mrow> <mo>|</mo> </mrow> </mrow> </semantics></math> = 1.2 V. Considering the gain of the source follower (0.8), the input-referred voltage swing is 1.4 V, which is larger than <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> <mrow> <msub> <mi>V</mi> <mrow> <mi>e</mi> <mi>x</mi> </mrow> </msub> </mrow> <mo>|</mo> </mrow> </mrow> </semantics></math>.</p>
Figure 7
<p>(<b>a</b>) The standard deviation of <math display="inline"><semantics> <mrow> <mo>|</mo> <mi mathvariant="sans-serif">Δ</mi> <msub> <mi>V</mi> <mi>Q</mi> </msub> <mo>|</mo> </mrow> </semantics></math>. (<b>b</b>) Photon detection efficiency (PDE).</p>
Figure 8
<p>(<b>a</b>) A top view of a SPAD. The edge of the SPAD is hatched. (<b>b</b>,<b>c</b>) Simplified band diagrams at the center of the SPAD (<b>b</b>) and at the edge of the SPAD (<b>c</b>).</p>
Figure 9
<p>(<b>a</b>) A model circuit for resistive quenching. (<b>b</b>) Calculated reverse biases for <span class="html-italic">R</span> = 100 kΩ (blue), <span class="html-italic">R</span> = 30 kΩ (green), and <span class="html-italic">R</span> = ∞ (red). (<b>c</b>) Carrier numbers for <span class="html-italic">R</span> = 100 kΩ (blue), <span class="html-italic">R</span> = 30 kΩ (green), and <span class="html-italic">R</span> = ∞ (red). The capacitance is 6 fF for all conditions.</p>
18 pages, 8471 KiB  
Article
A Comparison of Different Counting Methods for a Holographic Particle Counter: Designs, Validations and Results
by Georg Brunnhofer, Isabella Hinterleitner, Alexander Bergmann and Martin Kraft
Sensors 2020, 20(10), 3006; https://doi.org/10.3390/s20103006 - 25 May 2020
Cited by 1 | Viewed by 3336
Abstract
Digital Inline Holography (DIH) is used in many fields of Three-Dimensional (3D) imaging to locate micro- or nano-particles in a volume and determine their size, shape or trajectories. A variety of wavefront reconstruction approaches have been developed for 3D profiling and tracking to study particle morphology or visualize flow fields. The novel application of Holographic Particle Counters (HPCs) requires observing particle densities in a given sampling volume, which does not strictly necessitate the reconstruction of particles. Such typically spherical objects yield circular interference patterns—also referred to as fringe patterns—at the hologram plane, which can be detected by simpler Two-Dimensional (2D) image processing means. The determination of particle number concentrations (number of particles per unit volume [#/cm³]) may therefore be based on counting fringe patterns at the hologram plane. In this work, we explain the nature of fringe patterns and extract the most relevant features available at the hologram plane. These features aid the identification and selection of suitable pattern recognition techniques and their parameterization. We then present three different techniques customized for the detection and counting of fringe patterns and compare them in terms of detection performance and computational speed. Full article
(This article belongs to the Special Issue Photonics-Based Sensors for Environment and Pollution Monitoring)
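Counting fringe patterns at the hologram plane only requires detecting bright ring structures, so the simplest of the three compared routes (blob detection) essentially reduces to thresholding plus connected-component counting. The sketch below is a deliberately minimal stand-in, assuming a fixed threshold and 4-connectivity rather than the paper's maximum-entropy thresholding; `count_blobs` and the synthetic image are illustrative, not the authors' implementation:

```python
import numpy as np

def count_blobs(image, threshold):
    """Count connected bright regions (4-connectivity) above a fixed
    intensity threshold -- a minimal stand-in for the blob-detection
    counting route; the paper instead derives the threshold via
    maximum-entropy thresholding of the intensity histogram."""
    mask = image > threshold
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                count += 1                      # new blob found
                stack = [(i, j)]
                visited[i, j] = True
                while stack:                    # flood-fill its pixels
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count

# Synthetic hologram plane: two bright fringe centres on a dark background.
img = np.zeros((32, 32))
img[5:9, 5:9] = 1.0
img[20:25, 18:23] = 1.0
print(count_blobs(img, 0.5))  # -> 2
```

The paper's comparison shows why this simplicity is a trade-off: with background fluctuations a misjudged threshold merges or invents blobs, which is where the customized Hough transform and U-Net routes earn their extra cost.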
Show Figures

Figure 1
<p>(<b>a</b>) In-line holographic principle, where a single particle creates a fringe pattern at the camera plane (particles are single nuclei in droplets); (<b>b</b>) schematic of the in-line holographic counting unit, subsequently called the Particle Imaging Unit (PIU), cf. [<a href="#B1-sensors-20-03006" class="html-bibr">1</a>].</p>
Figure 2
<p>(<b>a</b>) example of a typical detection plane at low particle number concentration with overlapping fringe patterns and patterns of different extent as a result of the <math display="inline"><semantics> <msub> <mi>z</mi> <mrow> <mi>p</mi> <mi>r</mi> <mi>t</mi> </mrow> </msub> </semantics></math>-location in the sampling channel; (<b>b</b>) the normalized intensity histogram.</p>
Figure 3
<p>Fresnel Zone Plate (FZP) to estimate the radius <math display="inline"><semantics> <msub> <mi>R</mi> <mi>n</mi> </msub> </semantics></math> of fringes and the size of fringe patterns; <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi>d</mi> <mi>r</mi> </mrow> </semantics></math> is the distance between two successive zone centers of the same parity (even or odd) and may be interpreted as the smallest detail to preserve when lowpass filtering fringe patterns; <math display="inline"><semantics> <mrow> <mi>d</mi> <msub> <mi>r</mi> <mi>n</mi> </msub> </mrow> </semantics></math> is the width of <span class="html-italic">n</span>th zone.</p>
Figure 4
<p>U-Net architecture example for 32 × 32 pixels in the lowest resolution. Each blue box corresponds to a multi-channel feature map. The number of channels is denoted on top of the box. The <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>y</mi> </mrow> </semantics></math>-size is provided at the lower left edge of the box. White boxes represent copied feature maps. The arrows denote the different operations.</p>
Figure 5
<p>Detection result of the customized HT (selected image section). (<b>a</b>) Gaussian-filtered fringe patterns (<math display="inline"><semantics> <mrow> <msub> <mi>σ</mi> <mrow> <mi>l</mi> <mi>p</mi> </mrow> </msub> <mo>=</mo> <mn>2.62</mn> </mrow> </semantics></math>), where all detected fringes are highlighted with circles; (<b>b</b>) the original fringe patterns. The determined centroids equal the actual positions of the particles in the <math display="inline"><semantics> <mrow> <mi>x</mi> <mi>y</mi> </mrow> </semantics></math>-plane. There is one missing hit (orange).</p>
Figure 6
<p>Detection result of blob detection (selected image section). (<b>a</b>) histogram and the optimal threshold <math display="inline"><semantics> <msub> <mi>k</mi> <mrow> <mi>o</mi> <mi>p</mi> <mi>t</mi> </mrow> </msub> </semantics></math> of the whole image, obtained by maximum entropy thresholding. Equation (<a href="#FD8-sensors-20-03006" class="html-disp-formula">8</a>) needs to be confined to a lower threshold limit set to <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mn>1</mn> </msub> <mo>=</mo> <mn>0.5</mn> </mrow> </semantics></math> and an upper limit of <math display="inline"><semantics> <mrow> <msub> <mi>L</mi> <mn>2</mn> </msub> <mo>=</mo> <mn>0.84</mn> </mrow> </semantics></math> which is the peak of the mainlobe; (<b>b</b>) fringe patterns overlaid with the corresponding blobs that result from a threshold at <math display="inline"><semantics> <mrow> <msub> <mi>k</mi> <mrow> <mi>o</mi> <mi>p</mi> <mi>t</mi> </mrow> </msub> <mo>=</mo> <mn>0.74</mn> </mrow> </semantics></math>. Four hits are missing (orange).</p>
Figure 7
<p>Comparison of the monitored particle number concentration to the counting rates obtained by the PIU. (<b>a</b>) customized HT; (<b>b</b>) blob detection with maximum entropy thresholding; (<b>c</b>) DCNN based on a U-Net.</p>
Figure 8
<p>Zoomed segments of measurement samples from <a href="#sensors-20-03006-f007" class="html-fig">Figure 7</a>b that suffer from strong background fluctuations: (<b>1</b>) zero-particle frame; (<b>2</b>) particle number concentration of <math display="inline"><semantics> <mrow> <msub> <mi>C</mi> <mi>N</mi> </msub> <mo>=</mo> <mn>194</mn> </mrow> </semantics></math> #/cm<math display="inline"><semantics> <msup> <mrow/> <mn>3</mn> </msup> </semantics></math>; while the customized HT outputs correct hits (only True Positives), the blob detection fails in both scenarios (also False Positives) because of a misinterpreted intensity threshold in the histogram.</p>
Figure 9
<p>Comparison of computational speed.</p>
9 pages, 10418 KiB  
Letter
Terahertz Gas-Phase Spectroscopy Using a Sub-Wavelength Thick Ultrahigh-Q Microresonator
by Dominik Walter Vogt, Angus Harvey Jones and Rainer Leonhardt
Sensors 2020, 20(10), 3005; https://doi.org/10.3390/s20103005 - 25 May 2020
Cited by 17 | Viewed by 4325
Abstract
The terahertz spectrum provides tremendous opportunities for broadband gas-phase spectroscopy, as numerous molecules exhibit strong fundamental resonances in the THz frequency range. However, cutting-edge THz gas-phase spectrometers require cumbersome multi-pass gas cells to reach sufficient sensitivity for trace-level gas detection. Here, we report the first demonstration of a THz gas-phase spectrometer using a sub-wavelength thick ultrahigh-Q THz disc microresonator. Leveraging the microresonator's ultrahigh quality factor in excess of 120,000, as well as its intrinsically large evanescent field, allows for the implementation of a very compact spectrometer without the need for complex multi-pass gas cells. Water vapour concentrations as low as 4 parts per million at atmospheric conditions have been readily detected in proof-of-concept experiments. Full article
(This article belongs to the Section Optical Sensors)
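The sensitivity argument rests on the resonance linewidth: a loaded Q of 120,000 at the 0.5561 THz resonance mentioned in Figure 1 implies a full width at half maximum of only a few MHz, which is why even weak water-vapour absorption measurably degrades the resonance. A quick check of that standard relation (Δf = f₀/Q), using only numbers quoted in this listing:

```python
def linewidth_hz(f0_hz, q_factor):
    """FWHM linewidth of a resonance: delta_f = f0 / Q."""
    return f0_hz / q_factor

f0 = 0.5561e12   # resonance frequency from Figure 1 [Hz]
q = 120_000      # lower bound on the reported Q factor
print(f"linewidth ~ {linewidth_hz(f0, q) / 1e6:.2f} MHz")  # ~4.63 MHz
```

The 1 MHz frequency step quoted in the Figure 1 caption therefore samples the resonance with a handful of points per linewidth, enough to fit the analytical line-shape model.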
Show Figures

Graphical abstract
Figure 1
<p>(<b>a</b>) Microscope image of a 12 mm diameter, 66 ± 1 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>m thick HRFZ-Si THz disc microresonator. The resonator is mounted on a 1 mm diameter aluminium rod. (<b>b</b>) Corresponding simulated intensity distribution (normalised) on a logarithmic scale showing the large extent of the evanescent field. The microresonator cross-section is indicated with grey lines. Please note that all simulations presented in this work are performed with COMSOL Multiphysics<sup>®</sup> software [<a href="#B32-sensors-20-03005" class="html-bibr">32</a>], and fabrication imperfections are not considered in the simulations. (<b>c</b>) Measured intensity profile of the THz disc microresonator showing the fundamental mode. (<b>d</b>) Resonance at 0.5561 THz (highlighted in red in sub-figure (c)) close to critical coupling. The frequency step size is 1 MHz (blue dots). The fitted analytical model [<a href="#B33-sensors-20-03005" class="html-bibr">33</a>] is shown with the orange solid line.</p>
Figure 2
<p>Schematic of the gas-phase THz spectrometer with a commercial CW-THz system and a sub-wavelength thick THz disc microresonator. The THz microresonator is mounted on a 3D translation stage to control the position of the resonator relative to the air-fluoropolymer-silica waveguide. Both the horizontal and vertical positions of the waveguide relative to the resonator were monitored with digital microscopes. Because of the intriguing field distribution, best coupling is achieved by placing the waveguide above or below the edge of the disc. Strong coupling is typically achieved at a position of the waveguide of about 200 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>m inside from the edge of the microresonator and a gap of about 100 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>m–200 <math display="inline"><semantics> <mi mathvariant="sans-serif">μ</mi> </semantics></math>m to the microresonator. The deployed symmetric-pass THz lenses are specifically designed to achieve high coupling efficiency to the sub-wavelength waveguide [<a href="#B39-sensors-20-03005" class="html-bibr">39</a>].</p>
Figure 3
<p>(<b>a</b>) Measured intensity and phase profiles (blue dots) of the resonance at 0.5561 THz at 7 ppmv with the corresponding fit (orange solid lines). (<b>b</b>) The same resonance at 120 ppmv water vapour concentration. Both measurements are recorded with similar coupling strength to ease comparison.</p>
Figure 4
<p>Measured Q-factors (blue dots with error bars) as a function of water vapour concentration, with the corresponding fit (Equation (<a href="#FD1-sensors-20-03005" class="html-disp-formula">1</a>), orange solid line). The calculated and simulated Q (ppmv) curves using the HITRAN database are shown with green solid and red dashed lines, respectively. The simulated curve assuming a continuously growing water layer film on the disc is shown with the purple dashed line. The water layer is modelled as a Transition Boundary Condition with a uniform coverage of the disc and an effective layer thickness. The dielectric function assumed for the liquid water layer is <math display="inline"><semantics> <mrow> <mi>ϵ</mi> <mo>=</mo> <mn>4.8</mn> <mo>+</mo> <mi>i</mi> <mn>3.2</mn> </mrow> </semantics></math> [<a href="#B45-sensors-20-03005" class="html-bibr">45</a>].</p>
14 pages, 556 KiB  
Article
Characterisation of Ex Vivo Liver Thermal Properties for Electromagnetic-Based Hyperthermic Therapies
by Nuno P. Silva, Anna Bottiglieri, Raquel C. Conceição, Martin O’Halloran and Laura Farina
Sensors 2020, 20(10), 3004; https://doi.org/10.3390/s20103004 - 25 May 2020
Cited by 28 | Viewed by 3773
Abstract
Electromagnetic-based hyperthermic therapies induce a controlled increase of temperature in a specific tissue target in order to increase tissue perfusion or metabolism, or even to induce cell necrosis. These therapies require accurate knowledge of dielectric and thermal properties to optimise treatment plans. While dielectric properties have been well investigated, only a few studies have aimed at understanding the changes of thermal properties, i.e., thermal conductivity, volumetric heat capacity and thermal diffusivity, as a function of temperature. In this study, we experimentally investigate the thermal properties of ex vivo ovine liver in the hyperthermic temperature range, from 25 °C to 97 °C. A significant increase in thermal properties is observed only above 90 °C. An analytical model is developed to describe the thermal properties as a function of temperature. Thermal properties are also investigated during the natural cooling of the heated tissue. The thermal properties prove reversible: during cooling, they follow the same behaviour observed in the heating process. Additionally, tissue density and water content are evaluated at different temperatures. Density does not change with temperature; mass and volume losses change proportionally due to water vaporisation. A 30% water loss is observed above 90 °C. Full article
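The abstract's main finding, thermal properties that stay essentially flat up to about 90 °C, rise above it, and do so reversibly on cooling, can be captured by a simple piecewise (knee) model. The sketch below is illustrative only: the knee at 90 °C follows the abstract, while `k0` (a value in the range typical for soft tissue) and `k_slope` are placeholder assumptions, not the paper's fitted coefficients:

```python
def thermal_conductivity(temp_c, k0=0.52, k_slope=0.05, t_knee=90.0):
    """Illustrative piecewise model of thermal conductivity [W/(m K)]
    vs. temperature: constant below the knee, rising linearly above it.
    Reversibility means the same curve applies on heating and cooling."""
    if temp_c <= t_knee:
        return k0
    return k0 + k_slope * (temp_c - t_knee)

# Flat through the hyperthermic range, increasing only near vaporisation.
for t in (25.0, 60.0, 90.0, 97.0):
    print(t, thermal_conductivity(t))
```

A treatment-planning solver would evaluate such a temperature-dependent property inside the heat equation at each step, rather than assuming room-temperature constants.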
Show Figures

Figure 1
<p>Experimental setup: TEMPOS thermal properties analyser, metallic container, thermal bath and fibre optic temperature sensor [<a href="#B33-sensors-20-03004" class="html-bibr">33</a>].</p>
Figure 2
<p>Sketch of the container used to hold the samples, with the related dimensions. <b>Left</b>: The dual-needle SH-3 sensor from the TEMPOS thermal properties analyser is sketched. The sensor was placed at least 15 mm from the border of the container, according to the manufacturer’s specification. A fibre optic temperature sensor used to monitor the temperature during the experiment is also shown. <b>Right</b>: The container’s lid is sketched together with the holes dedicated to the insertion of the SH-3 sensor and the fibre optic temperature sensor [<a href="#B33-sensors-20-03004" class="html-bibr">33</a>].</p>
Figure 3
<p>Thermal properties of ex vivo ovine liver as a function of the increasing temperature: (<b>a</b>) thermal conductivity, (<b>b</b>) thermal diffusivity and (<b>c</b>) volumetric heat capacity. Average values are reported with the associated uncertainty (error bars). Experimental data (red) as well as literature data (blue) are shown. The best fit model is also presented by a dashed line. The estimated error of how well the least-squares model fits the data is reported in (<b>d</b>).</p>
Figure 4
<p>Thermal properties of ex vivo ovine liver as a function of the decreasing temperature: (<b>a</b>) thermal conductivity, (<b>b</b>) thermal diffusivity and (<b>c</b>) volumetric heat capacity. Raw data measured during the cooling of each sample are reported; their nominal associated uncertainty is 10%, which corresponds to the device accuracy. The best fit model described in <a href="#sensors-20-03004-t002" class="html-table">Table 2</a> is also reported by a dashed line. The estimated error of how well the least-squares model fits the data is reported in (<b>d</b>).</p>
15 pages, 3821 KiB  
Article
Sensor Based on PZT Ceramic Resonator with Lateral Electric Field for Immunodetection of Bacteria in the Conducting Aquatic Environment †
by Irina Borodina, Boris Zaitsev, Andrey Teplykh, Gennady Burygin and Olga Guliy
Sensors 2020, 20(10), 3003; https://doi.org/10.3390/s20103003 - 25 May 2020
Cited by 4 | Viewed by 2396
Abstract
A biological sensor for the detection and identification of bacterial cells, comprising a lateral-electric-field resonator based on PZT ceramics, was experimentally investigated. For bacterial immunodetection, the frequency dependencies of the electric impedance of the sensor with a suspension of microbial cells were measured before and after adding specific antibodies. It was found that the addition of specific antibodies to a suspension of microbial cells led to a significant change in these frequency dependencies due to the increase in the conductivity of the suspension. The analysis of microbial cells was carried out in aqueous solutions with conductivities of 4.5–1000 μS/cm, as well as in tap and drinking water. The detection limit of microbial cells was found to be 10<sup>3</sup> cells/mL, and the analysis time did not exceed 4 min. Experiments with non-specific antibodies were also carried out; their addition to the cell suspension did not lead to a change in the analytical signal of the sensor. This confirms the ability not only to detect, but also to identify, bacterial cells in suspensions. Full article
(This article belongs to the Special Issue Acoustic Wave Sensors for Gaseous and Liquid Environments)
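Operationally, the sensor's readout reduces to comparing the peak of the real part of the impedance before and after antibodies are added: specific antibodies change it markedly, non-specific ones leave it essentially unchanged. A minimal decision-rule sketch of that logic; the 5% threshold and the resistance values are illustrative assumptions, not figures from the paper:

```python
def classify_response(r_before, r_after, rel_threshold=0.05):
    """Minimal decision rule sketched from the abstract: a specific
    antibody-cell interaction shows up as a significant change in the
    peak of the real part of the impedance, while non-specific
    antibodies leave it within noise. The 5% threshold is an
    illustrative assumption, not a value from the paper."""
    delta = abs(r_after - r_before) / r_before
    return "specific" if delta > rel_threshold else "non-specific"

print(classify_response(1000.0, 800.0))   # large change -> specific
print(classify_response(1000.0, 990.0))   # within noise -> non-specific
```

In practice the threshold would be calibrated from control runs such as the antibody-only and non-specific-antibody experiments shown in Figures 4 and 8.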
Show Figures

Graphical abstract
Figure 1
<p>The general scheme of the experiment.</p>
Figure 2
<p>The frequency dependencies of the real (<b>a</b>) and imaginary (<b>b</b>) parts of the electrical impedance of the PZT resonator with an empty liquid container.</p>
Figure 3
<p>The frequency dependencies of the real part of the electrical impedance of the PZT resonator with the container loaded with <span class="html-italic">A. brasilense</span> Sp7 suspension before (curve 1) and after (curve 2) adding the specific Abs. The cell concentration was 10<sup>3</sup> cells/mL, and the concentration of Abs was 4 μg/mL. (<b>a</b>) Peak near 68.7 kHz, (<b>b</b>) peak near 97.8 kHz, (<b>c</b>) peak near 264 kHz.</p>
Figure 4
<p>The frequency dependencies of the real part of the electrical impedance of the PZT resonator with the container loaded with <span class="html-italic">E. coli</span> K-12 suspension before (curve 1) and after (curve 2) adding the Abs specific to <span class="html-italic">A. brasilense</span> Sp7. The cell concentration was 10<sup>3</sup> cells/mL, and the concentration of Abs was 4 μg/mL. (<b>a</b>) Peak near 68.7 kHz, (<b>b</b>) peak near 97.8 kHz, (<b>c</b>) peak near 264 kHz.</p>
Figure 5
<p>Results of the immunodiffusion assay (<b>a</b>) and ELISA (<b>b</b>) of strains <span class="html-italic">A. brasilense</span> Sp7 and <span class="html-italic">E. coli</span> K-12 with the antibodies against <span class="html-italic">A. brasilense</span> Sp7 O-antigen (Ab). For ELISA, the optical density of the enzyme reaction for a concentration of 10<sup>6</sup> cells/mL is shown.</p>
Figure 6
<p>Electron microscopy of (<b>a</b>) cells of <span class="html-italic">A. brasilense</span> Sp7 with Abs specific to <span class="html-italic">A. brasilense</span> Sp7 labeled with colloidal gold (× 10,000) and (<b>b</b>) <span class="html-italic">E. coli</span> K-12 with Abs specific to <span class="html-italic">A. brasilense</span> Sp7.</p>
Figure 7
<p>The time dependencies of the maximum of the real part of the electrical impedance of the PZT resonator during a specific interaction “<span class="html-italic">A. brasilense</span> Sp7—specific Abs” (<b>a</b>) and a nonspecific interaction “<span class="html-italic">E. coli</span> K-12—Abs specific to <span class="html-italic">A. brasilense</span> Sp7” (<b>b</b>), for resonance peaks near frequencies 68.7, 97.8, and 264 kHz.</p>
Figure 8
<p>The time dependencies of the maximum of the real part of the electrical impedance of PZT resonator at addition of Abs specific to <span class="html-italic">A. brasilense</span> Sp7 to distilled water without cells for resonance peaks near the frequencies 68.7, 97.8, and 264 kHz.</p>
Figure 9
<p>Dependencies of ΔR<sub>max</sub> of the resonator on the concentration of the <span class="html-italic">A. brasilense</span> Sp7 cells for resonance peaks near the frequencies 68.7, 97.8, and 264 kHz where x is power. The concentration of antibodies was 4 μg/mL.</p>
Figure 10
<p>Dependencies of the change of ΔR<sub>max</sub> of the resonator on the Abs concentration for <span class="html-italic">A. brasilense</span> Sp7 cell suspension after adding the specific Abs for resonance peaks near frequencies 68.7, 97.8, and 264 kHz. The cell concentration is 10<sup>3</sup> (<b>a</b>) and 10<sup>8</sup> (<b>b</b>) cells/mL.</p>
Full article ">Figure 11
<p>The frequency dependencies of the real part of the electrical impedance of a PZT resonator with a container loaded with <span class="html-italic">A. brasilense</span> Sp7 suspension before (curve 1) and after (curve 2) adding the specific Abs. The cell concentration was 10<sup>3</sup> cells/mL, the concentration of antibodies was 6 μg/mL. The buffer conductivity was 10 μS/cm (<b>a</b>, <b>c</b>, <b>e</b>) and 50 μS/cm (<b>b</b>, <b>d</b>, <b>f</b>). The resonance peaks correspond to frequencies: 68.7 kHz (<b>a</b>, <b>b</b>), 97.8 kHz (<b>c</b>, <b>d</b>), and 264 kHz (<b>e</b>, <b>f</b>).</p>
Figure 11 Cont.">
Full article ">Figure 12
<p>Dependencies of ΔR<sub>max</sub> on the conductivity of the buffer solution when specific Abs were added to the buffer containing <span class="html-italic">A. brasilense</span> Sp7 cells, for resonance peaks near the frequencies 68.7, 97.8, and 264 kHz.</p>
Full article ">Figure 13
<p>Dependencies of ΔR<sub>max</sub> on the conductivity of the buffer solution when Abs specific to <span class="html-italic">A. brasilense</span> Sp7 cells were added to the buffer containing <span class="html-italic">E. coli</span> K-12 cells, for resonance peaks near the frequencies 68.7, 97.8, and 264 kHz.</p>
Full article ">
2 pages, 197 KiB  
Comment
Comment on the Article "A Lightweight and Low-Power UAV-Borne Ground Penetrating Radar Design for Landmine Detection"
by Yuri Álvarez López, María García Fernández and Fernando Las-Heras Andrés
Sensors 2020, 20(10), 3002; https://doi.org/10.3390/s20103002 - 25 May 2020
Cited by 1 | Viewed by 2225
Abstract
This comment aims to correct some incomplete/incorrect information provided in the article "A Lightweight and Low-Power UAV-Borne Ground Penetrating Radar Design for Landmine Detection", where the authors compare their results with some state-of-the-art contributions. Full article
(This article belongs to the Section Remote Sensors)
2 pages, 162 KiB  
Reply
Reply to Comments: A Lightweight and Low-Power UAV-Borne Ground Penetrating Radar Design for Landmine Detection
by Danijel Šipoš and Dušan Gleich
Sensors 2020, 20(10), 3001; https://doi.org/10.3390/s20103001 - 25 May 2020
Cited by 2 | Viewed by 1969
Abstract
In this brief note, we respond to the comments made by Dr [...] Full article
(This article belongs to the Section Remote Sensors)
19 pages, 481 KiB  
Review
Selfishness in Vehicular Delay-Tolerant Networks: A Review
by Ghani-Ur Rehman, Anwar Ghani, Shad Muhammad, Madhusudan Singh and Dhananjay Singh
Sensors 2020, 20(10), 3000; https://doi.org/10.3390/s20103000 - 25 May 2020
Cited by 25 | Viewed by 3834
Abstract
Various operational communication models are using Delay-Tolerant Network as a communication tool in recent times. In such a communication paradigm, sometimes there are disconnections and interferences as well as high delays like vehicle Ad hoc networks (VANETs). A new research mechanism, namely, the [...] Read more.
Various operational communication models now use delay-tolerant networking as a communication tool. In such a paradigm, as in vehicular ad hoc networks (VANETs), there can be frequent disconnections and interference as well as high delays. Because of these shared characteristics, a new research area, the vehicular delay-tolerant network (VDTN), has been introduced. The store-carry-forward mechanism in VDTNs makes it possible to forward messages to the destination without end-to-end connectivity. Accomplishing this requires the cooperation of nodes to forward messages toward the destination. However, not all nodes in the network can be assumed to cooperate and contribute their computing resources to message forwarding without any reward. In particular, selfish nodes may refuse to forward messages in order to conserve their own resources. This is one of the major challenges in VDTNs, and incentive mechanisms are the principal solution. This paper presents a detailed study of recently proposed incentive schemes for VDTNs and outlines open challenges and future directions for interested researchers. Full article
(This article belongs to the Special Issue Internet of Things for Smart Community Solutions)
Show Figures

Figure 1
<p>Store-carry-forwarding scenario in Vehicle Delay-Tolerant Networks (VDTNs).</p>
Full article ">Figure 2
<p>Types of selfish behavior.</p>
Full article ">
23 pages, 850 KiB  
Article
Remote Monitoring of Human Vital Signs Based on 77-GHz mm-Wave FMCW Radar
by Yong Wang, Wen Wang, Mu Zhou, Aihu Ren and Zengshan Tian
Sensors 2020, 20(10), 2999; https://doi.org/10.3390/s20102999 - 25 May 2020
Cited by 140 | Viewed by 12139
Abstract
In recent years, non-contact radar detection technology has been able to achieve long-term and long-range detection for the breathing and heartbeat signals. Compared with contact-based detection methods, it brings a more comfortable and a faster experience to the human body, and it has [...] Read more.
In recent years, non-contact radar detection technology has become able to achieve long-term, long-range detection of breathing and heartbeat signals. Compared with contact-based detection methods, it offers a more comfortable and faster experience for the subject, and it has gradually attracted attention in the field of radar sensing. This paper therefore extends the application of millimeter-wave radar to the field of health care. The millimeter-wave radar first transmits a frequency-modulated continuous wave (FMCW) and collects the echo signals from the human body. The phase information of the intermediate-frequency (IF) signals, which contains the breathing and heartbeat signals, is then extracted, and the direct current (DC) offset of the phase is corrected using a circle-center dynamic tracking algorithm. The extended differentiate and cross-multiply (DACM) algorithm is further applied for phase unwrapping. We propose two algorithms, namely compressive sensing based on orthogonal matching pursuit (CS-OMP) and rigrsure adaptive soft-threshold noise reduction based on the discrete wavelet transform (RA-DWT), to separate and reconstruct the breathing and heartbeat signals. A frequency-domain fast Fourier transform and a time-domain autocorrelation estimation algorithm are then used to calculate the respiratory and heartbeat rates. The proposed algorithms are compared with contact-based detection methods. The results demonstrate that the proposed algorithms effectively suppress noise and harmonic interference, and their accuracies for both the respiratory rate and the heartbeat rate reach about 93%. Full article
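As a quick illustration of the phase-based principle described above (not the authors' code), the sketch below simulates an idealized IF phasor whose phase follows the standard FMCW displacement model φ = 4πd(t)/λ, then recovers the breathing rate by arctangent demodulation, phase unwrapping, and an FFT. All signal parameters (20 Hz slow-time rate, 1 mm chest displacement, 0.25 Hz breathing) are assumptions chosen for the example.

```python
import numpy as np

fs = 20.0                      # slow-time (chirp) sampling rate, Hz (assumed)
t = np.arange(0, 32, 1 / fs)   # 32 s observation window
wavelength = 3e8 / 77e9        # 77 GHz carrier -> ~3.9 mm wavelength

f_breath = 0.25                # ground truth: 15 breaths/min
d = 1e-3 * np.sin(2 * np.pi * f_breath * t)  # 1 mm chest displacement
phi = 4 * np.pi * d / wavelength             # phase seen by the radar

iq = np.exp(1j * phi)                # idealized, noise-free IF phasor
phase = np.unwrap(np.angle(iq))      # arctangent demodulation + unwrapping

# Frequency-domain rate estimate: FFT of the slow-time phase
spec = np.abs(np.fft.rfft(phase - phase.mean()))
freqs = np.fft.rfftfreq(len(phase), 1 / fs)
rate_hz = freqs[np.argmax(spec)]
print(round(rate_hz * 60))           # -> 15 breaths per minute
```

A real pipeline adds the DC-offset correction, DACM, and signal-separation steps the abstract lists, which matter once noise and the much weaker heartbeat component are present.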
(This article belongs to the Section Remote Sensors)
Show Figures

Figure 1
<p>FMCW radar system block diagram.</p>
Full article ">Figure 2
<p>Time dependence of transmitted and received signals (frequency as a function of time).</p>
Full article ">Figure 3
<p>Human heartbeat and breathing signal detection flowchart.</p>
Full article ">Figure 4
<p>Single chirp and multiple chirps comparison: (<b>a</b>) range spectrum; and (<b>b</b>) signal-to-noise ratio.</p>
Full article ">Figure 5
<p>RTM construction.</p>
Full article ">Figure 6
<p>Waveform and spectrum of separated signal: (<b>a</b>) heartbeat waveform; (<b>b</b>) heartbeat spectrum; (<b>c</b>) respiratory waveform; and (<b>d</b>) respiratory spectrum.</p>
Full article ">Figure 7
<p>Experimental platform: (<b>a</b>) experimental scene; (<b>b</b>) radar antenna layout; and (<b>c</b>) radar chirp parameters setting.</p>
Full article ">Figure 8
<p>Target Detection: (<b>a</b>) RTM; (<b>b</b>) range FFT; and (<b>c</b>) vibration-range.</p>
Figure 8 Cont.">
Full article ">Figure 9
<p>DC offset correction: (<b>a</b>) center-dynamic DC offset tracking correction; (<b>b</b>) time-domain; and (<b>c</b>) frequency domain.</p>
Full article ">Figure 10
<p>Phase extraction: (<b>a</b>) phase waveform and its corresponding spectrum after arctangent demodulation; (<b>b</b>) phase waveform and its spectrum after phase unwrapping; and (<b>c</b>) phase waveform after phase difference and its spectrum.</p>
Full article ">Figure 11
<p>CS-OMP reconstructed signal: (<b>a</b>) heartbeat signal and its spectrum; (<b>b</b>) respiration signal and its spectrum.</p>
Full article ">Figure 12
<p>Wavelet decomposition and noise reduction reconstruction results.</p>
Full article ">Figure 13
<p>Wavelet decomposition spectrum: (<b>a</b>) respiratory signal; and (<b>b</b>) heartbeat signal.</p>
Full article ">Figure 14
<p>Separate reconstructed waveform: (<b>a</b>) heartbeat waveform and its autocorrelation; and (<b>b</b>) breathing waveform and its autocorrelation.</p>
Full article ">Figure 15
<p>Real-time comparison: (<b>a</b>) heartbeat rate; and (<b>b</b>) breathing rate.</p>
Full article ">
19 pages, 15440 KiB  
Article
Residual Energy Analysis in Cognitive Radios with Energy Harvesting UAV under Reliability and Secrecy Constraints
by Waqas Khalid, Heejung Yu and Song Noh
Sensors 2020, 20(10), 2998; https://doi.org/10.3390/s20102998 - 25 May 2020
Cited by 15 | Viewed by 3538
Abstract
The integration of unmanned aerial vehicles (UAVs) with a cognitive radio (CR) technology can improve the spectrum utilization. However, UAV network services demand reliable and secure communications, along with energy efficiency to prolong battery life. We consider an energy harvesting UAV (e.g., surveillance [...] Read more.
The integration of unmanned aerial vehicles (UAVs) with cognitive radio (CR) technology can improve spectrum utilization. However, UAV network services demand reliable and secure communications, along with energy efficiency to prolong battery life. We consider an energy-harvesting UAV (e.g., a surveillance drone) flying periodically in a circular track around a ground-mounted primary transmitter. The UAV, with a limited energy budget, harvests radio-frequency energy and uses the primary spectrum band opportunistically. To obtain intuitive insight into the performance of energy harvesting and of reliable and secure communications, closed-form expressions for the residual energy, connection outage probability, and secrecy outage probability are analytically derived. We formulate optimization problems for the residual energy under reliability and secrecy constraints, for scenarios without and with an eavesdropper, respectively, and obtain analytical solutions under the approximation of perfect sensing. Numerical simulations verify the analytical results and identify the sensing-phase length and transmit power required for maximum residual energy in both the reliable and the secure communication scenarios. Additionally, it is shown that the residual energy in secure communication is lower than that in reliable communication. Full article
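The paper's closed-form outage expressions are specific to its UAV model, but the general workflow (derive a closed form, verify it by Monte-Carlo simulation) can be sketched with the textbook Rayleigh-fading connection outage probability, P_out = 1 − exp(−(2^R − 1)/γ̄), which is a standard result and not the paper's formula; the SNR and rate values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
snr_bar = 10.0   # average receive SNR, linear scale (assumed)
R = 1.0          # target rate in bits/s/Hz (assumed)

# Closed form: P(log2(1 + snr_bar * |h|^2) < R) with |h|^2 ~ Exp(1)
closed_form = 1 - np.exp(-(2**R - 1) / snr_bar)

# Monte-Carlo check of the same event
gain = rng.exponential(1.0, size=1_000_000)
mc = np.mean(np.log2(1 + snr_bar * gain) < R)

print(closed_form, mc)   # the two estimates agree to a few decimals
```

The secrecy outage analysis follows the same pattern with the eavesdropper's channel entering the rate condition.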
Show Figures

Figure 1
<p>System model.</p>
Full article ">Figure 2
<p>Total residual energy, connection and secrecy outage probabilities, and their approximation under perfect sensing with respect to (<b>a</b>) length of sensing phase (duration) with a fixed transmit power, (<b>b</b>) transmit power for a fixed length of sensing phase.</p>
Full article ">Figure 3
<p>Total residual energy and its approximation with respect to energy harvesting (EH) power splitting ratio for <math display="inline"><semantics> <mrow> <mi>θ</mi> <mo>=</mo> <mo>{</mo> <mi>π</mi> <mo>/</mo> <mn>2</mn> <mo>,</mo> <mi>π</mi> <mo>}</mo> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <msub> <mi>P</mi> <mrow> <mi>t</mi> <mi>x</mi> </mrow> </msub> <mo>=</mo> <mrow> <mo>{</mo> <mn>50</mn> <mo>,</mo> <mn>90</mn> <mo>}</mo> </mrow> </mrow> </semantics></math> mW.</p>
Full article ">Figure 4
<p>Monotonic functions of connection outage probability, secrecy outage probability vs. (<b>a</b>) mean (expectation) of <math display="inline"><semantics> <mrow> <mrow> <mo>|</mo> </mrow> <msub> <mi>h</mi> <mi>l</mi> </msub> <msup> <mrow> <mo>|</mo> </mrow> <mn>2</mn> </msup> </mrow> </semantics></math> and (<b>b</b>) target rates.</p>
Full article ">Figure 5
<p>Exact and approximated total residual energy as a function of <math display="inline"><semantics> <mi>θ</mi> </semantics></math> for EH-unmanned aerial vehicle (UAV) under (<b>a</b>) connection outage constraint, (<b>b</b>) secrecy outage constraint.</p>
Full article ">Figure 6
<p>Exact and approximated total residual energy as a function of <math display="inline"><semantics> <msub> <mi>P</mi> <mrow> <mi>t</mi> <mi>x</mi> </mrow> </msub> </semantics></math> for EH-UAV under (<b>a</b>) connection outage constraint, (<b>b</b>) secrecy outage constraint.</p>
Full article ">Figure 7
<p>Optimal lengths of sensing phase (duration) and optimal transmit powers for EH-UAV over (<b>a</b>) connection outage threshold (<math display="inline"><semantics> <msub> <mi>φ</mi> <mn>1</mn> </msub> </semantics></math>), (<b>b</b>) secrecy outage threshold (<math display="inline"><semantics> <msub> <mi>φ</mi> <mn>2</mn> </msub> </semantics></math>).</p>
Full article ">
26 pages, 1131 KiB  
Review
A Survey of Marker-Less Tracking and Registration Techniques for Health & Environmental Applications to Augmented Reality and Ubiquitous Geospatial Information Systems
by Abolghasem Sadeghi-Niaraki and Soo-Mi Choi
Sensors 2020, 20(10), 2997; https://doi.org/10.3390/s20102997 - 25 May 2020
Cited by 32 | Viewed by 6688
Abstract
Most existing augmented reality (AR) applications are suitable for cases in which only a small number of real world entities are involved, such as superimposing a character on a single surface. In this case, we only need to calculate pose of the camera [...] Read more.
Most existing augmented reality (AR) applications are suitable for cases in which only a small number of real-world entities are involved, such as superimposing a character on a single surface. In this case, we only need to calculate the pose of the camera relative to that surface. However, when an AR health or environmental application involves a one-to-one relationship between an entity in the real world and the corresponding object in the computer model (a geo-referenced object), we need to estimate the pose of the camera with reference to a common coordinate system for better registration of geo-referenced objects in the real world. Innovations in cheap sensors, computer vision techniques, machine learning, and computing power have made it possible to develop applications with more precise matching between the real world and virtual content. AR tracking techniques can be divided into two subcategories: marker-based and marker-less approaches. This paper provides a comprehensive overview of marker-less registration and tracking techniques and reviews their most important categories in the context of ubiquitous Geospatial Information Systems (GIS) and AR, focusing on health and environmental applications. Basic ideas, advantages and disadvantages, and challenges are discussed for each subcategory of tracking and registration techniques. Sufficiently precise virtual models of the environment are needed both for tracking calibration and for visualization. Ubiquitous GISs can play an important role in the development of AR by providing seamless and precise spatial data for outdoor (e.g., environmental applications) and indoor (e.g., health applications) environments. Full article
Show Figures

Figure 1
<p>Evolution of GIS user interfaces.</p>
Full article ">Figure 2
<p>Tracking and registration techniques for mobile augmented reality.</p>
Full article ">
26 pages, 4560 KiB  
Review
Hollow-Core Photonic Crystal Fiber Gas Sensing
by Ruowei Yu, Yuxing Chen, Lingling Shui and Limin Xiao
Sensors 2020, 20(10), 2996; https://doi.org/10.3390/s20102996 - 25 May 2020
Cited by 62 | Viewed by 8896
Abstract
Fiber gas sensing techniques have been applied for a wide range of industrial applications. In this paper, the basic fiber gas sensing principles and the development of different fibers have been introduced. In various specialty fibers, hollow-core photonic crystal fibers (HC-PCFs) can overcome [...] Read more.
Fiber gas sensing techniques have been applied to a wide range of industrial applications. In this paper, the basic fiber gas sensing principles and the development of different fibers are introduced. Among the various specialty fibers, hollow-core photonic crystal fibers (HC-PCFs) can overcome the fundamental limits of solid fibers and have recently attracted intense interest. Here, we focus on a review of HC-PCF gas sensing, including the light-guiding mechanisms of HC-PCFs, various sensing configurations, microfabrication approaches, and recent research advances, including mid-infrared gas sensors based on hollow-core anti-resonant fibers. This review provides a detailed and deep understanding of HC-PCF gas sensors and will promote more practical applications of HC-PCFs in the near future. Full article
(This article belongs to the Special Issue Photonic Crystal Fiber Gas Sensor)
Show Figures

Figure 1
<p>Principle of absorption spectroscopy [<a href="#B42-sensors-20-02996" class="html-bibr">42</a>].</p>
Full article ">Figure 2
<p>Mid-infrared (MIR) absorption spectroscopy for major atmospheric species in the wavelength range of (<b>a</b>) 2.5–5.555 μm and (<b>b</b>) 5.556–25 μm [<a href="#B102-sensors-20-02996" class="html-bibr">102</a>,<a href="#B103-sensors-20-02996" class="html-bibr">103</a>].</p>
Full article ">Figure 3
<p>Phase modulation principle based on the photothermal effect [<a href="#B17-sensors-20-02996" class="html-bibr">17</a>].</p>
Full article ">Figure 4
<p>Principle of gas detection based on the photoacoustic spectroscopy. f: periodic modulation with a frequency f [<a href="#B42-sensors-20-02996" class="html-bibr">42</a>].</p>
Full article ">Figure 5
<p>(<b>a</b>,<b>b</b>) Cross-section images of suspended core photonic crystal fibers (SC-PCFs) with triangular-like microcores [<a href="#B53-sensors-20-02996" class="html-bibr">53</a>,<a href="#B58-sensors-20-02996" class="html-bibr">58</a>]. (<b>c</b>,<b>d</b>) Cross-section images of exposed-core SC-PCFs fabricated by etching the drawn fiber and drawing from the drawing tower [<a href="#B112-sensors-20-02996" class="html-bibr">112</a>]; the image sizes are 40 µm.</p>
Full article ">Figure 6
<p>Scanning electron microscopy (SEM) images of some representative hollow-core photonic crystal fibers (HC-PCFs): (<b>a</b>) Commercially available hollow-core photonic bandgap fiber (HC-PBGF): HC-1550-02 from NKT Photonics [<a href="#B138-sensors-20-02996" class="html-bibr">138</a>]; (<b>b</b>) HC-PCF designed for guiding in the MIR region [<a href="#B137-sensors-20-02996" class="html-bibr">137</a>]; (<b>c</b>) hollow-core Bragg fiber [<a href="#B74-sensors-20-02996" class="html-bibr">74</a>]; (<b>d</b>) Kagome-lattice HC-PCF [<a href="#B124-sensors-20-02996" class="html-bibr">124</a>]; (<b>e</b>) hollow-core anti-resonant fiber (HC-ARF) [<a href="#B59-sensors-20-02996" class="html-bibr">59</a>]; (<b>f</b>) nodeless HC-ARF [<a href="#B78-sensors-20-02996" class="html-bibr">78</a>]; (<b>g</b>) hollow fiber with negative curvature of the core wall [<a href="#B73-sensors-20-02996" class="html-bibr">73</a>]; (<b>h</b>) double anti-resonant hollow square-core fiber [<a href="#B72-sensors-20-02996" class="html-bibr">72</a>].</p>
Full article ">Figure 7
<p>Experimental setup for the gas detection with a 15-m-long HC-PCF. The inset shows the SEM image of the HC-PCF [<a href="#B38-sensors-20-02996" class="html-bibr">38</a>].</p>
Full article ">Figure 8
<p>Schematic structure of the gas-sensing probe with a replaceable insert [<a href="#B20-sensors-20-02996" class="html-bibr">20</a>].</p>
Full article ">Figure 9
<p>(<b>a</b>) Fiber section connector. (<b>b</b>) Multi-section sensing head containing 4 sections of photonic crystal fibers (PCF) [<a href="#B19-sensors-20-02996" class="html-bibr">19</a>].</p>
Full article ">Figure 10
<p>(<b>a</b>) Fiber assembly for the gas sensor. FC/APC: ferrule connector/angled physical contact. (<b>b</b>,<b>c</b>) Images of the laser-drilled hole in the side of a glass capillary tube, from vertical and front views, respectively [<a href="#B85-sensors-20-02996" class="html-bibr">85</a>].</p>
Full article ">Figure 11
<p>(<b>a</b>) SEM image of a penetrated hole of 5 × 5 μm<sup>2</sup> in an HC-PBGF. (<b>b</b>) Image of the mechanical connection between the PCF and the SMF with a micromirror [<a href="#B88-sensors-20-02996" class="html-bibr">88</a>].</p>
Full article ">Figure 12
<p>The femtosecond (fs) laser system for fabricating microchannels in an HC-PBGF [<a href="#B42-sensors-20-02996" class="html-bibr">42</a>].</p>
Full article ">Figure 13
<p>(<b>a</b>) Cross-section of the exposed-core unsymmetrical-gap nodeless HC-ARF. (<b>b</b>) Schematic three-dimensional image of the PCF with an exposed core, where the light interacts with matter [<a href="#B80-sensors-20-02996" class="html-bibr">80</a>].</p>
Full article ">Figure 14
<p>(<b>a</b>) Confinement losses in the two orthogonal polarization directions (the <span class="html-italic">x</span>-axis is the black dashed line and the <span class="html-italic">y</span>-axis is the red solid line) as functions of the larger gap G for the PCF at the wavelength of 1550 nm. The insets are the fundamental mode patterns in the <span class="html-italic">y</span>-axis direction with G values of 2.5, 7.5, 12.5, and 15 μm at the wavelength of 1550 nm, respectively. (<b>b</b>) The fraction of power inside the air core (red solid line) and in air (black dashed line) of the fiber with a G value of 7.5 μm. The inset is the mode pattern of the fundamental mode at the wavelength of 1550 nm [<a href="#B80-sensors-20-02996" class="html-bibr">80</a>].</p>
Full article ">Figure 15
<p>Experimental setup of the tunable diode laser absorption spectroscopy (TDLAS)-based CO sensor using the HC-ARF. FM: flip mirror; L: lens; M: mirror; CM: concave mirror; PD: photodetector; P: pressure gauge; M: mass flow meter; LPF: low-pass filter [<a href="#B95-sensors-20-02996" class="html-bibr">95</a>].</p>
Full article ">Figure 16
<p>(<b>a</b>) Experimental setup of the nitrous oxide detection based on the laser absorption spectroscopy. QCL: quantum cascade laser; PD: photodetector; HCF: HC-ARF. (<b>b</b>) Cross-section of the HC-ARF with nested capillaries obtained with SEM [<a href="#B67-sensors-20-02996" class="html-bibr">67</a>].</p>
Full article ">
5 pages, 723 KiB  
Comment
Comment on “Hurdle Clearance Detection and Spatiotemporal Analysis in 400 Meters Hurdles Races Using Shoe-Mounted Magnetic and Inertial Sensor”
by Marcus Schmidt, Tobias Alt, Kevin Nolte and Thomas Jaitner
Sensors 2020, 20(10), 2995; https://doi.org/10.3390/s20102995 - 25 May 2020
Cited by 5 | Viewed by 2840
Abstract
The recent paper “Hurdle Clearance Detection and Spatiotemporal Analysis in 400 Meters Hurdles Races Using Shoe-Mounted Magnetic and Inertial Sensor” (Sensors 2020, 20, 354) proposes a wearable system based on a foot-worn miniature inertial measurement unit (MIMU) and different methods to detect [...] Read more.
The recent paper “Hurdle Clearance Detection and Spatiotemporal Analysis in 400 Meters Hurdles Races Using Shoe-Mounted Magnetic and Inertial Sensor” (Sensors 2020, 20, 354) proposes a wearable system based on a foot-worn miniature inertial measurement unit (MIMU) and different methods to detect hurdle clearance and to identify the leading leg during 400-m hurdle races. Furthermore, the presented system identifies changes in contact time, flight time, running speed, and step frequency throughout the race. In this comment, we discuss the original paper with a focus on the ecological validity and the applicability of MIMU systems for field-based settings, such as training or competition for elite athletes. Full article
Show Figures

Figure 1
<p>Bland–Altman plot of the estimation errors for the contact time (CT) measured by a MIMU compared to a reference system. Individual athletes are marked in different colors (modified according to Schmidt et al. [<a href="#B14-sensors-20-02995" class="html-bibr">14</a>]).</p>
Full article ">Figure 2
<p>Bland–Altman plot of the estimation errors for CT of the validation study [<a href="#B13-sensors-20-02995" class="html-bibr">13</a>].</p>
Full article ">
18 pages, 631 KiB  
Article
Efficient Resource Allocation for Backhaul-Aware Unmanned Air Vehicles-to-Everything (U2X)
by Takshi Gupta, Fabio Arena and Ilsun You
Sensors 2020, 20(10), 2994; https://doi.org/10.3390/s20102994 - 25 May 2020
Cited by 8 | Viewed by 3033
Abstract
Unmanned aerial vehicles (UAVs) allow better coverage, enhanced connectivity, and elongated lifetime when used in telecommunications. However, these features are predominately affected by the policies used for sharing resources amongst the involved nodes. Moreover, the architecture and deployment strategies also have a considerable [...] Read more.
Unmanned aerial vehicles (UAVs) allow better coverage, enhanced connectivity, and prolonged lifetime when used in telecommunications. However, these features are predominantly affected by the policies used for sharing resources among the involved nodes. Moreover, the architecture and deployment strategies also have a considerable impact on their functionality. Recently, many researchers have suggested layer-based UAV deployment, which allows better communication between the entities. Despite these solutions, only a limited number of studies focus on connecting layered UAVs to everything (U2X), and none of them has addressed resource allocation. This paper considers the issue of resource allocation and helps decide the optimal number of transfers among the UAVs, which can conserve the maximum amount of energy while increasing the overall probability of resource allocation. The proposed approach relies on mutual-agreement-based reward theory, which uses the Minkowski distance as the decisive metric and enables efficient resource allocation for backhaul-aware U2X. The effectiveness of the proposed solution is demonstrated using Monte-Carlo simulations. Full article
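The Minkowski distance the abstract names as the decisive metric generalizes the Manhattan (p = 1) and Euclidean (p = 2) distances. A minimal sketch follows; the UAV and ground-user coordinates are hypothetical values chosen only so the two orders give clean results.

```python
def minkowski(a, b, p):
    """Minkowski distance of order p between equal-length coordinate tuples."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1 / p)

uav = (0.0, 0.0, 120.0)   # hypothetical UAV position (x, y, altitude in m)
user = (30.0, 40.0, 0.0)  # hypothetical ground user

print(minkowski(uav, user, 1))   # Manhattan distance: 190.0
print(minkowski(uav, user, 2))   # Euclidean distance: 130.0
```

Choosing p tunes how strongly the largest coordinate difference (here, the altitude gap) dominates the metric, which is the kind of knob a distance-driven offloading rule can exploit.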
Show Figures

Figure 1
<p>An exemplary illustration of a multi-layer U2X scenario for resource allocation.</p>
Full article ">Figure 2
<p>An exemplary illustration of reward-jump system used for mutual agreement between the entities to ensure smooth and low-overhead offloading.</p>
Full article ">Figure 3
<p>Flow chart representing all of the procedures and work flow for energy efficient resource allocation in U2X.</p>
Full article ">Figure 4
<p>Probability variations for Monte-carlo based simulations vs. Number of UAVs.</p>
Full article ">Figure 5
<p>Probability of handling users with reward-based reshuffling vs. number of UAVs.</p>
Full article ">Figure 6
<p>Induced failure policies vs. number of UAVs.</p>
Full article ">Figure 7
<p>Rewards vs. number of UAVs.</p>
Full article ">Figure 8
<p>Number of transfers vs. number of UAVs at varying percentages of failures.</p>
Full article ">Figure 9
<p>Range of Number of transfers vs. number of UAVs at a fixed number of failures.</p>
Full article ">Figure 10
<p>Energy conservation vs. number of UAVs.</p>
Full article ">Figure 11
<p>Probability of resource allocation with random link deployment vs. number of UAVs.</p>
Full article ">Figure 12
<p>Probability of resource allocation with fixed link deployment vs. number of UAVs.</p>
Full article ">
3 pages, 431 KiB  
Reply
Reply to Comments: Hurdle Clearance Detection and Spatiotemporal Analysis in 400 Meters Hurdles Races Using Shoe-Mounted Magnetic and Inertial Sensor
by Mathieu Falbriard, Maurice Mohr and Kamiar Aminian
Sensors 2020, 20(10), 2993; https://doi.org/10.3390/s20102993 - 25 May 2020
Viewed by 2014
Abstract
The current document answers the comment addressed by Schmidt, M [...] Full article
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1
<p>The processing steps added before temporal event detection to improve the segmentation into mid-swing-to-mid-swing periods.</p>
Full article ">
24 pages, 5063 KiB  
Article
Routing Based Multi-Agent System for Network Reliability in the Smart Microgrid
by Niharika Singh, Irraivan Elamvazuthi, Perumal Nallagownden, Gobbi Ramasamy and Ajay Jangra
Sensors 2020, 20(10), 2992; https://doi.org/10.3390/s20102992 - 25 May 2020
Cited by 15 | Viewed by 3590
Abstract
Microgrids help to achieve power balance and energy allocation optimality for the defined load networks. One of the major challenges associated with microgrids is the design and implementation of a suitable communication-control architecture that can coordinate actions with system operating conditions. In this [...] Read more.
Microgrids help to achieve power balance and optimal energy allocation for the defined load networks. One of the major challenges associated with microgrids is the design and implementation of a suitable communication-control architecture that can coordinate actions with system operating conditions. This paper focuses on enhancing the intelligence of microgrid networks using a multi-agent system, with validation carried out using network performance metrics, i.e., delay, throughput, jitter, and queuing. Network performance is analyzed for small-, medium-, and large-scale microgrids using Institute of Electrical and Electronics Engineers (IEEE) test systems. We propose multi-agent-based Bellman routing (MABR), in which the Bellman–Ford algorithm uses system operating conditions to command the actions of multiple agents installed over the overlay microgrid network. The proposed agent-based routing computes the shortest path to a given destination to improve network quality and communication reliability. The algorithm is defined for the distributed nature of the microgrid, for an ideal communication network and for two cases of faults injected into the network. With this model, improvements of 35%–43.3% were achieved in network delay based on the Constant Bit Rate (CBR) traffic model for microgrids. Full article
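The Bellman–Ford relaxation that MABR builds on can be sketched in a few lines; the 4-node overlay topology and link costs below are invented for illustration and stand in for the agent-measured link delays of a real microgrid network.

```python
def bellman_ford(n, edges, src):
    """Shortest-path distances from src. edges: list of (u, v, cost) directed links."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0.0
    for _ in range(n - 1):          # relax every edge n-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Hypothetical overlay: the direct link 0 -> 3 costs more than the path via 1 and 2
links = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 1.5), (0, 3, 6.0), (1, 3, 4.0)]
print(bellman_ford(4, links, 0))    # [0.0, 2.0, 3.0, 4.5]
```

Bellman–Ford suits a distributed setting like this because each relaxation step only needs information from a node's direct neighbors, which maps naturally onto per-node agents exchanging link-cost updates.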
(This article belongs to the Special Issue Smart Energy City with AI, IoT and Big Data)
Show Figures

Figure 1
<p>Structure of a microgrid network.</p>
Full article ">Figure 2
<p>Microgrid communication and control co-simulation approach for communication and control design.</p>
Full article ">Figure 3
<p>A microgrid physical bus system network.</p>
Full article ">Figure 4
<p>Construction of a microgrid bus system-based communication network.</p>
Full article ">Figure 5
<p>Interactive environment of a microgrid communication network.</p>
Full article ">Figure 6
<p>Flow chart for the proposed methodology.</p>
Full article ">Figure 7
<p>Interconnection of the software modules used for simulation of the grid network.</p>
Full article ">Figure 8
<p>The packet flow for Transmission Control Protocol (TCP) in the microgrid network.</p>
Full article ">Figure 9
<p>Classification scale of Micro-Grids.</p>
Full article ">Figure 10
<p>Classifications of micro-grids by Mitsubishi Ltd.</p>
Full article ">
20 pages, 4673 KiB  
Article
Quality Control and Pre-Analysis Treatment of the Environmental Datasets Collected by an Internet Operated Deep-Sea Crawler during Its Entire 7-Year Long Deployment (2009–2016)
by Damianos Chatzievangelou, Jacopo Aguzzi, Martin Scherwath and Laurenz Thomsen
Sensors 2020, 20(10), 2991; https://doi.org/10.3390/s20102991 - 25 May 2020
Cited by 6 | Viewed by 2657
Abstract
Deep-sea environmental datasets are ever-increasing in size and diversity, as technological advances lead monitoring studies towards long-term, high-frequency data acquisition protocols. This study presents examples of pre-analysis data treatment steps applied to the environmental time series collected by the Internet Operated Deep-sea Crawler “Wally” during a 7-year deployment (2009–2016) in the Barkley Canyon methane hydrates site, off Vancouver Island (BC, Canada). Pressure, temperature, electrical conductivity, flow, turbidity, and chlorophyll data were subjected to different standardizing, normalizing, and de-trending methods on a case-by-case basis, depending on the nature of the treated variable and the range and scale of the values provided by each of the different sensors. The final pressure, temperature, and electrical conductivity (transformed to practical salinity) datasets are ready for use. On the other hand, in the cases of flow, turbidity, and chlorophyll, further in-depth processing, in tandem with data describing the movement and position of the crawler, will be needed in order to filter out all possible effects of the latter. Our work highlights challenges and solutions in multiparametric data acquisition and quality control, taking a significant step towards ensuring that the available environmental data meet high quality standards and facilitate the production of reliable scientific results. Full article
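Two of the pre-analysis steps named above, standardization and linear de-trending, can be sketched in a few lines. This is a hedged, pure-Python stand-in on synthetic data; the actual pipeline, window sizes, and sensor series are the authors':

```python
# Z-score standardization (zero mean, unit standard deviation) and removal of
# a least-squares linear trend from a regularly sampled series.
def standardize(series):
    """Center to zero mean and scale to unit (population) standard deviation."""
    n = len(series)
    mean = sum(series) / n
    sd = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    return [(x - mean) / sd for x in series]

def detrend(series):
    """Subtract the least-squares linear trend, assuming unit sample spacing."""
    n = len(series)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(series) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, series)) / \
            sum((x - x_mean) ** 2 for x in xs)
    return [y - (y_mean + slope * (x - x_mean)) for x, y in zip(xs, series)]

trended = [0.5 * t + 1.0 for t in range(10)]   # synthetic linear drift
residuals = detrend(trended)                    # ~zero for a pure trend
```

On a purely linear input the detrended residuals vanish; on real sensor data the residuals carry the signal of interest.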
Show Figures

Figure 1. The location of the Barkley Canyon hydrates site (A) and the crawler (B).
Figure 2. Steps of pressure data processing. (a) Original 7-year time series (December 2009–December 2016), with red lines indicating the pressure mean in each temporal window; (b) cumulative sum of the model-predicted differences, with data gaps filled to facilitate the Singular Spectrum Analysis (SSA) and the red line indicating the underlying linear trend; (c) clean time series, with the original data gaps restored.
Figure 3. Steps of temperature data processing. (a) Original 7-year time series (December 2009–December 2016), with black lines indicating the mean in each temporal window; (b) clean time series after centering.
Figure 4. Steps of conductivity data processing. (a) Original 7-year time series (December 2009–December 2016), with black lines indicating the mean in each temporal window; (b) linear relationship between conductivity and temperature; (c) model-predicted time series.
Figure 5. Steps of flow data processing. (a) Original 7-year time series (December 2009–December 2016) of E (East) and N (North) flow components (black for E, gray for N); (b) histograms for each component for the despiking of the Aquadopp data; (c) complete time series of flow magnitude and direction.
Figure 6. Steps of turbidity data processing. (a) Original 7-year time series (December 2009–December 2016), with the red line indicating the sensor’s theoretical maximum reading (25 Formazin Turbidity Units (FTU), based on the selected sensitivity and range settings applied before deployment); (b) clean time series, after back-calculating the sensor’s electrical output and applying the correct calibration coefficients.
Figure 7. Steps of chlorophyll data processing. (a) Original 7-year time series (December 2009–December 2016), with the red line indicating the sensor’s theoretical maximum reading (1 μg/L, based on the selected sensitivity and range settings applied before deployment); (b) processed time series, after back-calculating the sensor’s electrical output and applying the correct calibration coefficients.
Figure 8. Flowchart of the quality control and treatment procedures. Red and blue labels indicate the timing of the process and the basis of the criteria used, respectively. The first five steps (automated) are performed either natively in the instrument, before the data are uploaded to the Oceans 2.0 database, or before data are downloaded by the user.
Figure A1. Graphical analysis of the tidal model residuals. (a) Quantile-quantile plot of standardized residuals vs. theoretical quantiles, with the red normality line; (b) histogram of the residuals with a red normal distribution curve. Both plates are scaled, for visual purposes, to exclude extreme outliers that would make the figure hard to read.
Figure A2. Graphical analysis of the hydrates–Mid-Canyon East temperature model coefficients. (a) Time series of the intercept; (b) time series of the slope. Each point corresponds to a linear model fitted to a 24 h-wide window.
Figure A3. Graphical analysis of the conductivity–temperature model residuals. (a) Quantile-quantile plot of standardized residuals vs. theoretical quantiles, with the red normality line; (b) histogram of the residuals with a red normal distribution curve.
Figure A4. Polar plots of current meter flow data. (a) Magnitude and direction derived from E–N components; (b) magnitude and direction derived from X–Y components. The labels in the grid indicate magnitudes in m/s.
24 pages, 943 KiB  
Article
IoT-Blockchain Enabled Optimized Provenance System for Food Industry 4.0 Using Advanced Deep Learning
by Prince Waqas Khan, Yung-Cheol Byun and Namje Park
Sensors 2020, 20(10), 2990; https://doi.org/10.3390/s20102990 - 25 May 2020
Cited by 180 | Viewed by 14245
Abstract
Agriculture and livestock play a vital role in social and economic stability. Food safety and transparency in the food supply chain are a significant concern for many people. The Internet of Things (IoT) and blockchain are gaining attention due to their success in versatile applications. They generate a large amount of data that can be optimized and used efficiently by advanced deep learning (ADL) techniques. From the viewpoint of supply chain management, such innovations matter in several processes, including broadened visibility, provenance, digitalization, disintermediation, and smart contracts. This article takes the secure IoT–blockchain data of Industry 4.0 in the food sector as its research object. Using ADL techniques, we propose a hybrid model based on recurrent neural networks (RNN): long short-term memory (LSTM) cascaded with gated recurrent units (GRU) as the prediction model, with genetic algorithm (GA) optimization used jointly to select the optimal training parameters of the hybrid model. We evaluated the performance of the proposed system for different numbers of users. This paper aims to help supply chain practitioners take advantage of state-of-the-art technologies; it will also help the industry make policies according to the predictions of ADL. Full article
(This article belongs to the Special Issue Blockchain Security and Privacy for the Internet of Things)
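The GA-based parameter selection described above can be sketched with a toy genetic algorithm. This is an illustration only: the real system trains an LSTM/GRU cascade, whereas here the validation loss is a made-up surrogate function, and the two "hyperparameters" (hidden units, learning rate) are hypothetical:

```python
import random

random.seed(0)

def loss(units, lr):
    # Hypothetical validation-loss surrogate, minimized near units=64, lr=0.01.
    return (units - 64) ** 2 / 1000.0 + (lr - 0.01) ** 2 * 1e4

def evolve(pop_size=20, generations=30):
    """Select (units, lr) pairs by truncation selection, crossover, mutation."""
    pop = [(random.randint(8, 256), random.uniform(0.001, 0.1))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: loss(*p))
        parents = pop[: pop_size // 2]              # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            units = a[0] if random.random() < 0.5 else b[0]   # crossover
            lr = a[1] if random.random() < 0.5 else b[1]
            if random.random() < 0.3:                          # mutation
                units = max(8, min(256, units + random.randint(-16, 16)))
                lr = max(0.001, min(0.1, lr * random.uniform(0.5, 2.0)))
            children.append((units, lr))
        pop = parents + children
    return min(pop, key=lambda p: loss(*p))

best_units, best_lr = evolve()
```

In the paper's setting, evaluating `loss` would mean training and validating the LSTM–GRU model for each candidate, which is why GA search over a small population is attractive.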
Show Figures

Figure 1. Cryptography scheme for transactions in a blockchain.
Figure 2. Applications of IoT and blockchain in the food industry.
Figure 3. Designed system scenario.
Figure 4. IoT–blockchain-enabled intelligent provenance system.
Figure 5. Architecture diagram of a private blockchain.
Figure 6. Transaction processes in the blockchain.
Figure 7. Identity management system for endpoint security.
Figure 8. Integration scenarios of IoT with blockchain.
Figure 9. Rules implemented in a smart contract.
Figure 10. Advanced deep learning workflow.
Figure 11. Architecture of basic LSTM and GRU units.
Figure 12. Forecasting results of sales data.
Figure 13. Latency for the query “get request transaction”.
Figure 14. Response time for different simultaneous requests from three user groups.
18 pages, 592 KiB  
Article
A RPCA-Based ISAR Imaging Method for Micromotion Targets
by Liangyou Lu, Peng Chen and Lenan Wu
Sensors 2020, 20(10), 2989; https://doi.org/10.3390/s20102989 - 25 May 2020
Cited by 4 | Viewed by 2512
Abstract
Micro-Doppler generated by the micromotion of a target heavily contaminates the inverse synthetic aperture radar (ISAR) image. To acquire a clear ISAR image, removing the micro-Doppler is an indispensable task. By exploiting the sparsity of the ISAR image and the low rank of the micro-Doppler signal in the Range-Doppler (RD) domain, a novel micro-Doppler removal method based on the robust principal component analysis (RPCA) framework is proposed. We formulate the model of sparse ISAR imaging for a micromotion target in the RPCA framework. The imaging problem is then decomposed into iterations between two sub-problems: sparse imaging and micro-Doppler extraction. The alternating direction method of multipliers (ADMM) is used to solve each sub-problem. Furthermore, to improve computational efficiency and numerical robustness in the micro-Doppler extraction, an SVD-free method is presented that further reduces the computational burden. Experimental results with simulated data validate the effectiveness of the proposed method. Full article
(This article belongs to the Section Remote Sensors)
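A building block of ADMM-based sparse/low-rank decompositions of the kind described above is the element-wise soft-thresholding (shrinkage) operator used to update the sparse component. A minimal sketch (matrices as lists of rows; values are illustrative, not ISAR data):

```python
# Soft-thresholding: shrink(x, tau) = sign(x) * max(|x| - tau, 0).
# In RPCA-style splitting this is the proximal step for the l1-norm term.
def soft_threshold(mat, tau):
    def shrink(x):
        if x > tau:
            return x - tau
        if x < -tau:
            return x + tau
        return 0.0
    return [[shrink(x) for x in row] for row in mat]

D = [[0.2, -3.0], [1.5, 0.05]]
S = soft_threshold(D, 0.5)   # small entries vanish; large ones shrink by tau
```

The low-rank update uses the same shrinkage applied to singular values (singular value thresholding), which is exactly the SVD cost the paper's SVD-free variant avoids.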
Show Figures

Figure 1. The inverse synthetic aperture radar (ISAR) imaging geometry for a micromotion target.
Figure 2. Singular value distribution of simulated ISAR data.
Figure 3. Root Mean Square Error (RMSE) values between the rank-r approximated matrix D_r and the original matrix D with respect to different values of r.
Figure 4. (a) The scatterer model of the target; (b) the high-resolution range profile (HRRP) sequence of the full data after pulse compression; (c) imaging result of the range-Doppler algorithm (RDA) with the full data.
Figure 5. ISAR images obtained by the two proposed algorithms under different sampling schemes. Column 1: continuous sampling. Column 2: random sampling. Row 1: Algorithm 1. Row 2: Algorithm 2.
Figure 6. ISAR images obtained by Algorithm 2 using different numbers of pulses. (a–d) correspond to results from 64, 32, 16, and 8 sampled pulses, respectively.
Figure 7. ISAR imaging results obtained under different SNRs. (a–f) correspond to the results under 10, 5, 0, −5, −10, and −15 dB, respectively.
Figure 8. ISAR imaging results obtained by different methods. (a–d) correspond to the imaging results of Algorithm 1, Algorithm 2, and the methods proposed in [14,15], respectively.
20 pages, 5795 KiB  
Viewpoint
Can Building “Artificially Intelligent Cities” Safeguard Humanity from Natural Disasters, Pandemics, and Other Catastrophes? An Urban Scholar’s Perspective
by Tan Yigitcanlar, Luke Butler, Emily Windle, Kevin C. Desouza, Rashid Mehmood and Juan M. Corchado
Sensors 2020, 20(10), 2988; https://doi.org/10.3390/s20102988 - 25 May 2020
Cited by 126 | Viewed by 10624
Abstract
In recent years, artificial intelligence (AI) has started to manifest itself at an unprecedented pace. With highly sophisticated capabilities, AI has the potential to dramatically change our cities and societies. Despite its growing importance, the urban and social implications of AI are still an understudied area. In order to contribute to the ongoing efforts to address this research gap, this paper introduces the notion of an artificially intelligent city as the potential successor of the popular smart city brand—where the smartness of a city has come to be strongly associated with the use of viable technological solutions, including AI. The study explores whether building artificially intelligent cities can safeguard humanity from natural disasters, pandemics, and other catastrophes. All of the statements in this viewpoint are based on a thorough review of the current status of AI literature, research, developments, trends, and applications. This paper generates insights and identifies prospective research questions by charting the evolution of AI and the potential impacts of the systematic adoption of AI in cities and societies. The generated insights inform urban policymakers, managers, and planners on how to ensure the correct uptake of AI in our cities, and the identified critical questions offer scholars directions for prospective research and development. Full article
Show Figures

Graphical abstract
Figure 1. Classification of AI-driven computational techniques, derived from Corea [14].
Figure 2. Hype cycle of AI applications, derived from Gartner [28].
Figure 3. Countries with a national AI strategy, derived from Holon IQ [29].
Figure 4. AI and big data analytics in natural disaster management, derived from Kankanamge et al. [59].
Figure 5. AI capabilities and their use by domains, derived from McKinsey Global Research Institute [93].
Figure 6. AI utilization for achieving sustainable development goals, derived from McKinsey Global Research Institute [93].
Figure A1. Global landscape of national artificial intelligence strategies, derived from Holon IQ [29].
12 pages, 3806 KiB  
Article
Bragg Peak Localization with Piezoelectric Sensors for Proton Therapy Treatment
by Jorge Otero, Ivan Felis, Alicia Herrero, José A. Merchán and Miguel Ardid
Sensors 2020, 20(10), 2987; https://doi.org/10.3390/s20102987 - 25 May 2020
Cited by 4 | Viewed by 3163
Abstract
A full-chain simulation of acoustic hadrontherapy monitoring for brain tumours is presented in this work. For the study, a proton beam of 100 MeV is considered. In the first stage, Geant4 is used to simulate the energy deposition and to study the behaviour of the Bragg peak. The energy deposition in the medium produces local heating that can be considered instantaneous with respect to the hydrodynamic time scale, producing a sound pressure wave. The resulting thermoacoustic signal is then obtained by solving the thermoacoustic equation. The acoustic propagation through the brain and the skull, where a set of piezoelectric sensors is placed, is simulated with finite element methods (FEM). Finally, the signals received by the sensors are processed to reconstruct the position of the thermal source and thus determine the feasibility and accuracy of acoustic beam monitoring in hadrontherapy. Full article
(This article belongs to the Special Issue Piezoelectric Transducers)
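The final reconstruction step, locating a point-like thermal source from the signals received at several sensors, can be illustrated by a time-of-arrival least-squares grid search. This is a hedged 2D sketch: the sensor positions, sound speed, and grid are invented for illustration and are not the paper's geometry or algorithm:

```python
# Locate an acoustic source from arrival times at known sensors by brute-force
# least-squares search over a 2D grid (units: metres, seconds).
def arrival_time(src, sensor, c=1500.0):
    """Straight-path travel time from source to sensor at sound speed c (m/s)."""
    return sum((a - b) ** 2 for a, b in zip(src, sensor)) ** 0.5 / c

def localize(sensors, times, step=0.001, extent=0.1):
    """Return the grid point whose predicted arrival times best fit `times`."""
    best, best_err = None, float("inf")
    n = int(extent / step)
    for i in range(n + 1):
        for j in range(n + 1):
            cand = (i * step, j * step)
            err = sum((arrival_time(cand, s) - t) ** 2
                      for s, t in zip(sensors, times))
            if err < best_err:
                best, best_err = cand, err
    return best

sensors = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]  # hypothetical layout
true_src = (0.043, 0.067)
times = [arrival_time(true_src, s) for s in sensors]
estimate = localize(sensors, times)
```

With noise-free times the estimate lands on the grid point nearest the true source; the paper's setting additionally handles refraction at the skull and sensor response.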
Show Figures

Figure 1. (a) Position of the sensors in the skull where the acceleration produced by the propagation of the pressure pulse will be evaluated; (b) simulated volume and pressure source.
Figure 2. (a) Measured PIC 255 ceramic disc; (b) mesh geometry for the finite element method.
Figure 3. Optimization process diagram with input parameters for the thermoacoustic and piezoelectric parts: time and energy for the first model, diameter and thickness for the second one.
Figure 4. Energy deposition for a proton beam with Gaussian profile (σ = 1 mm) and 10^6 protons. (a) Bone layer interaction with different beam energies; (b) interaction with the water phantom and with a 1 cm bone layer; (c) deposition in a plane for 100 MeV; the upper figure shows the water phantom and the lower one shows the interaction with a layer of bone.
Figure 5. (a) Pressure received at 20 mm from the Bragg peak on the proton beam emission axis; (b) simulated pressure as a function of the distance.
Figure 6. (a) Propagation of a longitudinal wave to the point P(r,t) where the speed of sound changes due to the change of medium; (b) general diagram of the incidence and transmission angle in the skull; (c) transmission angle for longitudinal and shear waves in terms of the incidence angle; (d) power transmission and reception coefficients of the cerebrospinal fluid–skull interface.
Figure 7. (a) Signal received on one of the sensors and pressure on the surface of the skull; (b) propagation in the X, Y plane 53 μs after the energy deposition.
Figure 8. (a) Resonance and anti-resonance frequency for the first vibration mode in terms of the simulated diameter and thickness; the horizontal plane represents the central frequency of the pressure pulse of Figure 5a; (b) ratio k1/k2 for the first two modes of vibration; the shaded zone represents the area with the best diameter-to-thickness ratio, optimizing in this region the sensitivity and frequency response of the piezoelectric ceramics.
Figure 9. Pressure and terminal voltage for the optimized piezoelectric ceramic for a pressure signal in the emitted source as shown in Figure 5a. The propagated pressure on the sensor surface is 0.28 Pa, as shown in Figure 7a.
Figure 10. Receiving Voltage Response (RVR) for the piezoelectric ceramic PIC255 with diameter 25 mm and thickness 2 mm measured in the laboratory (solid line) together with its simulation (dotted line); the green area represents the standard deviation in the measurements. The optimized geometry shows a significant increase in the low-frequency band around 110 kHz.
22 pages, 3814 KiB  
Article
Building Dynamic Communities of Interest for Internet of Things in Smart Cities
by Monira N. Aldelaimi, M. Anwar Hossain and Mohammed F. Alhamid
Sensors 2020, 20(10), 2986; https://doi.org/10.3390/s20102986 - 25 May 2020
Cited by 18 | Viewed by 3935
Abstract
The Internet of things (IoT) is a growing area of research in the context of smart cities. It links a city’s physical objects that are equipped with embedded sensing, communicating, and computing technology. These objects possess the capability to connect and share data with minimal human intervention, which creates the potential to establish social relationships among them. However, it is challenging for an object to discover, communicate, and collaborate dynamically with other objects, such as social entities, and provide services to humans. This is due to the increase in the number of objects and the complexity in defining social-like relationships among them. The current research aims to address this by introducing an object architecture and defining a Dynamic Community of Interest Model (DCIM) for IoT objects. The proposed model will help IoT objects to socialize and build communities amongst themselves based on different criteria. In this approach, objects belonging to a community will collaborate with each other to collect, manipulate, and share interesting content and provide services to enhance the quality of human interactions in smart cities. Full article
(This article belongs to the Special Issue IoT-Enabled Smart Cities)
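The idea of objects grouping themselves by shared interests can be illustrated with a toy similarity-based clustering. This is a sketch only: object names and interest sets are made up, and the paper's DCIM layers dynamics, location, and service criteria on top of this basic notion:

```python
# Greedy interest-based community formation: an object joins the first
# community whose accumulated interest set it overlaps beyond a Jaccard
# similarity threshold; otherwise it founds a new community.
def jaccard(a, b):
    return len(a & b) / len(a | b) if (a | b) else 0.0

def build_communities(objects, threshold=0.5):
    communities = []   # each entry: [interest-union (set), members (list)]
    for name, interests in objects.items():
        for comm in communities:
            if jaccard(interests, comm[0]) >= threshold:
                comm[0].update(interests)
                comm[1].append(name)
                break
        else:
            communities.append([set(interests), [name]])
    return [members for _, members in communities]

objects = {
    "cam-1":   {"sport", "news"},
    "kiosk-2": {"sport", "news", "weather"},
    "sign-3":  {"travel", "flights"},
}
groups = build_communities(objects)
```

Re-running the grouping as interest lists change over time gives the dynamic behaviour (communities growing, shrinking, or disappearing) that the paper's simulations examine.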
Show Figures

Figure 1. SIoT object relationship comparison.
Figure 2. Interest list structure.
Figure 3. Examples of the common interest relationship (CIR).
Figure 4. SIoT architecture.
Figure 5. Proposed architecture.
Figure 6. Example of an Agent-Based Modeling system structure.
Figure 7. Initial view for the airport scenario.
Figure 8. The scalability of the model handling 1000 objects.
Figure 9. Number of objects in different communities over a period of time.
Figure 10. Changes in the sport community as an example.
Figure 11. Disappearance of the travel community.
Figure 12. Inspection of object 173.
Figure 13. Changes in the sport community after removing the communication range restriction.
21 pages, 5830 KiB  
Article
A Canopy Information Measurement Method for Modern Standardized Apple Orchards Based on UAV Multimodal Information
by Guoxiang Sun, Xiaochan Wang, Haihui Yang and Xianjie Zhang
Sensors 2020, 20(10), 2985; https://doi.org/10.3390/s20102985 - 25 May 2020
Cited by 17 | Viewed by 4599
Abstract
To measure canopy information in modern standardized apple orchards, a method based on unmanned aerial vehicle (UAV) multimodal information is proposed. Using a modern standardized apple orchard as the study object, a visual imaging system on a quadrotor UAV was used to collect canopy images of the apple orchard, and three-dimensional (3D) point-cloud models and vegetation index images of the orchard were generated with Pix4Dmapper software. A row and column detection method based on grayscale projection in orchard index images (RCGP) is proposed. Morphological information measurements of fruit tree canopies based on 3D point-cloud models are established, and a yield prediction model for fruit trees based on the UAV multimodal information is derived. The results are as follows: (1) When the ground sampling distance (GSD) was 2.13–6.69 cm/px, the accuracy of row detection in the orchard using the RCGP method was 100.00%. (2) With RCGP, the average accuracy of column detection based on grayscale images of the normalized green (NG) index was 98.71–100.00%. The hand-measured values of H, SXOY, and V of the fruit tree canopy were compared with those obtained with the UAV. The coefficients of determination R2 were highest (0.94, 0.94, and 0.91, respectively) and the relative average deviations (RADavg) were smallest (1.72%, 4.33%, and 7.90%, respectively) when the GSD was 2.13 cm/px. Yield was modeled with a back-propagation artificial neural network using the color and textural characteristic values of fruit tree vegetation indices and the morphological characteristic values of the point-cloud models. The R2 value between the predicted and measured yields was 0.83–0.88, and the RAD value was 8.05–9.76%. These results show that the proposed UAV-based canopy information measurement method can be applied to the remote evaluation of canopy 3D morphological and yield information in modern standardized apple orchards, thereby improving the level of orchard informatization. This method is thus valuable for the production management of modern standardized orchards. Full article
(This article belongs to the Special Issue Sensors in Agriculture 2020)
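The two agreement metrics reported above, the coefficient of determination (R2) and the relative average deviation (RAD), are straightforward to compute. A minimal sketch on made-up numbers (not the study's measurements):

```python
# R2 = 1 - SS_res / SS_tot between measured and predicted values;
# RAD = mean absolute deviation relative to the measured values, in percent.
def r_squared(measured, predicted):
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

def rad_percent(measured, predicted):
    return 100.0 * sum(abs(m - p) / m
                       for m, p in zip(measured, predicted)) / len(measured)

measured  = [10.0, 12.0, 15.0, 9.0]   # e.g. hand-measured canopy values
predicted = [10.5, 11.5, 14.0, 9.5]   # e.g. UAV-derived values
r2 = r_squared(measured, predicted)
rad = rad_percent(measured, predicted)
```

High R2 with low RAD, as the paper reports at the 2.13 cm/px GSD, indicates both strong correlation and small relative error.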
Show Figures

Figure 1. Three-dimensional point-cloud models and vegetation index images of the apple orchard.
Figure 2. Row and column detection with multimodal data of the orchard canopy.
Figure 3. Row and column detection results with orchard canopy data.
Figure 4. Performance of column detection with orchard canopy data.
Figure 5. Detection of morphological characteristic values of the orchard canopy.
Figure 6. Performance parameters of the fruit yield prediction model using various vegetation indices.
Figure 7. Performance parameters of the fruit yield prediction model with various characteristic values.
16 pages, 9447 KiB  
Article
Intact Detection of Highly Occluded Immature Tomatoes on Plants Using Deep Learning Techniques
by Yue Mu, Tai-Shen Chen, Seishi Ninomiya and Wei Guo
Sensors 2020, 20(10), 2984; https://doi.org/10.3390/s20102984 - 25 May 2020
Cited by 81 | Viewed by 5502
Abstract
Automatic detection of intact tomatoes on plants is highly desirable for low-cost and optimal management in tomato farming. Mature tomato detection has been widely studied, while immature tomato detection, especially when the fruit is occluded by leaves, is difficult to perform using traditional image analysis, yet it is more important for long-term yield prediction. Therefore, tomato detection that generalizes well in real tomato cultivation scenes and is robust to issues such as fruit occlusion and variable lighting conditions is highly desired. In this study, we build a tomato detection model to automatically detect intact green tomatoes, regardless of occlusion or fruit growth stage, using deep learning approaches. The tomato detection model used a faster region-based convolutional neural network (R-CNN) with ResNet-101, transfer-learned from the Common Objects in Context (COCO) dataset. Detection on the test dataset achieved a high average precision of 87.83% (intersection over union ≥ 0.5) and high tomato-counting accuracy (R2 = 0.87). In addition, all the detected boxes were merged into one image to compile a tomato location map and estimate tomato size along one row in the greenhouse. Through tomato detection, counting, location, and size estimation, this method shows great potential for ripeness and yield prediction. Full article
(This article belongs to the Section Intelligent Sensors)
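The intersection-over-union (IoU) criterion used above to score detections (a detection counts as correct when IoU ≥ 0.5) is a one-function computation. A minimal sketch with illustrative pixel coordinates:

```python
# IoU between two axis-aligned boxes given as (x_min, y_min, x_max, y_max):
# intersection area divided by union area.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

gt  = (10, 10, 50, 50)   # hypothetical ground-truth box
det = (20, 20, 60, 60)   # hypothetical detection, offset by 10 px
score = iou(gt, det)     # below the 0.5 threshold: a false positive
```

As the paper's Figure 11 illustrates, the same pixel offset yields a lower IoU for smaller objects, so small tomatoes are penalized more by a fixed threshold.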
Show Figures

Figure 1

Figure 1
<p>The camera layout for taking tomato photos in the greenhouse of Seki farm.</p>
Full article ">Figure 2
<p>The location of the sub-image in one stitched image of part of the row. Red lines are the outlines of subimages.</p>
Full article ">Figure 3
<p>Accuracy of the tomato detection model.</p>
Full article ">Figure 4
<p>Distribution of true positives and false positives with scores by applying the tomato detection model on the test dataset. (<b>A</b>) Percentage change of true positives and false positives with scores; (<b>B</b>) relative frequency change of true positives and false positives.</p>
Full article ">Figure 5
<p>Examples of detected tomatoes with different scores in the test dataset. Subfigure (<b>A</b>) shows boxes with scores ≤ 0.3, (<b>B</b>) shows boxes with 0.3 &lt; scores ≤ 0.5, (<b>C</b>) shows boxes with 0.5 &lt; scores &lt; 0.7, and (<b>D</b>) shows boxes with scores &gt; 0.7.</p>
Full article ">Figure 6
<p>Example of all detected tomatoes (<b>A</b>,<b>C</b>) and filtered detected tomatoes with a score ≥ 0.5 in the test dataset (<b>B</b>,<b>D</b>).</p>
Full article ">Figure 7
<p>Correlation between labelled and detected tomatoes per subimage in the test dataset.</p>
Full article ">Figure 8
<p>Detected bounding boxes of the tomatoes in the stitched image. (<b>A</b>) is the location map for tomatoes in one row; (<b>B</b>,<b>C</b>) are two enlarged zones in (<b>A</b>) for clear illustration.</p>
Full article ">Figure 9
<p>Relative frequency change of the tomatoes size (<b>A</b>) and aspect ratio (<b>B</b>) in images. Tomato size is represented by box width and box height. Tomato aspect ratio is calculated by width/height.</p>
Full article ">Figure 10
<p>Change in average precision (AP) at an intersection over union (IoU) threshold of 0.5 for the training data and validation data using Resnet-101, as a function of the epoch.</p>
Full article ">Figure 11
<p>Illustration of the difference in IoU value for the same offset in pixels but for objects of different sizes. The IoU of the left boxes (<b>A</b>) is 0.53, and the IoU of the right boxes (<b>B</b>) is 0.47. Although both have the same offset (one pixel) in the same direction, the left detection is a true positive while the right one is a false positive with an IoU threshold of 0.5.</p>
Full article ">Figure 12
<p>Area distribution of detected true positive boxes and false positive boxes with the scores, and their moving median value with the scores.</p>
Full article ">Figure 13
<p>Application of tomato detection in ripeness estimation. The tags labelled ripe or immature tomatoes.</p>
Full article ">
14 pages, 2597 KiB  
Article
Comparison of Trotting Stance Detection Methods from an Inertial Measurement Unit Mounted on the Horse’s Limb
by Marie Sapone, Pauline Martin, Khalil Ben Mansour, Henry Château and Frédéric Marin
Sensors 2020, 20(10), 2983; https://doi.org/10.3390/s20102983 - 25 May 2020
Cited by 14 | Viewed by 3581
Abstract
The development of on-board sensors, such as inertial measurement units (IMU), has made it possible to develop new methods for analyzing horse locomotion to detect lameness. The detection of spatiotemporal events is one of the keystones in the analysis of horse locomotion. This [...] Read more.
The development of on-board sensors, such as inertial measurement units (IMUs), has made it possible to develop new methods for analyzing horse locomotion to detect lameness. The detection of spatiotemporal events is one of the keystones of horse locomotion analysis. This study assesses the performance of four methods for detecting Foot on and Foot off events. The methods were developed from an IMU positioned on the cannon bone of eight horses trotting on a treadmill and were compared to a gold standard method based on motion capture. They rely on accelerometer and gyroscope data and use either thresholding or wavelets to detect stride events. The two methods developed from gyroscopic data were more precise than those developed from accelerometric data, with a bias of less than 0.6% of stride duration for Foot on and 0.1% of stride duration for Foot off. The gyroscope is less affected by the stride patterns specific to each horse. In conclusion, the gyroscope-based methods show potential for further development to investigate the effects of different gait paces and ground types in the analysis of horse locomotion. Full article
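The agreement between each IMU method and motion capture is reported as the standard Bland-Altman quantities: the bias (mean difference) and the 95% limits of agreement. Assuming approximately normal timing differences, they reduce to a short helper (hypothetical code, not the authors'):

```python
import numpy as np

def bland_altman(method, reference):
    """Bias (mean difference) and 95% limits of agreement (bias ± 1.96 SD)
    between two paired series of event timings."""
    diff = np.asarray(method, float) - np.asarray(reference, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```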
(This article belongs to the Special Issue Human and Animal Motion Tracking Using Inertial Sensors)
Show Figures

Figure 1
<p>Placement of the two IMUs (represented in red) and the kinematic markers according to points of interest: six at anatomical points: carpal joint, metacarpo-phalangeal joint, and hoof (toe, heel, front coronary band, lateral coronary band); one at the center of the wither’s IMU; and three on the cannon bone’s IMU (center, upper lateral part, lower lateral part). One additional free marker was used for the synchronization keystroke on the wither’s marker.</p>
Full article ">Figure 2
<p>Positioning of Vicon cameras (in black) around the treadmill (in blue) to record the locomotion of the limbs of the right side of the horses. The orange crosses represent the different experimenters and their control tables (computers with software for MoCap and IMUs and treadmill control panel) are shown in gray.</p>
Full article ">Figure 3
<p>Representation of the Y-axis gyroscopic filtered signal used for the pre-segmentation of processing windows (<b>o</b>). In this figure, the <span class="html-italic">i</span>-th ImuWindow is shown in dotted lines. It is preceded by the (<span class="html-italic">i</span>-1)-th ImuWindow, delimited by the first two maximum points represented in red.</p>
Full article ">Figure 4
<p>Representation of (<b>a</b>) the hoof angle calculated from the hoof markers allowing the detection of <span class="html-italic">Foot on</span> (o) and <span class="html-italic">Foot off</span> (<b>∆</b>) reference events (<span class="html-italic">MoCapFootOn</span> and <span class="html-italic">MoCapFootOff</span>), (<b>b</b>) the Y-axis gyroscopic signal used for the detection of <span class="html-italic">Foot on</span> (o) and <span class="html-italic">Foot off</span> (<b>∆</b>) events in method A (<span class="html-italic">ImuFootOn_A</span> and <span class="html-italic">ImuFootOff_A</span>) and method C (<span class="html-italic">ImuFootOn_C</span> and <span class="html-italic">ImuFootOff_C</span>), (<b>c</b>) the Z-axis accelerometric signal used for detection of <span class="html-italic">Foot on</span> (o) events in method B (<span class="html-italic">ImuFootOn_B</span>) and method D (<span class="html-italic">ImuFootOn_D</span>), (<b>d</b>) the X-axis accelerometric signal used for detection of <span class="html-italic">Foot off</span> (<b>∆</b>) events in method B (<span class="html-italic">ImuFootOff_B</span>) and method D (<span class="html-italic">ImuFootOff_D</span>).</p>
Full article ">Figure 5
<p>Bland-Altman comparison of the <span class="html-italic">Foot on</span> detection of the four methods developed with IMU data and MoCap <span class="html-italic">Foot on</span> detection. Accuracy (bias between each method and MoCap) and limits of agreement (95% limits of agreement) of method A were represented on the upper left corner (<b>A</b>), method B on the upper right corner (<b>B</b>), method C on the lower left corner (<b>C</b>), and method D on the lower right corner (<b>D</b>).</p>
Full article ">Figure 6
<p>Bland-Altman comparison of the <span class="html-italic">Foot off</span> detection of the four methods developed with IMU data and MoCap <span class="html-italic">Foot off</span> detection. Accuracy (bias between each method and MoCap) and limits of agreement (95% limits of agreement) of method A were represented on the upper left corner (<b>A</b>), method B on the upper right corner (<b>B</b>), method C on the lower left corner (<b>C</b>), and method D on the lower right corner (<b>D</b>).</p>
Full article ">Figure 7
<p>Bland-Altman comparison of the <span class="html-italic">Stride Duration</span>, calculated from the <span class="html-italic">Foot on</span> obtained from the four methods developed with IMU data and MoCap. Accuracy (bias between each method and MoCap) and limits of agreement (95% limits of agreement) of method A were represented on the upper left corner (<b>A</b>), method B on the upper right corner (<b>B</b>), method C on the lower left corner (<b>C</b>), and method D on the lower right corner (<b>D</b>).</p>
Full article ">Figure 8
<p>Bland-Altman comparison of the <span class="html-italic">Stance Duration</span>, calculated from the <span class="html-italic">Foot on</span> and <span class="html-italic">Foot off</span> obtained from the four methods developed with IMU data and MoCap. Accuracy (bias between each method and MoCap) and limits of agreement (95% limits of agreement) of method A were represented on the upper left corner (<b>A</b>), method B on the upper right corner (<b>B</b>), method C on the lower left corner (<b>C</b>), and method D on the lower right corner (<b>D</b>).</p>
Full article ">
18 pages, 8270 KiB  
Article
BIM in People2People and Things2People Interactive Process
by Bruno Mataloto, João C. Ferreira, Ricardo Resende, Rita Moura and Sílvia Luís
Sensors 2020, 20(10), 2982; https://doi.org/10.3390/s20102982 - 24 May 2020
Cited by 11 | Viewed by 3819
Abstract
In this research work, we present an IoT solution to environment variables using a LoRa transmission technology to give real-time information to users in a Things2People process and achieve savings by promoting behavior changes in a People2People process. These data are stored and [...] Read more.
In this research work, we present an IoT solution that senses environmental variables and uses LoRa transmission technology to give real-time information to users in a Things2People process, achieving savings by promoting behavior change in a People2People process. The data are stored and later processed to identify patterns and are integrated with visualization tools, which allows an environmental perception to develop while using the system. In this project, we implemented a different approach based on a 3D visualization tool that presents the collected sensor data, warnings, and other users’ perceptions in an interactive 3D model of the building. This data representation introduces a new People2People interaction approach to achieve savings in shared spaces such as public buildings by combining sensor data with the users’ individual and collective perceptions. The approach was validated at the ISCTE-IUL University Campus, where the 3D IoT data representation was presented on mobile devices and, from this, influenced user behavior toward meeting campus sustainability goals. Full article
Show Figures

Figure 1
<p>Behavior change diagnostic model for ISCTE’s community assessment.</p>
Full article ">Figure 2
<p>Scheme of the BIM, campus activities, and sensor data in Unity.</p>
Full article ">Figure 3
<p>Box drawn with AutoCAD and printed in polylactic acid (PLA) at the ISCTE-IUL FabLab.</p>
Full article ">Figure 4
<p>Prototypes of the LoRa nodes for the data centre, in the process of completion.</p>
Full article ">Figure 5
<p>Placement of external temperature and humidity sensors.</p>
Full article ">Figure 6
<p>ISCTE-IUL University Academic Services: indoor sensor placement marked with blue circles and the outdoor sensor marked with a red circle.</p>
Full article ">Figure 7
<p>Flask web server developed in Python.</p>
Full article ">Figure 8
<p>Query performed in the database, which indicates the last reading of each sensor.</p>
Full article ">Figure 9
<p>Dashboard main console displaying details about each sensor.</p>
Full article ">Figure 10
<p>3D building imported from BIM with indoor sensors as volumes and outdoor sensors as spheres.</p>
Full article ">Figure 11
<p>ISCTE-IUL Building I, 3D BIM developed with Autodesk’s Revit.</p>
Full article ">Figure 12
<p>User interface with light mode selected for 01-08-2019 at 06:00 h.</p>
Full article ">Figure 13
<p>User interface with temperature mode selected for 01-08-2019 at 09:00 h.</p>
Full article ">Figure 14
<p>User interface with temperature mode selected for 01-08-2019 at 09:00 h.</p>
Full article ">Figure 15
<p>Percentage of warnings detected per hour.</p>
Full article ">Figure 16
<p>Percentage of warnings detected per zone.</p>
Full article ">Figure 17
<p>Relation between indoor thermal comfort and outdoor temperature, based on the indoor temperature when the warning was created.</p>
Full article ">Figure 18
<p>Number of warnings evolution during the last month.</p>
Full article ">Figure 19
<p>Users’ collective thermal perception, color-coded into avatars.</p>
Full article ">
20 pages, 4944 KiB  
Article
Construction of Hybrid Dual Radio Frequency RSSI (HDRF-RSSI) Fingerprint Database and Indoor Location Method
by Haotai Sun, Xiaodong Zhu, Yuanning Liu and Wentao Liu
Sensors 2020, 20(10), 2981; https://doi.org/10.3390/s20102981 - 24 May 2020
Cited by 14 | Viewed by 3735
Abstract
Radio frequency communication technology has not only greatly improved public network service, but also developed a new technological route for indoor navigation service. However, there is a gap between the precision and accuracy of indoor navigation services provided by indoor navigation service and [...] Read more.
Radio frequency communication technology has not only greatly improved public network services but has also opened a new technological route for indoor navigation. However, there is a gap between the precision and accuracy that indoor navigation services currently provide and the expectations of the public. This study proposes a method for constructing a hybrid dual frequency received signal strength indicator (HDRF-RSSI) fingerprint library, in contrast to the traditional approach of building an indoor RSSI fingerprint library from the 2.4G radio frequency (RF) alone under the same Wi-Fi infrastructure. The proposed method combines the 2.4G and 5G RFs of the same access point (AP) devices to construct the HDRF-RSSI fingerprint library, thereby doubling the fingerprint dimension of each reference point (RP). Experimental results show that the feature discriminability of HDRF-RSSI fingerprinting is 18.1% higher than that of 2.4G RF RSSI fingerprinting. Moreover, a hybrid radio frequency fingerprinting model, training loss function, and location evaluation algorithm based on machine learning were designed, avoiding the limitation that the transmission point (TP) and AP must be visible to the positioning method. To verify the proposed HDRF-RSSI fingerprint library construction method and location evaluation algorithm, dual RF RSSI fingerprint data were collected to construct a fingerprint library in the experimental scene, which was used to train the model. Several comparative experiments were designed to compare positioning performance indicators such as precision and accuracy.
Experimental results demonstrate that, compared with the existing machine learning method based on the Wi-Fi 2.4G RF RSSI fingerprint, the machine learning method combining the Wi-Fi 5G RF RSSI vector with the original 2.4G RF RSSI vector can effectively improve the precision and accuracy of smartphone indoor positioning. Full article
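The paper trains a CNN on the fingerprint library; for illustration only, the same library can be queried with a plain k-nearest-neighbor estimator, where each stored fingerprint is the concatenation of a reference point's 2.4G and 5G RSSI vectors (all names and values below are hypothetical):

```python
import numpy as np

def knn_locate(fingerprints, positions, observed, k=3):
    """Average the positions of the k reference points whose stored
    RSSI vectors are nearest (Euclidean) to the observed vector."""
    fp = np.asarray(fingerprints, float)
    dist = np.linalg.norm(fp - np.asarray(observed, float), axis=1)
    nearest = np.argsort(dist)[:k]
    return np.asarray(positions, float)[nearest].mean(axis=0)
```

Doubling the vector length in this way, with 2.4G readings followed by 5G readings from the same APs, is what raises the feature discriminability of the hybrid library over 2.4G-only fingerprints.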
(This article belongs to the Section Communications)
Show Figures

Figure 1
<p>Deterministic positioning method and probabilistic positioning method.</p>
Full article ">Figure 2
<p>(<b>a</b>) Noise interference of the Wi-Fi (2.4G) channel; (<b>b</b>) noise interference of the Wi-Fi (5G) channel.</p>
Full article ">Figure 3
<p>Distinguishability of 2.4G 2D fingerprint data clustering intervals of reference points at adjacent locations.</p>
Full article ">Figure 4
<p>Distinguishability of 2.4G and 5G four-dimensional fingerprint data clustering intervals of reference points at adjacent positions.</p>
Full article ">Figure 5
<p>The schematic diagram of the hybrid dual frequency received signal strength indicator (HDRF-RSSI) fingerprint location method.</p>
Full article ">Figure 6
<p>CNN structure diagram of model.</p>
Full article ">Figure 7
<p>Flow chart of the location evaluation algorithm.</p>
Full article ">Figure 8
<p>The schematic diagram of the position evaluation algorithm.</p>
Full article ">Figure 9
<p>Experimental floor layout.</p>
Full article ">Figure 10
<p>2.4G and 5G Wi-Fi radiation patterns at the data collection site.</p>
Full article ">Figure 11
<p>The cumulative distribution function of the location algorithms: Cifar-10, K-nearest neighbor (KNN), support vector machine (SVM), and random forest.</p>
Full article ">
15 pages, 5371 KiB  
Article
Directional Response of Randomly Dispersed Carbon Nanotube Strain Sensors
by Alfredo Güemes, Angel Renato Pozo Morales, Antonio Fernandez-Lopez, Xoan Xose F. Sanchez-Romate, Maria Sanchez and Alejandro Ureña
Sensors 2020, 20(10), 2980; https://doi.org/10.3390/s20102980 - 24 May 2020
Cited by 4 | Viewed by 3283
Abstract
Tests on a double lap bonded joint, with transverse strips of randomly oriented carbon nanotubes (CNT) sprayed onto an epoxy adhesive film, showed a positive increment in electrical resistance under tensile load, even though the transverse strains were negative. Other experiments included in [...] Read more.
Tests on a double lap bonded joint, with transverse strips of randomly oriented carbon nanotubes (CNTs) sprayed onto an epoxy adhesive film, showed a positive increment in electrical resistance under tensile load, even though the transverse strains were negative. Other experiments in this work involved placing longitudinal and transverse CNT sensors on a tensile-loaded aluminum plate and, as reported by other authors, the results confirm that the resistance change does not depend only on the strains aligned with the electrode line; the other strain components also influence the response. This behavior is quite different from that of conventional strain gages, which have near-zero sensitivity to strains not aligned with the sensor direction. The dependence of the electrical response on all the strain components makes it quite difficult, possibly unfeasible, to experimentally determine the individual strain components with this kind of sensor; however, the manufacturing of aligned CNT sensors could address this issue. Full article
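The observed cross-sensitivity can be captured by a simple linear model in which the fractional resistance change mixes both strain components, dR/R = g_l·eps_l + g_t·eps_t. With a single reading the two components are then indistinguishable, but readings from several differently oriented sensors can in principle be inverted jointly. A sketch of that inversion (the gain values and function name are illustrative assumptions, not measured data from the paper):

```python
import numpy as np

def strains_from_sensors(gains, readings):
    """Least-squares solve gains @ [eps_l, eps_t] ≈ readings, where each row
    of `gains` holds one sensor's longitudinal and transverse sensitivities."""
    sol, *_ = np.linalg.lstsq(np.asarray(gains, float),
                              np.asarray(readings, float), rcond=None)
    return sol
```

With identical randomly dispersed sensors the rows of `gains` are nearly proportional and the system is ill-conditioned, which is the paper's point; aligned CNT sensors would make the rows distinct.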
(This article belongs to the Special Issue Carbon Nanotube and Graphene-based Sensors)
Show Figures

Figure 1
<p>Sketch of electrical conduction in randomly dispersed carbon nanotubes (CNTs) in doped resins. (<b>a</b>) Randomly oriented CNT. (<b>b</b>) Contact resistance among nanotubes (from reference [<a href="#B6-sensors-20-02980" class="html-bibr">6</a>], with CC permission).</p>
Full article ">Figure 2
<p>Uniaxial tensile test of a steel plate with longitudinal and transverse sensors. (from reference [<a href="#B16-sensors-20-02980" class="html-bibr">16</a>], with permission).</p>
Full article ">Figure 3
<p>Experimental results obtained from the former specimen. The abscissa axis shows the longitudinal or transverse strain measured experimentally with a biaxial strain gauge (from reference [<a href="#B16-sensors-20-02980" class="html-bibr">16</a>], with permission from CC4).</p>
Full article ">Figure 4
<p>(<b>A</b>) Double lap bonded joint specimen. Photograph of one side of the specimen, constructed with a longitudinally aligned optical fiber. (<b>B</b>) Double lap bonded joint specimen. Photograph of the other side of the specimen, where the transverse black strips are CNT-doped regions on the green adhesive. Copper wires were used as electrodes, fixed with silver conductive paint and sealed with an adhesive layer to avoid environmental issues, as sketched in the figure.</p>
Full article ">Figure 5
<p>Uniaxial tensile specimens made with aluminum with a CNT bonded sensor, and electrodes at the corners (size of the sensor 20 × 20 mm). The black strips are the doped areas, 5 mm in width.</p>
Full article ">Figure 6
<p>Experimental axial strain (<b>a</b>) and shear strain (<b>b</b>) at the double lap joint, obtained by distributed optical fiber sensing.</p>
Full article ">Figure 7
<p>The electrical response as a function of the mechanical load for channels 2 (<b>left</b>) and 10 (<b>right</b>). Channel 2 was outside the bonded area, while channel 10 was inside the bonded region (see <a href="#sensors-20-02980-f004" class="html-fig">Figure 4</a>).</p>
Full article ">Figure 8
<p>Electrical response as a function of the mechanical load for Channel 6.</p>
Full article ">Figure 9
<p>Longitudinal and transversal responses of carbon nanotube (CNT) sensors bonded to an aluminum plate and submitted to uniaxial loads. (<b>a</b>) Resistance changes for the AD and DC paths. (<b>b</b>) Resistance changes for the AB and BC paths.</p>
Full article ">
18 pages, 2785 KiB  
Article
Ferroelectret-based Hydrophone Employed in Oil Identification—A Machine Learning Approach
by Daniel R. de Luna, T.T.C. Palitó, Y.A.O. Assagra, R.A.P. Altafim, J.P. Carmo, R.A.C. Altafim, A.A.O. Carneiro and Vicente A. de Sousa, Jr.
Sensors 2020, 20(10), 2979; https://doi.org/10.3390/s20102979 - 24 May 2020
Cited by 4 | Viewed by 3150
Abstract
This work focuses on acoustic analysis as a way of discriminating mineral oil, providing a robust technique, immune to electromagnetic noise, and in some cases, depending on the applied sensor, a low-cost technique. Thus, we propose a new method for the diagnosis of [...] Read more.
This work focuses on acoustic analysis as a way of discriminating mineral oils, providing a technique that is robust, immune to electromagnetic noise, and, depending on the applied sensor, low cost. We propose a new method for diagnosing the quality of the mineral oil used in electrical transformers, integrating a ferroelectret-based hydrophone and an acoustic transducer. Our classification solution is based on a supervised machine learning technique applied to the signals generated by an in-house built hydrophone. A total of three datasets were collected during the acoustic experiments on four types of oil. The first, second, and third datasets contain 180, 240, and 420 entries, respectively. Eighty-four features were extracted from each dataset and used in two classification approaches. The first classification approach distinguishes the oils among the four possible classes with a classification error of less than 2%, while the second approach classifies the oils without errors (i.e., with a score of 100%). Full article
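The per-oil scores reported above are read off confusion matrices (Figures 12 and 14). A minimal sketch of that bookkeeping (function names are illustrative, not from the paper):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the true oil class, columns the predicted class."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m

def recognition_rate(m):
    """Per-class recognition rate: correct predictions over true counts."""
    return np.diag(m) / m.sum(axis=1)
```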
(This article belongs to the Special Issue Acoustic Wave Sensors for Gaseous and Liquid Environments)
Show Figures

Figure 1
<p>Experimental configuration scheme.</p>
Full article ">Figure 2
<p>Ultrasonic emitter used.</p>
Full article ">Figure 3
<p>Illustration of the acoustic chamber prototype.</p>
Full article ">Figure 4
<p>Acoustic transducer: design of the metallic enclosure, seen in perspective: (<b>a</b>) lateral; (<b>b</b>) frontal.</p>
Full article ">Figure 5
<p>Electronic acoustic transducer circuit.</p>
Full article ">Figure 6
<p>Acoustic transducer, seen in perspective: (<b>a</b>) frontal side; (<b>b</b>) and rear side.</p>
Full article ">Figure 7
<p>Experimental setup.</p>
Full article ">Figure 8
<p>Example of SWEEP signals for different types of oil: (<b>a</b>) new oil; (<b>b</b>) processed oil; (<b>c</b>) contaminated oil; and (<b>d</b>) out-of-service oil (Dataset 2).</p>
Full article ">Figure 9
<p>Correlation Heatmap: (<b>a</b>) Dataset 1, (<b>b</b>) Dataset 2, (<b>c</b>) Dataset 3.</p>
Full article ">Figure 10
<p>Classification Approach 1.</p>
Full article ">Figure 11
<p>Classification Approach 2.</p>
Full article ">Figure 12
<p>Confusion Matrices of Classification Approach 1: (<b>a</b>) Dataset 1; (<b>b</b>) Dataset 2; and (<b>c</b>) Dataset 3.</p>
Full article ">Figure 13
<p>Recognition Rate per Oil and Dataset.</p>
Full article ">Figure 14
<p>Confusion Matrices of Classification Approach 2: (<b>a</b>) Dataset 1; (<b>b</b>) Dataset 2; and (<b>c</b>) Dataset 3.</p>
Full article ">Figure 15
<p>Recognition Rate per Oil and Dataset.</p>
Full article ">
26 pages, 5272 KiB  
Article
Cognitive States Matter: Design Guidelines for Driving Situation Awareness in Smart Vehicles
by Daehee Park, Wan Chul Yoon and Uichin Lee
Sensors 2020, 20(10), 2978; https://doi.org/10.3390/s20102978 - 24 May 2020
Cited by 12 | Viewed by 4766
Abstract
Situation awareness (SA) is crucial for safe driving. It is all about perception, comprehension of current situations and projection of the future status. It is demanding for drivers to constantly maintain SA by checking for potential hazards while performing the primary driving tasks. [...] Read more.
Situation awareness (SA) is crucial for safe driving. It comprises the perception and comprehension of the current situation and the projection of its future status. It is demanding for drivers to constantly maintain SA by checking for potential hazards while performing the primary driving tasks. As future vehicles will be equipped with more sensors, an SA aiding system is likely to present complex situational information to drivers. Although drivers have difficulty processing a variety of complex situational information due to limited cognitive capabilities, and perceive the information differently depending upon their cognitive states, the well-known SA design principles by Endsley only provide general guidelines and lack detailed guidance for dealing with limited human cognitive capabilities. Cognitive capability is a mental capability that includes planning, comprehension of complex ideas, and learning from experience, whereas a cognitive state is a condition of being (e.g., the state of being aware of the situation). In this paper, we investigate the key cognitive attributes related to SA in driving contexts (i.e., attention focus, mental model, workload, and memory), which Endsley proposed as the main factors that influence SA. From the levels of these attributes, we identified eight cognitive states that mainly influence a human driver in achieving SA: the focused attention state, inattentional blindness state, unfamiliar situation state, familiar situation state, insufficient mental resource state, sufficient mental resource state, high time pressure state, and low time pressure state. We then propose cognitive state aware SA design guidelines that can help designers effectively convey situation information to drivers.
As a case study, we demonstrated the usefulness of our cognitive state aware SA design guidelines by conducting controlled experiments in which an existing SA interface was compared with a new SA interface designed following the key guidelines. We used the Situation Awareness Global Assessment Technique (SAGAT) and the Decision-Making Questionnaire (DMQ) to measure SA and decision-making style scores, respectively. Our results show that the new guidelines allowed participants to achieve significantly higher SA and exhibit better decision-making performance. Full article
(This article belongs to the Section Intelligent Sensors)
Show Figures

Figure 1
<p>Cognitive states play an important role in perceiving situation information and in achieving situation awareness (Source: Own creation).</p>
Full article ">Figure 2
<p>A simple version of Endsley’s model: situation awareness (SA), decision-making, and action (Source: Own creation).</p>
Full article ">Figure 3
<p>Descriptions of the first driving scenario (Source: Own creation).</p>
Full article ">Figure 4
<p>Descriptions of the second driving scenario (Source: Own creation).</p>
Full article ">Figure 5
<p>SA questions and what to know (Total of 6 questions) (Source: Own creation).</p>
Full article ">Figure 6
<p>Decision-Making Questionnaire (DMQ) questions (Total of 12 questions) (Source: Own creation).</p>
Full article ">Figure 7
<p>Descriptions of the main screen of the current telematics system (Source: Own creation): (<b>1</b>) The current telematics system presents the navigation route guidance; (<b>2</b>) It presents only navigation route guidance and does not support [Global SA]; and (<b>3</b>) It presents only the expected arrival time, not how long it takes to reach the destination (it does not support summarized information).</p>
Full article ">Figure 8
<p>Descriptions of the pop-up screen of the current telematics system (Source: Own creation): (<b>1</b>) Through a pop-up, the telematics system presents traffic information; however, it does not present information about the accident that occurred ahead, but only part of the situation information; (<b>2</b>) It supports only a limited interaction using a button; and (<b>3</b>) It does not support direct perception of the remaining time; the driver needs to calculate it from the change in expected arrival time (no direct perception supported).</p>
Full article ">Figure 9
<p>Descriptions of how the participant responds to the emergency braking situation (Source: Own creation): (<b>1</b>) the participants can use the frontal windshield since the telematics system does not support the function that informs frontal situation information and (<b>2</b>) the participant can recognize the rear situation through the rear camera.</p>
Full article ">Figure 10
<p>Descriptions of the main screen of the SA aiding system (Source: Own creation): (<b>1</b>) The main screen consists of four sections to provide global SA and high priority information for each section; (<b>2</b>) Navigation route guidance (No real data due to GPS disconnection); (<b>3</b>) Traffic information (it indicates traffic information on the route); (<b>4</b>) Lane information (The color highlights which lane is free or heavy); and (<b>5</b>) Other information (the personal schedule which can influence the driving strategy).</p>
Full article ">Figure 11
<p>Descriptions of the interaction screen between the SA system and the participant (Source: Own creation): (<b>1</b>) It supports [Direct Perception]. The driver’s attention is aroused by proactively blinking red and green colors, and attention switches quickly from driving to the situation information; (<b>2</b>) It supports [Active Communication]. The driver can communicate actively with the system, and voice interaction helps comprehension of the situation; (<b>3</b>) After the driver understands the situation, the driver can ask for more information, which supports projection; (<b>4</b>) Since the system tells the driver the disadvantage of the current route, the driver can change the driving strategy; and (<b>5</b>) The driver checks the options to change the driving strategy.</p>
Full article ">Figure 12
<p>Descriptions of the interaction screen for a sudden braking situation (Source: Own creation): (<b>1</b>,<b>2</b>) The information covers a broad range of the situation but focuses on what is important. The driver’s attention is aroused by proactively blinking red and green colors, and attention switches quickly from driving to the situation information [Direct Perception]; and (<b>3</b>) It supports [Active Communication]. The driver can communicate actively with the system, which can help comprehension of the situation.</p>
Full article ">Figure 13
<p>A driving simulator which can simulate various driving environments (Source: Own creation).</p>
Full article ">Figure 14
<p>The case study procedure (Source: Own creation).</p>
Full article ">Figure 15
<p>Mean Score and 95% Confidence Interval graph of SA and DMQ: (<b>a</b>) Mean score and 95% Confidence Interval of each SA level and (<b>b</b>) Mean score and 95% Confidence Interval of each dimension in DMQ (Source: Own creation).</p>
Full article ">Figure 16
<p>Mean Score and 95% Confidence Interval graph of ‘Hesitancy’ (Source: Own creation).</p>
Full article ">