Sensors, Volume 23, Issue 14 (July-2 2023) – 405 articles

Cover Story (view full-size image): Ultra-high-speed (UHS) image sensors are used to study fast scientific phenomena and may also be useful in medicine. Several recently published studies have achieved frame rates of up to millions of frames per second (Mfps) using advanced and/or customized fabrication processes. This paper presents a burst-mode (108-frame) UHS low-noise CMOS image sensor (CIS) based on charge sweep transfer gates in an unmodified, standard 180 nm front-side-illuminated CIS process. By optimizing the photodiode geometry, the 52.8 μm pitch pixels with a 20 × 20 μm² active area achieved a charge transfer time of less than 10 ns. A proof-of-concept CIS was designed and fabricated. Through characterization, it is shown that the designed CIS has the potential to achieve 20 Mfps with an input-referred noise of 5.1 e− rms. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
21 pages, 5796 KiB  
Article
Predicting the Early-Age Time-Dependent Behaviors of a Prestressed Concrete Beam by Using Physics-Informed Neural Network
by Hyun-Woo Park and Jin-Ho Hwang
Sensors 2023, 23(14), 6649; https://doi.org/10.3390/s23146649 - 24 Jul 2023
Cited by 2 | Viewed by 2253
Abstract
This paper proposes a physics-informed neural network (PINN) for predicting the early-age time-dependent behaviors of prestressed concrete beams. The PINN utilizes deep neural networks to learn the time-dependent coupling between the effective prestress force and the several factors that affect the time-dependent behavior of the beam, such as concrete creep and shrinkage, tendon relaxation, and changes in the concrete elastic modulus. Unlike traditional numerical algorithms such as the finite difference method, the PINN directly solves the integro-differential equation without the need for discretization, offering an efficient and accurate solution. Considering the trade-off between solution accuracy and computing cost, optimal hyperparameter combinations are determined for the PINN. The proposed PINN is verified through comparison with the numerical results from the finite difference method for two representative cross sections of PSC beams. Full article
(This article belongs to the Section Physical Sensors)
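The core idea, training a network against a physics residual rather than labelled data, can be illustrated with a toy problem. The sketch below is a minimal PINN for the first-order decay equation dy/dt = −k·y with y(0) = 1, not the paper's integro-differential Equation (34); the network size, collocation sampling, and learning rate are illustrative assumptions.

```python
# Minimal PINN sketch (assumption: a toy decay ODE stands in for the paper's
# Equation (34)). A small MLP learns y(t) by minimizing the physics residual
# dy/dt + k*y plus the initial-condition error at t = 0.
import torch

torch.manual_seed(0)
k = 0.5
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    t = torch.rand(128, 1, requires_grad=True) * 10.0          # collocation points
    y = net(t)
    dy_dt = torch.autograd.grad(y, t, torch.ones_like(y), create_graph=True)[0]
    residual = dy_dt + k * y                                    # physics residual
    loss = (residual ** 2).mean() + (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Compare the learned solution with the analytic one, exp(-k*t), at t = 2
print(net(torch.tensor([[2.0]])).item(), torch.exp(torch.tensor(-k * 2.0)).item())
```

In the paper, the residual would instead encode the coupled creep, shrinkage, relaxation, and modulus terms, and the network width, depth, and collocation density would be chosen from the trade-off curve described in the figures below.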
Show Figures

Figure 1: Simplified beam model for a PSC beam: (a) PSC beam model with a single equivalent tendon; (b) cross section at AA′.
Figure 2: The basic architecture of the PINN.
Figure 3: Schematic of determining optimal hyperparameters for the PINN using a trade-off curve (π_T) between accuracy error and computing cost, where π_T = α·π_E + (1 − α)·π_C and 0 ≤ α ≤ 1.
Figure 4: A 40 m long simply-supported PSC beam with a rectangular cross section.
Figure 5: Accuracy error of the PINN for solving Equation (34); '×' and 'o' represent mean values and outliers, respectively.
Figure 6: Poorly fitted results from the PINN for solving Equation (34) [N_N = 128, N_HL = 7, N_D = 2].
Figure 7: Well-fitted results from the PINN for solving Equation (34) [N_N = 128, N_HL = 7, N_D = 32].
Figure 8: Point-wise errors of the well-fitted results from the PINN in Figure 7.
Figure 9: Computing time of the PINN for solving Equation (34); '×' and 'o' represent mean values and outliers, respectively.
Figure 10: Determining the optimal N_D through trade-off curves for different α, as described in Figure 3.
Figure 11: The predicted solutions using the optimal N_D for different α from Figure 10, with N_N = 64 and N_HL = 8.
Figure 12: Point-wise errors of the predicted solutions from 28 days to 50 days, as in Figure 11.
Figure 13: Comparison of numerical results from the PINNs (N_N = 64, N_HL = 8, N_D = 32) to the forward finite difference method: (a) loss of prestress force; (b) stress at the top and bottom of the beam and at the tendon location.
Figure 14: Point-wise error of the numerical results from the PINNs in Figure 13: (a) loss of prestress force; (b) stress at the top and bottom of the beam and at the tendon location.
Figure 15: A 45 m long simply-supported PSC beam with an I-shaped cross section.
Figure 16: Comparison of numerical results from the PINNs (N_N = 64, N_HL = 8, N_D = 32) to the forward finite difference method: (a) loss of prestress force; (b) stress at the top and bottom of the beam and at the tendon location.
Figure 17: Point-wise error of the numerical results from the PINNs in Figure 16: (a) loss of prestress force; (b) stress at the top and bottom of the beam and at the tendon location.
30 pages, 35272 KiB  
Article
Graph Neural Network-Based Method of Spatiotemporal Land Cover Mapping Using Satellite Imagery
by Domen Kavran, Domen Mongus, Borut Žalik and Niko Lukač
Sensors 2023, 23(14), 6648; https://doi.org/10.3390/s23146648 - 24 Jul 2023
Cited by 11 | Viewed by 3297
Abstract
Multispectral satellite imagery offers a new perspective for spatial modelling, change detection and land cover classification. The increased demand for accurate classification of geographically diverse regions has led to advances in object-based methods. A novel spatiotemporal method is presented for object-based land cover classification of satellite imagery using a Graph Neural Network. This paper introduces an innovative representation of sequential satellite images as a directed graph obtained by connecting segmented land regions through time. The method's novel modular node classification pipeline utilises a Convolutional Neural Network as a multispectral image feature extraction network and a Graph Neural Network as a node classification model. To evaluate the performance of the proposed method, we utilised EfficientNetV2-S for feature extraction and the GraphSAGE algorithm with Long Short-Term Memory aggregation for node classification. This innovative application on Sentinel-2 L2A imagery produced complete 4-year intermonthly land cover classification maps for two regions: Graz in Austria, and the region of Portorož, Izola and Koper in Slovenia. The regions were classified with Corine Land Cover classes. In the level 2 classification of the Graz region, the method outperformed the state-of-the-art UNet model, achieving an average F1-score of 0.841 and an accuracy of 0.831, as opposed to UNet's 0.824 and 0.818, respectively. Similarly, the method demonstrated superior performance over UNet in both regions under the level 1 classification, which contains fewer classes. Individual classes were classified with accuracies of up to 99.17%. Full article
(This article belongs to the Special Issue Deep Learning for Environmental Remote Sensing)
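As a rough illustration of the node-classification idea, the sketch below builds a GraphSAGE-style layer whose neighbour aggregation is an LSTM over a node's temporal predecessors. Random vectors stand in for the EfficientNetV2-S segment embeddings, and the small hand-rolled layer is an assumption for clarity, not the authors' GraphSAGE implementation.

```python
# Sketch: per-segment CNN features aggregated over temporal neighbours with an
# LSTM, then classified per node (assumptions: random features, toy graph).
import torch

class SageLSTMLayer(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lstm = torch.nn.LSTM(in_dim, in_dim, batch_first=True)
        self.lin = torch.nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, neighbors):
        # neighbors: one index tensor per node, listing its temporal predecessors
        agg = []
        for nbr in neighbors:
            if len(nbr) == 0:
                agg.append(torch.zeros(x.size(1)))
            else:
                out, _ = self.lstm(x[nbr].unsqueeze(0))   # aggregate the sequence
                agg.append(out[0, -1])
        h = torch.cat([x, torch.stack(agg)], dim=1)       # self + aggregated
        return torch.relu(self.lin(h))

num_nodes, feat_dim, num_classes = 6, 32, 5
x = torch.randn(num_nodes, feat_dim)                      # stand-in CNN embeddings
neighbors = [torch.tensor([], dtype=torch.long), torch.tensor([0]),
             torch.tensor([0, 1]), torch.tensor([2]),
             torch.tensor([2, 3]), torch.tensor([4])]
layer = SageLSTMLayer(feat_dim, 64)
clf = torch.nn.Linear(64, num_classes)
logits = clf(layer(x, neighbors))
print(logits.shape)   # (6, num_classes): one land-cover prediction per node
```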
Show Figures

Figure 1: The proposed method's workflow with four main steps.
Figure 2: Examples of applying Felzenszwalb's image segmentation algorithm (σ = 0.5) on a Sentinel-2 13-layer image with 10 m spatial resolution of an example region of Graz, Austria in July 2017. A true-colour (RGB) composite of the multispectral image is shown.
Figure 3: An example of overlap portion calculations between segments, shown in (a), and creation of G with τ = 0.3 in (b) and τ = 0.4 in (c). (a) Overlap portion calculations between the red segment in time t and coloured segments in time t + 1; (b) τ = 0.3; (c) τ = 0.4.
Figure 4: Examples of G_sub construction for a v_target (red) in time t at T_lookback = [0, 2]. The G was constructed for a TS_mask with 3 images of a small example subregion using τ = 0.2. The orange edges connect all the included (enlarged) nodes in the G_sub. Each included v has a bbox drawn around the coloured s it represents. The G_sub in (a) includes only the v_target, and the G_sub in (b) contains v_target and 3 nodes with 3 edges between them.
Figure 5: Target node classification pipeline, which outputs the land cover class of the s_selected by classifying the v_target based on the input G_sub.
Figure 6: Intermonthly TS_image for the region of Graz and the region of Portorož, Izola and Koper. Each multispectral image contains C = 17 layers. The images in TS_image are visualised with a true-colour (RGB) composite.
Figure 7: Examples of the segmented regions. Images (a,c) show the true-colour (RGB) composites, while (b,d) show their respective segmentation masks: (a) the region of Graz in January 2019; (b) the mask for the region of Graz in January 2019; (c) the region of Portorož, Izola and Koper in November 2018; (d) the mask for the region of Portorož, Izola and Koper in November 2018.
Figure 8: Examples of CLC level 2 classification outputs obtained with the UNet model by Esri are shown in (a,c); the manually corrected ground truth derived from the respective UNet outputs is shown in (b,d): (a) classification output of Esri's UNet model for the region of Graz in January 2019; (b) manually corrected ground truth for the region of Graz in January 2019; (c) classification output of Esri's UNet model for the region of Portorož, Izola and Koper in November 2018; (d) manually corrected ground truth for the region of Portorož, Izola and Koper in November 2018.
Figure 9: Number of pixels per land cover class for each region in the dataset. Images (a,c) show the class distribution in TS_train_gt_cover, while (b,d) show the class distribution in TS_test_gt_cover: (a) training-set ground truth labels for the region of Graz; (b) test-set ground truth labels for the region of Graz; (c) training-set ground truth labels for the region of Portorož, Izola and Koper; (d) test-set ground truth labels for the region of Portorož, Izola and Koper.
Figure 10: Number of nodes per land cover class for each region in the dataset. Images (a,c) show the class distribution in G_train, while (b,d) show the class distribution in G_test: (a) G_train for the region of Graz; (b) G_test for the region of Graz; (c) G_train for the region of Portorož, Izola and Koper; (d) G_test for the region of Portorož, Izola and Koper.
Figure 11: CLC level 2 classification results for the region of Portorož, Izola and Koper, depending on the selection of the GNN and the value of T_lookback.
Figure 12: Confusion matrices for classification of the region of Graz, obtained with the best performing classification model of the proposed GNN-based method. (a) Confusion matrix for CLC level 2 classification. (b) Confusion matrix for CLC level 1 classification.
Figure 13: Confusion matrices for classification of the region of Portorož, Izola and Koper, obtained with the best performing classification model of the proposed GNN-based method. (a) Confusion matrix for CLC level 2 classification. (b) Confusion matrix for CLC level 1 classification.
Figure 14: Individual weighted F1-scores for CLC level 2 classification for the consecutive images in TS_test_image for both regions of the dataset, obtained with the best performing corresponding classification model of the proposed GNN-based method. (a) Results for the 11 consecutive images of the region of Graz. (b) Results for the 12 consecutive images of the region of Portorož, Izola and Koper.
Figure 15: Pixel-based heatmaps of cumulative incorrect CLC level 2 classifications for both regions of the dataset, derived from classifying the TS_test_image with the best performing corresponding classification model of the proposed GNN-based method. The colour transition from dark purple to bright yellow represents the frequency of misclassifications, with intensifying brightness signifying a higher count of errors. (a) Heatmap for the region of Graz. (b) Heatmap for the region of Portorož, Izola and Koper.
Figure 16: Examples of CLC level 2 ground truth (a,c,e,g) and predictions (b,d,f,h) for both regions of the dataset, obtained with the best performing corresponding classification model of the proposed GNN-based method: (a,b) May 2020, region of Graz; (c,d) June 2021, region of Graz; (e,f) January 2020, region of Portorož, Izola and Koper; (g,h) May 2021, region of Portorož, Izola and Koper.
Figure 17: Average node count and cumulative spatial coverage (total size of all segments) in a G_sub, along with metric scores, depending on the value of T_lookback. (a) Subgraph-related statistics for the region of Graz. (b) Subgraph-related statistics for the region of Portorož, Izola and Koper.
Figure 18: Class-specific CLC level 2 classification accuracies (sourced from the confusion matrices in Figure 12a and Figure 13a) for both dataset regions in relation to the distribution of ground truth land cover labels in G_train, derived from Figure 10a,c.
18 pages, 3828 KiB  
Article
Global Path Planning of Unmanned Surface Vehicle Based on Improved A-Star Algorithm
by Huixia Zhang, Yadong Tao and Wenliang Zhu
Sensors 2023, 23(14), 6647; https://doi.org/10.3390/s23146647 - 24 Jul 2023
Cited by 24 | Viewed by 2921
Abstract
To make unmanned surface vehicles better suited to environmental monitoring in inland rivers, reservoirs, or coastal waters, we propose a global path-planning algorithm based on the improved A-star algorithm. The path search is carried out using the raster method for environment modeling and the 8-neighborhood search method: a bidirectional search strategy and an improved evaluation function are used to reduce the total number of traversed nodes, and the planned path is smoothed to remove inflection points and solve the path-folding problem. The simulation results reveal that the improved A-star algorithm is more efficient in path planning than the conventional A-star algorithm, with fewer inflection points and traversed nodes, and that the smoothed paths better meet the actual navigation demands of unmanned surface vehicles. Full article
(This article belongs to the Section Vehicular Sensing)
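A minimal grid-based A* with an 8-neighbourhood is sketched below. The heuristic weight w stands in for the paper's improved evaluation function, whose exact form is not given in this listing, and the bidirectional search and B-spline smoothing steps are omitted.

```python
# A* sketch on an occupancy grid with 8-neighbour moves and a weighted heuristic
# (assumptions: Euclidean heuristic, weight w as a stand-in for the improved
# evaluation function; 1 marks an obstacle cell).
import heapq
import itertools
import math

def a_star(grid, start, goal, w=1.0):
    rows, cols = len(grid), len(grid[0])
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    h = lambda p: math.dist(p, goal)
    tie = itertools.count()                               # stable tie-breaking
    open_set = [(w * h(start), next(tie), 0.0, start, None)]
    came_from, g_best = {}, {start: 0.0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue
        came_from[cur] = parent
        if cur == goal:                                   # reconstruct the path
            path = [cur]
            while came_from[path[-1]] is not None:
                path.append(came_from[path[-1]])
            return path[::-1]
        for dr, dc in moves:
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue
            ng = g + math.hypot(dr, dc)                   # diagonal steps cost sqrt(2)
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                heapq.heappush(open_set, (ng + w * h(nxt), next(tie), ng, nxt, cur))
    return None

grid = [[0] * 10 for _ in range(10)]
for r in range(2, 8):
    grid[r][5] = 1                                        # a wall with gaps at both ends
print(a_star(grid, (0, 0), (9, 9), w=1.2))
```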
Show Figures

Figure 1: Grid map.
Figure 2: Diagram of different search neighborhoods: (a) 4-neighborhood search, (b) 8-neighborhood search, (c) 16-neighborhood search.
Figure 3: Diagram of heuristic functions: (a) Manhattan distance, (b) Euclidean distance, (c) Chebyshev distance.
Figure 4: Diagram of node vertical distance.
Figure 5: Search trees of different search methods: (a) search tree for direct search, (b) search tree for bidirectional search.
Figure 6: A-star algorithm bidirectional search mechanism.
Figure 7: Diagram of spline curves.
Figure 8: Diagram of a cubic quasi-uniform B-spline curve.
Figure 9: Path planning process of the improved A-star algorithm.
Figure 10: Diagram of path planning: (a) robot_radius = 0, (b) robot_radius = 1.
Figure 11: Simulation results of different methods: (a) the conventional A-star algorithm, (b) improved evaluation function, (c) bidirectional search.
Figure 12: Simulation results for the 40 × 40 map: (a) path of the conventional A-star algorithm, (b) path of the improved A-star algorithm, (c) smoothed path.
Figure 13: Simulation results for the 60 × 60 map: (a) path of the conventional A-star algorithm, (b) path of the improved A-star algorithm, (c) smoothed path.
Figure 14: Simulation results for the 80 × 80 map: (a) path of the conventional A-star algorithm, (b) path of the improved A-star algorithm, (c) smoothed path.
Figure 15: Simulation results for different initial and goal points on the 80 × 80 map: (a) path of the conventional A-star algorithm ((15, 10), (75, 50)); (b) path of the improved A-star algorithm ((15, 10), (75, 50)); (c) path of the conventional A-star algorithm ((15, 63), (65, 20)); (d) path of the improved A-star algorithm ((15, 63), (65, 20)).
15 pages, 10414 KiB  
Article
Traction Machine State Recognition Method Based on DPCA Algorithm and Convolution Neural Network
by Dongyang Li, Jianyi Yang, Zaisheng Pan and Nanyang Li
Sensors 2023, 23(14), 6646; https://doi.org/10.3390/s23146646 - 24 Jul 2023
Viewed by 1246
Abstract
It is important to improve the identification accuracy of the operating status of elevator traction machines. The distribution difference of the time-frequency signals utilized to identify operating circumstances is modest, making it difficult to extract features from the vibration signals of traction machines under various operating conditions and leading to low recognition accuracy. A novel method for identifying the operating status of traction machines based on a signal demodulation method and a convolutional neural network (CNN) is proposed. The original vibration time-frequency signals are demodulated by the demodulation method based on time-frequency analysis and principal component analysis (DPCA). Firstly, the signal demodulation method based on principal component analysis is used to extract the modulation features of the experimentally measured vibration signals. Then, the CNN is used for feature vector extraction, and the training model is obtained through multiple iterations to achieve automatic recognition of the running state. The experimental results show that the proposed method can effectively extract feature parameters under different states. The diagnostic accuracy is up to 96.94%, which is about 16.61% higher than that of conventional methods. It provides a feasible solution for identifying the operating status of elevator traction machines. Full article
(This article belongs to the Section Electronic Sensors)
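The demodulation idea can be sketched as PCA over short-time spectral frames: the projection onto the first principal component tracks the low-frequency modulation of a carrier. This is an illustrative assumption about the DPCA step, not the paper's exact algorithm, and the VGG16 classification stage is omitted.

```python
# Toy demodulation sketch: STFT frames -> PCA -> first-component projection
# approximates the modulation envelope of a simulated amplitude-modulated signal.
import numpy as np
from scipy.signal import stft

fs = 1000
t = np.arange(0, 2.0, 1 / fs)
carrier = np.sin(2 * np.pi * 120 * t)
vib = (1 + 0.5 * np.sin(2 * np.pi * 5 * t)) * carrier + 0.1 * np.random.randn(t.size)

_, _, Z = stft(vib, fs=fs, nperseg=128)      # time-frequency representation
frames = np.abs(Z).T                          # one row per time frame
frames -= frames.mean(axis=0)                 # center before PCA
_, _, vt = np.linalg.svd(frames, full_matrices=False)
envelope = frames @ vt[0]                     # projection on the first principal component
print(envelope.shape)                         # one demodulated value per frame
```

In the paper, such demodulated representations are rendered as images and passed to a VGG16-based CNN for state recognition.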
Show Figures

Figure 1: DPCA-VGG16 state recognition model.
Figure 2: VGG16 structure.
Figure 3: Experimental system.
Figure 4: Signal demodulation diagrams: (a) input frequency of 5 Hz; (b) 10 Hz; (c) 15 Hz; (d) 20 Hz; (e) 25 Hz; (f) 30 Hz.
Figure 5: Accuracy and loss curves: (a) vertical diameter direction; (b) horizontal radial; (c) axial direction.
Figure 6: Confusion matrix of operation status identification results: (a) vertical diameter direction; (b) horizontal radial; (c) axial direction.
Figure 7: PCA dimensionality reduction diagram of operating state recognition results: (a) vertical diameter direction; (b) horizontal radial; (c) axial direction.
Figure 8: Confusion matrix identified by the time-domain diagram: (a) vertical diameter direction; (b) horizontal radial; (c) axial direction.
Figure 9: ResNet network accuracy and loss curve.
Figure 10: AlexNet network accuracy and loss curve.
20 pages, 6100 KiB  
Article
Rolling Bearing Fault Diagnosis Based on Support Vector Machine Optimized by Improved Grey Wolf Algorithm
by Weijie Shen, Maohua Xiao, Zhenyu Wang and Xinmin Song
Sensors 2023, 23(14), 6645; https://doi.org/10.3390/s23146645 - 24 Jul 2023
Cited by 10 | Viewed by 1739
Abstract
This study targets the low accuracy and efficiency of the support vector machine (SVM) algorithm in rolling bearing fault diagnosis. An improved grey wolf optimizer (IGWO) algorithm was proposed based on deep learning and a swarm intelligence optimization algorithm to optimize the structural parameters of SVM and improve the rolling bearing fault diagnosis. A nonlinear contraction factor update strategy was also proposed. The variable coefficient changes with the shrinkage factor α. Thus, the search ability was balanced between the early and late stages by controlling the dynamic changes of the variable coefficient. In the early stages of optimization, its speed is low to avoid falling into local optima. In the later stages of optimization, the speed is higher, and finding the optimal solution is easier, balancing the global and local optimization capabilities to achieve efficient convergence. The dynamic weight update strategy was adopted to perform position updates based on adaptive dynamic weights. First, the dataset of Case Western Reserve University was used for simulation, and the results showed that the diagnosis accuracy of IGWO-SVM was 98.75%. Then, the IGWO-SVM model was trained and tested using data obtained from the full-life-cycle test platform of mechanical transmission bearings independently researched and developed by Nanjing Agricultural University. The fault diagnosis accuracy and the convergence value of the fitness curve were compared with those of the PSO-SVM (particle swarm optimization) and GWO-SVM diagnosis models. Results showed that the IGWO-SVM model had the highest rolling bearing fault diagnosis accuracy and the best diagnosis convergence. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)
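A minimal grey wolf optimizer with a nonlinear shrinkage-factor schedule is sketched below. The cosine-shaped decay and the toy sphere objective are assumptions; in the paper the decision variables would be the SVM penalty and kernel parameters and the fitness would be the fault-diagnosis accuracy.

```python
# Minimal GWO sketch with a nonlinear contraction factor a(t) decaying from 2 to 0
# (assumptions: cosine decay, sphere fitness instead of SVM cross-validation).
import numpy as np

def gwo(fitness, dim=2, n_wolves=20, iters=100, lb=-5.0, ub=5.0):
    rng = np.random.default_rng(0)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        scores = np.apply_along_axis(fitness, 1, X)
        alpha, beta, delta = X[np.argsort(scores)[:3]]     # three best wolves
        a = 2 * np.cos(np.pi * t / (2 * iters))            # nonlinear decay 2 -> 0
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - X[i])
            X[i] = np.clip(new / 3.0, lb, ub)               # average of the three pulls
    scores = np.apply_along_axis(fitness, 1, X)
    return X[np.argmin(scores)], scores.min()

best, val = gwo(lambda x: np.sum(x ** 2))
print(best, val)   # here the decision vector would be the SVM (C, gamma) pair
```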
Show Figures

Figure 1: Social hierarchy of grey wolves (figure provided by "Grey wolf optimizer: a review of recent variants and applications").
Figure 2: How alpha, beta, delta, and omega are defined in GWO (figure provided by "GWO: a review of recent variants and applications").
Figure 3: Flow chart of the GWO algorithm.
Figure 4: IGWO-SVM fault diagnosis model.
Figure 5: Case Western Reserve University bearing fault test bench (figure provided by the Case School of Engineering).
Figure 6: Vibration signals of four types of rolling bearings in different states: (a) normal bearings, (b) bearings with faulty inner rings, (c) bearings with faulty rolling elements, (d) bearings with faulty outer rings.
Figure 7: The fitness of the three models: (a) PSO, (b) GWO, (c) IGWO. The predictions of the three models: (d) PSO, (e) GWO, (f) IGWO.
Figure 8: Test material: (a) general layout of the test bench, (b) schematic diagram of the main structure of the test bed, (c) normal bearing, (d) bearing with faulty inner ring, (e) bearing with faulty outer ring, (f) bearing with faulty rolling element.
Figure 9: Time-domain waveform diagrams of vibration signals of the four fault types: (a) vibration signal of a normal bearing, (b) vibration signal of a bearing with a faulty inner ring, (c) vibration signal of a bearing with a faulty rolling element, (d) vibration signal of a bearing with a faulty outer ring.
Figure 10: Fault diagnosis results of the SVM model with different optimization algorithms: (a) PSO, (b) GWO, (c) IGWO.
Figure 11: Fitness curves of different optimization algorithms.
Figure 12: Comparison of optimization algorithms for SVM parameters.
19 pages, 2200 KiB  
Article
Automatic Recognition Reading Method of Pointer Meter Based on YOLOv5-MR Model
by Le Zou, Kai Wang, Xiaofeng Wang, Jie Zhang, Rui Li and Zhize Wu
Sensors 2023, 23(14), 6644; https://doi.org/10.3390/s23146644 - 24 Jul 2023
Cited by 10 | Viewed by 2679
Abstract
Meter reading is an important part of intelligent inspection, and current meter reading methods based on target detection suffer from low accuracy and large errors. In order to improve the accuracy of automatic meter reading, this paper proposes an automatic reading method for pointer-type meters based on the YOLOv5-Meter Reading (YOLOv5-MR) model. Firstly, in order to improve the detection performance for small targets in the YOLOv5 framework, a multi-scale target detection layer is added to the framework, and a set of anchors is designed based on the lightning rod dial dataset; secondly, the loss function and up-sampling method are improved to enhance the model training convergence speed and obtain the optimal up-sampling parameters; finally, a new method for fitting the circumscribed circle of the dial is proposed, and the dial reading is calculated by the center angle algorithm. The experimental results on the self-built dataset show that the Mean Average Precision (mAP) of the YOLOv5-MR target detection model reaches 79%, which is 3% better than the YOLOv5 model, and that it outperforms other advanced pointer-type meter reading models. Full article
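The centre-angle reading step can be sketched as follows: given the detected dial centre, the zero and full-scale marks, and the pointer tip, the reading is the pointer's swept angle divided by the scale's angular span. The coordinate convention and the keypoints below are assumptions for illustration; the paper's detection and circle-fitting stages are not reproduced.

```python
# Centre-angle reading sketch (assumptions: math-convention coordinates with y up,
# and a scale that increases clockwise from the zero mark to the full-scale mark).
import math

def angle(center, point):
    return math.atan2(point[1] - center[1], point[0] - center[0])

def read_meter(center, zero_mark, full_mark, pointer_tip, full_scale=10.0):
    a0 = angle(center, zero_mark)
    a1 = angle(center, full_mark)
    ap = angle(center, pointer_tip)
    span = (a0 - a1) % (2 * math.pi)          # clockwise angular span of the scale
    swept = (a0 - ap) % (2 * math.pi)         # clockwise angle swept by the pointer
    return full_scale * swept / span

# Pointer pointing roughly upward on a 0-10 dial -> a reading slightly above 5
print(read_meter((0, 0), (-1, -1), (1, -1), (0.2, 1.0)))
```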
Show Figures

Figure 1: PANet architecture.
Figure 2: Illustration of the predicted bounding box.
Figure 3: YOLOv5-MR network structure.
Figure 4: The convergence of the three loss functions (GIoU, CIoU, EIoU) at the same anchor point and ground truth, where the yellow box represents the ground truth and the black box the initial position. The first row shows the convergence process of GIoU, the second row that of CIoU, and the third row that of EIoU. The pink, brown, and green boxes represent the convergence of the three prediction boxes from the 10th iteration to the 150th iteration, respectively.
Figure 5: Schematic diagram of (a) ordinary convolution and (b) transposed convolution.
Figure 6: The recognition effect of the YOLOv5-MR model. Predicted-Reading represents the predicted reading, and True Reading represents the actual reading. In the rectangular box in the center of the dial, the green number 2 represents scale value 2, the yellow numbers 0 and 4 represent scale values 0 and 4, the purple number 6 represents scale value 6, the black number 8 represents scale value 8, and the red number 10 represents scale value 10. The purple center marks the predicted dial center, and the black 000…0 marks the true center of the dial.
Figure 7: Schematic diagram of the smallest circumscribed circle fitted to an ellipse.
Figure 8: Calculation of the angle between the individual tick marks and the pointer.
Figure 9: Setting the cluster center to 8 for k-means clustering.
Figure 10: The first row shows the original images, and the second row shows the YOLOv5-MR recognition results. Predicted-Reading represents the final reading result of the YOLOv5-MR model, and True Reading represents the actual reading. In the rectangular box in the center of the dial in the second row, the green number 2 represents scale value 2, the yellow numbers 0 and 4 represent scale values 0 and 4, the purple number 6 represents scale value 6, the black number 8 represents scale value 8, and the red number 10 represents scale value 10. The purple center marks the predicted dial center, and the black 000…0 marks the true center of the dial.
Figure 11: Loss comparison chart.
16 pages, 5648 KiB  
Article
Automated Laser-Fiber Coupling Module for Optical-Resolution Photoacoustic Microscopy
by Seongyi Han, Hyunjun Kye, Chang-Seok Kim, Tae-Kyoung Kim, Jinwoo Yoo and Jeesu Kim
Sensors 2023, 23(14), 6643; https://doi.org/10.3390/s23146643 - 24 Jul 2023
Viewed by 2064
Abstract
Photoacoustic imaging has emerged as a promising biomedical imaging technique that enables visualization of the optical absorption characteristics of biological tissues in vivo. Among the different photoacoustic imaging system configurations, optical-resolution photoacoustic microscopy stands out by providing high spatial resolution using a tightly focused laser beam, which is typically transmitted through optical fibers. Achieving high-quality images depends significantly on optical fluence, which is directly proportional to the signal-to-noise ratio. Hence, optimizing the laser-fiber coupling is critical. Conventional coupling systems require manual adjustment of the optical path to direct the laser beam into the fiber, which is a repetitive and time-consuming process. In this study, we propose an automated laser-fiber coupling module that optimizes laser delivery and minimizes the need for manual intervention. By incorporating a motor-mounted mirror holder and proportional derivative control, we successfully achieved efficient and robust laser delivery. The performance of the proposed system was evaluated using a leaf-skeleton phantom in vitro and a human finger in vivo, resulting in high-quality photoacoustic images. This innovation has the potential to significantly enhance the quality and efficiency of optical-resolution photoacoustic microscopy. Full article
(This article belongs to the Special Issue Photoacoustic Imaging and Sensing)
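A toy version of the automated coupling loop is sketched below: a 2-D Gaussian stands in for the measured coupling efficiency as a function of the two mirror angles, and a finite-difference gradient with proportional and derivative terms stands in for the paper's control law. All gains, dither sizes, and the efficiency model are illustrative assumptions, not the authors' implementation.

```python
# Toy automated-coupling sketch: nudge two mirror angles to maximize a simulated
# photodiode reading using a PD-style correction on a finite-difference gradient.
import numpy as np

def power(angles):
    target = np.array([0.30, -0.15])          # unknown optimal mirror angles (made up)
    return np.exp(-20.0 * np.sum((angles - target) ** 2))

angles = np.array([0.0, 0.0])
kp, kd, dither = 0.02, 0.01, 1e-3
prev_grad = np.zeros(2)
for _ in range(200):
    grad = np.array([
        (power(angles + dither * e) - power(angles - dither * e)) / (2 * dither)
        for e in np.eye(2)                    # dither each axis to estimate the slope
    ])
    angles += kp * grad + kd * (grad - prev_grad)   # proportional + derivative terms
    prev_grad = grad

print(angles, power(angles))                  # angles approach the optimum, power -> 1
```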
Show Figures

Figure 1: Schematic illustration of the system. (a) The automated laser-fiber coupling module, which selectively connects to the power measurement unit or the OR-PAM module; Ch1 and Ch2 control the horizontal and vertical angles of MM1, respectively, while Ch3 and Ch4 control the horizontal and vertical angles of MM2. (b) The OR-PAM module for scanning the imaging targets. PA, photoacoustic; OR-PAM, optical-resolution PA microscopy; AT, attenuator; MM, motorized mirror; Ch, control channel; CL, collimator; OF, optical fiber; PD, photodiode; OL, objective lens; BC, opto-acoustic beam combiner; AL, acoustic lens; TR, ultrasound transducer; AMP, amplifier; DAQ, data acquisition module; MEMS, microelectromechanical system.
Figure 2: Schematic illustration of the control algorithm for optimizing the laser-fiber coupling efficiency.
Figure 3: Evaluation of the automated laser-fiber coupling module. (a) The measured laser power during the optimization process; dashed lines indicate the boundary of each iteration, and laser power variations are depicted in different colors for each channel. (b) The measured laser power during the 15 optimizations; the yellow area is a 5% deviation of the mean laser power. (c) Beam profiles of the delivered laser in the disturbed and optimized states.
Figure 4: Feasibility test of the automated laser-fiber coupling module through in vitro phantom imaging. (a) The measured laser power during multiple image acquisitions; red and green points represent disturbed (D1–D4) and optimized (O1–O4) points, respectively. (b) PA images of a leaf-skeleton phantom at each disturbed or optimized point; white and blue areas denote the signal and background regions, respectively; laser power was measured before each image acquisition. (c) The average and standard deviation of PA amplitude in the signal region (white dashed area in (b)) at each measured point; red and green represent disturbed and optimized points, respectively. (d) Correlated SNR and laser power; the dashed line is a first-order polynomial regression of the measured SNR. PA, photoacoustic; SNR, signal-to-noise ratio.
Figure 5: In vivo validation of the automated laser-fiber coupling module. (a) The measured laser power during multiple PA image acquisitions. (b) A photograph of the imaging area on the little finger of a healthy volunteer. (c) Digital microscope image of the microvasculature in the imaging area. (d) PA images of the imaging area in the disturbed (D) and optimized (O1–O4) states; white and blue areas denote the signal and background regions, respectively; black arrows in (c,d) denote corresponding blood vessels. PA, photoacoustic.
15 pages, 3833 KiB  
Article
Multi-Temporal Hyperspectral Classification of Grassland Using Transformer Network
by Xuanhe Zhao, Shengwei Zhang, Ruifeng Shi, Weihong Yan and Xin Pan
Sensors 2023, 23(14), 6642; https://doi.org/10.3390/s23146642 - 24 Jul 2023
Cited by 7 | Viewed by 1781
Abstract
In recent years, grassland monitoring has shifted from traditional field surveys to remote-sensing-based methods, but the desired level of accuracy has not yet been obtained. Multi-temporal hyperspectral data contain valuable information about species and growth-season differences, making them a promising tool for grassland classification. Transformer networks can directly extract long-sequence features, which is superior to other commonly used analysis methods. This study aims to explore the transformer network's potential in the field of multi-temporal hyperspectral data by fine-tuning it and introducing it into high-powered grassland detection tasks. Subsequently, the multi-temporal hyperspectral classification of grassland samples using the transformer network (MHCgT) is proposed. To begin, a total of 16,800 multi-temporal hyperspectral data samples were collected from grassland at different growth stages over several years using a hyperspectral imager in the wavelength range of 400–1000 nm. Second, the MHCgT network was established with a hierarchical architecture, which generates a multi-resolution representation that is beneficial for the classification of grass hyperspectral time series. The MHCgT employs a multi-head self-attention mechanism to extract features, avoiding information loss. Finally, an ablation study of MHCgT and comparative experiments with state-of-the-art methods were conducted. The results showed that the proposed framework achieved a high accuracy rate of 98.51% in identifying grassland multi-temporal hyperspectral data, which outperformed CNN, LSTM-RNN, SVM, RF, and DT by 6.42–26.23%. Moreover, the average classification accuracy of each species was above 95%, and the August mature period was easier to identify than the June growth stage. Overall, the proposed MHCgT framework shows great potential for precisely identifying multi-temporal hyperspectral species and has significant applications in sustainable grassland management and species diversity assessment. Full article
(This article belongs to the Section Sensing and Imaging)
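A minimal transformer classifier over a multi-temporal spectral sequence is sketched below. The numbers of bands, dates, and layers are assumptions, and a stock PyTorch TransformerEncoder replaces the MHCgT's hierarchical multi-resolution design; the sketch only illustrates treating each acquisition date as one token.

```python
# Sketch: multi-temporal spectra as a token sequence for a transformer encoder
# (assumptions: 4 acquisition dates x 204 bands of random data, 7 species classes).
import torch

n_dates, n_bands, n_classes, d_model = 4, 204, 7, 128
proj = torch.nn.Linear(n_bands, d_model)                 # per-date spectral embedding
encoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=2,
)
head = torch.nn.Linear(d_model, n_classes)

x = torch.randn(16, n_dates, n_bands)                    # a batch of multi-temporal spectra
tokens = proj(x)                                         # one token per acquisition date
logits = head(encoder(tokens).mean(dim=1))               # pool over dates, then classify
print(logits.shape)                                      # torch.Size([16, 7])
```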
Show Figures

Figure 1: Location of the study area within China (Inner Mongolia Autonomous Region) and the grass species located in the study site over a topographic map (https://ditu.amap.com, accessed on 23 January 2023).
Figure 2: The overall architecture of the proposed MHCgT framework.
Figure 3: Structure of the encoder block.
Figure 4: Multi-temporal hyperspectral data of seven grass species: (a) Medicago sativa, (b) Medicago ruthenica, (c) Elymus canadensis, (d) Hordeum brevisubulatum, (e) Medicago varia, (f) Onobrychis viciaefolia, (g) Bromus ciliatus.
Figure 5: Relationship between epoch, accuracy, and loss in the MHCgT network: (a) 202006, (b) 202008, (c) 202106, (d) 202108. acc: training accuracy; val_acc: validation accuracy; loss: training loss; val_loss: validation loss.
Figure 6: Confusion matrix for the seven classes; rows represent actual classes and columns represent predicted classes (test set 10%).
Figure 7: Confusion matrices of grassland multi-temporal hyperspectral data using the MHCgT network (test set 10%); rows indicate correct labels and columns indicate predicted labels.
26 pages, 712 KiB  
Article
Enabling Trust and Security in Digital Twin Management: A Blockchain-Based Approach with Ethereum and IPFS
by Austine Onwubiko, Raman Singh, Shahid Awan, Zeeshan Pervez and Naeem Ramzan
Sensors 2023, 23(14), 6641; https://doi.org/10.3390/s23146641 - 24 Jul 2023
Cited by 11 | Viewed by 2717
Abstract
The emergence of Industry 5.0 has highlighted the significance of information usage, processing, and data analysis when maintaining physical assets. This has enabled the creation of the Digital Twin (DT). Information about an asset is generated and consumed during its entire life cycle. The main goal of the DT is to virtually connect and represent physical assets as close to reality as possible. Unfortunately, the lack of security and trust among DT participants remains a problem as a result of data sharing. This issue cannot be resolved with a central authority when dealing with large organisations. Blockchain technology has been proposed as a solution for DT information sharing and security challenges. This paper proposes a blockchain-based solution for digital twins using the Ethereum blockchain, together with a performance and cost analysis. This solution employs a smart contract for information management and access control for stakeholders of the digital twin, which is secure and tamper-proof. The implementation is based on Ethereum and IPFS. We use IPFS storage servers to store stakeholders' details and manage information. A real-world use case of a smartphone production line, where a conveyor belt is used to carry different parts, is presented to demonstrate the proposed system. The performance evaluation of our proposed system shows that it is secure and achieves a performance improvement compared with other methods. The comparison of results with state-of-the-art methods showed that the proposed system consumed fewer resources, with an 8% decrease in transaction cost. The execution cost increased by 10%, but the cost of ether was 93% less than that of the existing methods. Full article
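The access-control logic such a smart contract might encode can be sketched in plain Python: only the owner registers stakeholders, only stakeholders may upload, and a content hash (standing in for an IPFS CID) is recorded per asset. The dict-based class below is an assumption for illustration, not the authors' Solidity contract or their web3 integration.

```python
# Plain-Python stand-in for the DT smart contract's registration and upload rules
# (assumption: a sha256 digest represents the IPFS content identifier).
import hashlib

class DigitalTwinRegistry:
    def __init__(self, owner):
        self.owner = owner
        self.stakeholders = {owner}
        self.records = {}                          # asset id -> list of content hashes

    def register(self, caller, new_stakeholder):
        if caller != self.owner:
            raise PermissionError("only the owner may register stakeholders")
        self.stakeholders.add(new_stakeholder)

    def upload(self, caller, asset_id, payload: bytes):
        if caller not in self.stakeholders:
            raise PermissionError("caller is not a registered stakeholder")
        cid = hashlib.sha256(payload).hexdigest()  # stand-in for an IPFS CID
        self.records.setdefault(asset_id, []).append(cid)
        return cid

registry = DigitalTwinRegistry("factory")
registry.register("factory", "maintenance_team")
print(registry.upload("maintenance_team", "conveyor-01", b"belt speed: 1.2 m/s"))
```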
Show Figures

Figure 1: Three principal components of digital twins.
Figure 2: Digital twins with blockchain-based data storage.
Figure 3: The system architecture showing how the DT interacts with the blockchain.
Figure 4: Sequence diagram showing all the interactions between the participants of the smart contract during the file upload process.
Figure 5: Two-dimensional view of an industrial-sized conveyor.
Figure 6: Three-dimensional view of an industrial-sized conveyor.
Figure 7: Comparison of experimental results of the proposed system and the existing system on registration transaction cost and execution cost.
Figure 8: Comparison of experimental results of the proposed system and the existing system on deployment transaction cost.
Figure 9: Comparison of experimental results of the proposed system and the existing system on deployment ether cost.
19 pages, 3659 KiB  
Article
Enhancing Speech Emotion Recognition Using Dual Feature Extraction Encoders
by Ilkhomjon Pulatov, Rashid Oteniyazov, Fazliddin Makhmudov and Young-Im Cho
Sensors 2023, 23(14), 6640; https://doi.org/10.3390/s23146640 - 24 Jul 2023
Cited by 6 | Viewed by 3151
Abstract
Understanding and identifying emotional cues in human speech is a crucial aspect of human–computer communication. The application of computer technology in dissecting and deciphering emotions, along with the extraction of relevant emotional characteristics from speech, forms a significant part of this process. The objective of this study was to architect an innovative framework for speech emotion recognition predicated on spectrograms and semantic feature transcribers, aiming to bolster performance precision by acknowledging the conspicuous inadequacies in extant methodologies and rectifying them. To procure invaluable attributes for speech detection, this investigation leveraged two divergent strategies. Primarily, a wholly convolutional neural network model was engaged to transcribe speech spectrograms. Subsequently, a cutting-edge Mel-frequency cepstral coefficient feature abstraction approach was adopted and integrated with Speech2Vec for semantic feature encoding. These dual forms of attributes underwent individual processing before they were channeled into a long short-term memory network and a comprehensive connected layer for supplementary representation. By doing so, we aimed to bolster the sophistication and efficacy of our speech emotion detection model, thereby enhancing its potential to accurately recognize and interpret emotion from human speech. The proposed mechanism underwent a rigorous evaluation process employing two distinct databases: RAVDESS and EMO-DB. The outcome showed superior performance when compared with established models, registering an impressive accuracy of 94.8% on the RAVDESS dataset and a commendable 94.0% on the EMO-DB dataset. This performance underscores the efficacy of our innovative system in the realm of speech emotion recognition, as it outperforms current frameworks in accuracy metrics. Full article
(This article belongs to the Special Issue Application of Semantic Technologies in Sensors and Sensing Systems)
Show Figures

Figure 1

Figure 1
<p>Modeling process of the proposed system.</p>
Full article ">Figure 2
<p>MFCC feature extraction procedure.</p>
Full article ">Figure 3
<p>Speech2Vec via skip-gram.</p>
Full article ">Figure 4
<p>Spectrogram feature encoder.</p>
Full article ">Figure 5
<p>RAVDESS dataset emotion distribution.</p>
Full article ">Figure 6
<p>EMO-DB dataset emotion distribution.</p>
Full article ">Figure 7
<p>The performance of the proposed model on the RAVDESS dataset.</p>
Full article ">Figure 8
<p>The performance of the proposed model on the EMO-DB dataset.</p>
Full article ">Figure 9
<p>The confusion matrices on the (<b>a</b>) RAVDESS and (<b>b</b>) EMO-DB datasets.</p>
Full article ">
26 pages, 7913 KiB  
Article
Contextual Cluster-Based Glow-Worm Swarm Optimization (GSO) Coupled Wireless Sensor Networks for Smart Cities
by P. S. Ramesh, P. Srivani, Miroslav Mahdal, Lingala Sivaranjani, Shafiqul Abidin, Shivakumar Kagi and Muniyandy Elangovan
Sensors 2023, 23(14), 6639; https://doi.org/10.3390/s23146639 - 24 Jul 2023
Cited by 3 | Viewed by 1657
Abstract
The cluster technique involves the creation of clusters and the selection of a cluster head (CH), which connects sensor nodes, known as cluster members (CM), to the CH. The CH receives data from the CM and collects data from sensor nodes, removing unnecessary [...] Read more.
The cluster technique involves the creation of clusters and the selection of a cluster head (CH), which connects sensor nodes, known as cluster members (CM), to the CH. The CH receives data from the CM and collects data from sensor nodes, removing unnecessary data to conserve energy. It compresses the data and transmits them to base stations through multi-hop to reduce network load. Since CMs only communicate with their CH and have a limited range, they avoid redundant information. However, the CH’s routing, compression, and aggregation functions consume power quickly compared to other protocols, like TPGF, LQEAR, MPRM, and P-LQCLR. To address energy usage in wireless sensor networks (WSNs), heterogeneous high-power nodes (HPN) are used to balance energy consumption. CHs close to the base station require effective algorithms for improvement. The cluster-based glow-worm optimization technique utilizes random clustering, distributed cluster leader selection, and link-based routing. The cluster head routes data to the next group leader, balancing energy utilization in the WSN. This algorithm reduces energy consumption through multi-hop communication, cluster construction, and cluster head election. The glow-worm optimization technique allows for faster convergence and improved multi-parameter selection. By combining these methods, a new routing scheme is proposed to extend the network’s lifetime and balance energy in various environments. However, the proposed model consumes more energy than TPGF, and other protocols for packets with 0 or 1 retransmission count in a 260-node network. This is mainly due to the short INFO packets during the neighbor discovery period and the increased hop count of the proposed derived pathways. Herein, simulations are conducted to evaluate the technique’s throughput and energy efficiency. Full article
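The glow-worm swarm optimization step at the heart of the scheme (a luciferin update followed by a probabilistic move toward a brighter neighbour) can be sketched generically as below. The fitness function, parameter values, and swarm size are placeholders, not the paper's configuration.

```python
import numpy as np

def gso_step(pos, luciferin, fitness, rho=0.4, gamma=0.6, step=0.03, radius=0.5):
    """One iteration of basic glow-worm swarm optimization (GSO): decay and
    reinforce the luciferin, then move each agent a small step toward a
    randomly chosen brighter neighbour within `radius`."""
    luciferin = (1 - rho) * luciferin + gamma * fitness(pos)   # luciferin update
    new_pos = pos.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = np.where((d < radius) & (luciferin > luciferin[i]))[0]
        if nbrs.size:                                          # follow a brighter neighbour
            j = np.random.choice(nbrs)
            new_pos[i] += step * (pos[j] - pos[i]) / (np.linalg.norm(pos[j] - pos[i]) + 1e-12)
    return new_pos, luciferin

# toy run: agents drift toward the brightest region of a placeholder fitness field
fitness = lambda p: -np.sum(p**2, axis=1)          # maximum at the origin
pos = np.random.uniform(-1, 1, size=(30, 2))
luci = np.full(30, 5.0)
for _ in range(100):
    pos, luci = gso_step(pos, luci, fitness)
print(np.round(pos.mean(axis=0), 3))               # mean position approaches the origin
```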
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1

Figure 1
<p>Cluster-based distributed WSN.</p>
Full article ">Figure 2
<p>Exit info from S2.</p>
Full article ">Figure 3
<p>Sensor S2 elected as CH.</p>
Full article ">Figure 4
<p>Sensor S2 broadcast release message.</p>
Full article ">Figure 5
<p>S1 becomes CH in the second set of sensors.</p>
Full article ">Figure 6
<p>Cluster development for non-CH.</p>
Full article ">Figure 7
<p>15 CH in WSN.</p>
Full article ">Figure 8
<p>Flowchart for the GSO algorithm.</p>
Full article ">Figure 9
<p>Ratio of average packet delivery.</p>
Full article ">Figure 10
<p>Average packets lost per day as a result of interference.</p>
Full article ">Figure 11
<p>Average packets lost by physical layer as a result of poor connection quality.</p>
Full article ">Figure 12
<p>Standard end-to-end delay.</p>
Full article ">Figure 13
<p>Average packet jitter.</p>
Full article ">Figure 14
<p>The average amount of energy used by the network.</p>
Full article ">Figure 15
<p>Hopping pattern in the initial path, on average.</p>
Full article ">Figure 16
<p>Hopping pattern in the second route, on average.</p>
Full article ">Figure 17
<p>Energy use of an average packet.</p>
Full article ">Figure 18
<p>Network lifespan, on average.</p>
Full article ">Figure 19
<p>The number of packets the sink received during the second round of the 1200s simulation.</p>
Full article ">Figure 20
<p>Standard end-to-end retransmission delays.</p>
Full article ">Figure 21
<p>Average retransmission jitter.</p>
Full article ">Figure 22
<p>Retransmission-related average network energy consumption.</p>
Full article ">Figure 23
<p>Average energy used for retransmission in a packet.</p>
Full article ">
18 pages, 4552 KiB  
Article
Adaptive Control Method for Gait Detection and Classification Devices with Inertial Measurement Unit
by Hyeonjong Kim, Ji-Won Kim and Junghyuk Ko
Sensors 2023, 23(14), 6638; https://doi.org/10.3390/s23146638 - 24 Jul 2023
Viewed by 1430
Abstract
Cueing and feedback training can be effective in maintaining or improving gait in individuals with Parkinson’s disease. We previously designed a rehabilitation assist device that can detect and classify a user’s gait at only the swing phase of the gait cycle, for the [...] Read more.
Cueing and feedback training can be effective in maintaining or improving gait in individuals with Parkinson’s disease. We previously designed a rehabilitation assist device that can detect and classify a user’s gait at only the swing phase of the gait cycle, for the ease of data processing. In this study, we analyzed the impact of various factors in a gait detection algorithm on the gait detection and classification rate (GDCR). We collected acceleration and angular velocity data from 25 participants (1 male and 24 females with an average age of 62 ± 6 years) using our device and analyzed the data using statistical methods. Based on these results, we developed an adaptive GDCR control algorithm using several equations and functions. We tested the algorithm under various virtual exercise scenarios using two control methods, based on acceleration and angular velocity, and found that the acceleration threshold was more effective in controlling the GDCR (average Spearman correlation −0.9996, p < 0.001) than the gyroscopic threshold. Our adaptive control algorithm was more effective in maintaining the target GDCR than the other algorithms (p < 0.001) with an average error of 0.10, while other tested methods showed average errors of 0.16 and 0.28. This algorithm has good scalability and can be adapted for future gait detection and classification applications. Full article
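The adaptive-control idea, i.e., nudging the acceleration threshold (AT) so that the measured GDCR tracks a target value, can be illustrated with a simple feedback loop. The update rule, gain, and simulated AT-to-GDCR response below are assumptions for illustration only, not the paper's equations; the sign of the update follows the reported negative correlation between AT and GDCR.

```python
def adapt_threshold(at, measured_gdcr, target_gdcr, gain=2000.0,
                    at_min=1000.0, at_max=15000.0):
    """Proportional-style update of the acceleration threshold (AT).
    GDCR falls as AT rises (negative correlation), so a GDCR above the
    target raises AT and a GDCR below the target lowers it."""
    at = at + gain * (measured_gdcr - target_gdcr)
    return min(max(at, at_min), at_max)

# toy closed loop with an assumed monotone AT -> GDCR response
def simulated_gdcr(at):
    return max(0.0, min(1.0, 1.0 - at / 16000.0))

at, target = 8000.0, 0.6
for _ in range(30):
    at = adapt_threshold(at, simulated_gdcr(at), target)
print(round(simulated_gdcr(at), 2))  # settles close to the 0.6 target
```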
Show Figures

Figure 1

Figure 1
<p>Definition of elements used in previous research to detect and classify a user’s gait.</p>
Full article ">Figure 2
<p>Figure illustrating the role of the AT and GT. (<b>a</b>) The case when a gait was not detected; (<b>b</b>) the case when a gait was detected and classified as a good gait; and (<b>c</b>) the case when a gait was detected but not classified as a good gait.</p>
Full article ">Figure 3
<p>Example results of previous research: (<b>a</b>) Results from non-PD subjects; and (<b>b</b>) Results from PD subjects.</p>
Full article ">Figure 4
<p>Equipment image and experimental setup: (<b>a</b>) equipment example; and (<b>b</b>) experimental setup.</p>
Full article ">Figure 5
<p>Shape of the average GDCR and FG. The solid curve is the average GDCR from the experimental dataset. The dashed curve, with a GDCR range from 0 to 1, is the normalized curve fitted to the average GDCR. The gray-colored area covers the average GDCR between the slowest (line below the average GDCR, 1.0 km/h) and fastest (line above the average GDCR, 3.0 km/h) walking speeds.</p>
Full article ">Figure 6
<p>(<b>a</b>) FG from the experimental results when GT was 15,000 and various sigmoid functions; and (<b>b</b>) average error of sigmoid functions from FG.</p>
Full article ">Figure 7
<p>Adaptive control algorithm flowchart.</p>
Full article ">Figure 8
<p>Various exercise scenarios and walking speeds. Each scenario is a combination of scenario elements: linear increase, linear decrease, random with upper and lower boundaries, sudden increase, and sudden decrease. The exercise procedure means exercise from start to end. #1: Linear increase. #2: Linear decrease. #3: Random with upper and lower boundaries. #4: Sudden decrease during linear increase. #5: Sudden increase during linear decrease. #6: Linear increase after linear decrease. #7: Linear decrease—random with upper and lower boundaries—linear increase. #8: Linear decrease—sudden increase—random with upper and lower boundaries. #9: Random with upper and lower boundaries after linear increase.</p>
Full article ">Figure 9
<p>Example of a virtual simulation with scenarios. If the walking speed changes from 1.0 km/h to 2.0 km/h and then to 3.0 km/h, the example simulation in scenario one follows the sequence shown in this figure.</p>
Full article ">Figure 10
<p>Average GDCR according to legs used and walking speeds. The range of AT was from 1000 to 15,000: (<b>a</b>) The relation between the GDCR and used legs (<span class="html-italic">n</span> = 246; *** <span class="html-italic">p</span> &lt; 0.001); and (<b>b</b>) relation between the GDCR and walking speeds (<span class="html-italic">n</span> = 246; *** <span class="html-italic">p</span> &lt; 0.001).</p>
Full article ">Figure 11
<p>Average GDCR according to the GT and walking speeds. Entire GT range (from 15,000 to 25,000): (<b>a</b>) shape of the GDCR curve according to the GT; and (<b>b</b>) relation between Spearman’s correlation coefficients and walking speeds (<span class="html-italic">n</span> = 441; *** <span class="html-italic">p</span> &lt; 0.001 vs. GT, for each walking speed. In sum, <span class="html-italic">n</span> = 2205).</p>
Full article ">Figure 12
<p>Average GDCR according to the AT and walking speeds: (<b>a</b>) shape of the GDCR curve according to the AT. AT more than 15,000 was trimmed in the figure since there were no significant changes in values; and (<b>b</b>) relation between Spearman’s correlation coefficients and walking speeds. (<span class="html-italic">n</span> = 39,921; *** <span class="html-italic">p</span> &lt; 0.001 vs. AT, for each walking speed. In sum, <span class="html-italic">n</span> = 199,605).</p>
Full article ">Figure 13
<p>The average error of each control method. (<span class="html-italic">n</span> = 225; *** <span class="html-italic">p</span> &lt; 0.001, +++ <span class="html-italic">p</span> &lt; 0.001, vs. adaptive, ++ <span class="html-italic">p</span> &lt; 0.01, vs. adaptive). Errors from the GDCR control algorithm came from a narrower AT range than other results, e.g., those of <a href="#sensors-23-06638-f010" class="html-fig">Figure 10</a>.</p>
Full article ">Figure 14
<p>The average error of each control method for every scenario. The number of cases was 25 per control method in each scenario, i.e., 75 per scenario in total. Errors from the GDCR control algorithm came from a narrower AT range, not the full range.</p>
Full article ">
24 pages, 3742 KiB  
Article
Gaussian-Filtered High-Frequency-Feature Trained Optimized BiLSTM Network for Spoofed-Speech Classification
by Hiren Mewada, Jawad F. Al-Asad, Faris A. Almalki, Adil H. Khan, Nouf Abdullah Almujally, Samir El-Nakla and Qamar Naith
Sensors 2023, 23(14), 6637; https://doi.org/10.3390/s23146637 - 24 Jul 2023
Cited by 4 | Viewed by 1744
Abstract
Voice-controlled devices are in demand due to their hands-free controls. However, using voice-controlled devices in sensitive scenarios like smartphone applications and financial transactions requires protection against fraudulent attacks referred to as “speech spoofing”. The algorithms used in spoof attacks are practically unknown; hence, [...] Read more.
Voice-controlled devices are in demand due to their hands-free controls. However, using voice-controlled devices in sensitive scenarios like smartphone applications and financial transactions requires protection against fraudulent attacks referred to as “speech spoofing”. The algorithms used in spoof attacks are practically unknown; hence, further analysis and development of spoof-detection models for improving spoof classification are required. A study of the spoofed-speech spectrum suggests that high-frequency features are able to discriminate genuine speech from spoofed speech well. Typically, linear or triangular filter banks are used to obtain high-frequency features. However, a Gaussian filter can extract more global information than a triangular filter. In addition, MFCC features are preferable among other speech features because of their lower covariance. Therefore, in this study, the use of a Gaussian filter is proposed for the extraction of inverted MFCC (iMFCC) features, providing high-frequency features. Complementary features are integrated with iMFCC to strengthen the features that aid in the discrimination of spoof speech. Deep learning has been proven to be efficient in classification applications, but the selection of its hyper-parameters and architecture is crucial and directly affects performance. Therefore, a Bayesian algorithm is used to optimize the BiLSTM network. Thus, in this study, we build a high-frequency-based optimized BiLSTM network to classify the spoofed-speech signal, and we present an extensive investigation using the ASVSpoof 2017 dataset. The optimized BiLSTM model is successfully trained with the least epoch and achieved a 99.58% validation accuracy. The proposed algorithm achieved a 6.58% EER on the evaluation dataset, with a relative improvement of 78% on a baseline spoof-identification system. Full article
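The feature idea of replacing triangular filters with Gaussian windows and flipping the Mel warp so that resolution concentrates at high frequencies can be sketched roughly as follows. The centre-frequency placement, the common bandwidth, and the DCT step are illustrative assumptions rather than the paper's exact recipe.

```python
import numpy as np
from scipy.fft import dct

def gaussian_inverted_mel_fbank(n_filters=20, n_fft=512, sr=16000):
    """Build a bank of Gaussian filters whose centres follow a Mel-style
    warp mirrored about the Nyquist frequency, so high frequencies get
    denser coverage (the 'inverted' MFCC idea)."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    hz = lambda m: 700.0 * (10 ** (m / 2595.0) - 1.0)
    centers = hz(np.linspace(mel(0), mel(sr / 2), n_filters + 2))[1:-1]
    centers = sr / 2 - centers[::-1]                 # flip: dense near Nyquist
    freqs = np.linspace(0, sr / 2, n_fft // 2 + 1)
    bw = (sr / 2) / (n_filters + 1)                  # crude common bandwidth
    return np.exp(-0.5 * ((freqs[None, :] - centers[:, None]) / bw) ** 2)

def imfcc_frame(power_spectrum, fbank, n_ceps=13):
    """Log filter-bank energies followed by a DCT, as in standard MFCC."""
    energies = fbank @ power_spectrum
    return dct(np.log(energies + 1e-10), norm="ortho")[:n_ceps]

fbank = gaussian_inverted_mel_fbank()
frame = np.abs(np.fft.rfft(np.random.randn(512))) ** 2   # placeholder speech frame
print(imfcc_frame(frame, fbank).shape)                   # (13,)
```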
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1

Figure 1
<p>Time-domain speech signal of the same person’s (<b>upper</b>) genuine waveform and (<b>lower</b>) spoofed waveform.</p>
Full article ">Figure 2
<p>Spectrogram of speech signal under various attacks. (<b>a</b>) Genuine speech. (<b>b</b>) Spoofed speech using high-quality recording and playback. (<b>c</b>) Spoofed speech using high-quality recording and weak-quality playback. (<b>d</b>) Spoofed speech using mid-quality recording and low-quality playback.</p>
Full article ">Figure 3
<p>Block diagram representing extraction of Inverted MFCC features emphasizing high-frequency regions.</p>
Full article ">Figure 4
<p>Process flow for feature extraction from the speech signal.</p>
Full article ">Figure 5
<p>Features’ t-SNE visualization for training dataset.</p>
Full article ">Figure 6
<p>Process flow of the proposed algorithm.</p>
Full article ">Figure 7
<p>Minimization of the objective function over a number of iterations to find optimum parameters of BiLSTM network.</p>
Full article ">Figure 8
<p>Analysis of accuracy and loss over epochs in the training of optimised BiLSTM network using ASVSpoof2017 dataset.</p>
Full article ">Figure 9
<p>Confusion matrix for validation dataset.</p>
Full article ">Figure 10
<p>Confusion matrix for evaluation dataset including all attacks.</p>
Full article ">Figure 11
<p>Receiver operating characteristic curve.</p>
Full article ">Figure 12
<p>Performance of optimized BILSTM using different feature sets.</p>
Full article ">
14 pages, 4759 KiB  
Article
Estimation Method of an Electrical Equivalent Circuit for Sonar Transducer Impedance Characteristic of Multiple Resonance
by Jejin Jang, Jaehyuk Choi, Donghun Lee and Hyungsoo Mok
Sensors 2023, 23(14), 6636; https://doi.org/10.3390/s23146636 - 24 Jul 2023
Cited by 3 | Viewed by 1623
Abstract
Improving the operational efficiency and optimizing the design of sound navigation and ranging (sonar) systems require accurate electrical equivalent models within the operating frequency range. The power conversion system within the sonar system increases power efficiency through impedance-matching circuits. Impedance matching is used [...] Read more.
Improving the operational efficiency and optimizing the design of sound navigation and ranging (sonar) systems require accurate electrical equivalent models within the operating frequency range. The power conversion system within the sonar system increases power efficiency through impedance-matching circuits. Impedance matching is used to enhance the power transmission efficiency of the sonar system, and this matching in turn requires an accurate equivalent circuit for the sonar transducer within the operating frequency range. In conventional equivalent circuit derivation methods, errors occur because they utilize the same number of RLC branches as there are resonant frequencies of the sonar transducer, based on its physical properties. Hence, this paper proposes an algorithm for deriving an equivalent circuit independent of resonance by employing multiple electrical components and particle swarm optimization (PSO). A comparative verification was also performed between the proposed and existing approaches using the Butterworth–van Dyke (BVD) model, which is a method for deriving electrical equivalent circuits. Full article
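The parameter-inversion idea, treating the equivalent-circuit element values as particle positions and letting PSO minimize the gap between modelled and measured impedance, can be sketched for a single Butterworth–van Dyke (BVD) branch as below. The component values, frequency grid, and PSO coefficients are placeholders, and the cost function is a simple magnitude error rather than the paper's criterion.

```python
import numpy as np

def bvd_impedance(f, r1, l1, c1, c0):
    """Impedance of a Butterworth-van Dyke cell: one series R-L-C motional
    branch in parallel with the static capacitance C0."""
    w = 2 * np.pi * f
    z_m = r1 + 1j * w * l1 + 1.0 / (1j * w * c1)
    z_0 = 1.0 / (1j * w * c0)
    return z_m * z_0 / (z_m + z_0)

def pso_fit(f, z_meas, bounds, n=40, iters=300, inertia=0.7, cp=1.5, cg=1.5):
    """Basic particle swarm search over circuit parameters, minimising the
    mean impedance-magnitude error against the measured curve."""
    lo, hi = np.array(bounds, float).T
    x = lo + np.random.rand(n, lo.size) * (hi - lo)
    v = np.zeros_like(x)
    cost = lambda p: np.mean(np.abs(np.abs(bvd_impedance(f, *p)) - np.abs(z_meas)))
    pbest, pcost = x.copy(), np.array([cost(p) for p in x])
    for _ in range(iters):
        g = pbest[pcost.argmin()]
        v = (inertia * v + cp * np.random.rand(n, 1) * (pbest - x)
             + cg * np.random.rand(n, 1) * (g - x))
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
    return pbest[pcost.argmin()], pcost.min()

# synthetic "measurement" from assumed true values, then inversion
np.random.seed(0)
f = np.linspace(20e3, 60e3, 400)
true = (50.0, 30e-3, 0.5e-9, 2e-9)                  # R1, L1, C1, C0 (placeholders)
z_meas = bvd_impedance(f, *true)
bounds = [(1, 200), (1e-3, 100e-3), (0.1e-9, 5e-9), (0.5e-9, 10e-9)]
params, err = pso_fit(f, z_meas, bounds)
print(np.round(params, 12), err)
```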
(This article belongs to the Special Issue Acoustic Sensors and Their Applications)
Show Figures

Figure 1

Figure 1
<p>Sonar transducer power system.</p>
Full article ">Figure 2
<p>BVD equivalent circuit.</p>
Full article ">Figure 3
<p>Proposed equivalent circuits, high-degree model.</p>
Full article ">Figure 4
<p>Conventional procedure for parameter estimation using PSO.</p>
Full article ">Figure 5
<p>Proposed procedure of PSO algorithm for parameter estimation of high-degree electrical equivalent circuit.</p>
Full article ">Figure 6
<p>Results of deriving the equivalent circuit of a single resonant sensor using the conventional method: (<b>a</b>) conventional equivalent circuit; and (<b>b</b>) electrical impedance characteristics.</p>
Full article ">Figure 7
<p>Results of deriving the equivalent circuit of a single resonant sensor using the proposed method: (<b>a</b>) high-degree equivalent circuit; (<b>b</b>) equivalent circuit after sorting unnecessary elements; (<b>c</b>) electrical impedance characteristics.</p>
Full article ">Figure 8
<p>Results of deriving the equivalent circuit of the dual resonance sensor using the conventional method: (<b>a</b>) equivalent circuit; and (<b>b</b>) electrical equivalent characteristics.</p>
Full article ">Figure 9
<p>Results of deriving the equivalent circuit of a single resonant sensor using the proposed method: (<b>a</b>) high-degree equivalent circuit; (<b>b</b>) equivalent circuit after sorting unnecessary elements; and (<b>c</b>) electrical impedance characteristics.</p>
Full article ">Figure 10
<p>Results of deriving the equivalent circuit of the multiple resonance sensor using the conventional method: (<b>a</b>) equivalent circuit; and (<b>b</b>) electrical equivalent characteristics.</p>
Full article ">Figure 11
<p>Results of deriving the equivalent circuit of the multiple resonant sensor using the proposed method: (<b>a</b>) high-degree equivalent circuit; (<b>b</b>) equivalent circuit after sorting unnecessary elements; and (<b>c</b>) electrical impedance characteristics.</p>
Full article ">
26 pages, 1198 KiB  
Review
Advancements in Forest Fire Prevention: A Comprehensive Survey
by Francesco Carta, Chiara Zidda, Martina Putzu, Daniele Loru, Matteo Anedda and Daniele Giusto
Sensors 2023, 23(14), 6635; https://doi.org/10.3390/s23146635 - 24 Jul 2023
Cited by 24 | Viewed by 15403
Abstract
Nowadays, the challenges related to technological and environmental development are becoming increasingly complex. Among the environmentally significant issues, wildfires pose a serious threat to the global ecosystem. The damages inflicted upon forests are manifold, leading not only to the destruction of terrestrial ecosystems [...] Read more.
Nowadays, the challenges related to technological and environmental development are becoming increasingly complex. Among the environmentally significant issues, wildfires pose a serious threat to the global ecosystem. The damages inflicted upon forests are manifold, leading not only to the destruction of terrestrial ecosystems but also to climate changes. Consequently, reducing their impact on both people and nature requires the adoption of effective approaches for prevention, early warning, and well-coordinated interventions. This document presents an analysis of the evolution of various technologies used in the detection, monitoring, and prevention of forest fires from past years to the present. It highlights the strengths, limitations, and future developments in this field. Forest fires have emerged as a critical environmental concern due to their devastating effects on ecosystems and the potential repercussions on the climate. Understanding the evolution of technology in addressing this issue is essential to formulate more effective strategies for mitigating and preventing wildfires. Full article
(This article belongs to the Special Issue Feature Papers in the 'Sensor Networks' Section 2023)
Show Figures

Figure 1

Figure 1
<p>An overview of the main techniques for fire monitoring and detection.</p>
Full article ">Figure 2
<p>Different topologies of wireless sensor network architectures. (<b>a</b>) Topology based on dense distribution of environmental sensors; (<b>b</b>) Topology based on dense distribution of wireless sensors in addition to wireless cameras; (<b>c</b>) Classifier based on the combination of different inputs; (<b>d</b>) Data classifier based on sensor values and alert management.</p>
Full article ">Figure 3
<p>Layout for the distribution of sensor nodes. (<b>a</b>) Typical square layout with 4 sensor nodes per cluster; (<b>b</b>) Sensor distribution for coverage area.</p>
Full article ">Figure 4
<p>IR Camera data from UAV.</p>
Full article ">
24 pages, 9446 KiB  
Review
Advancements in Triboelectric Nanogenerators (TENGs) for Intelligent Transportation Infrastructure: Enhancing Bridges, Highways, and Tunnels
by Arash Rayegani, Ali Matin Nazar and Maria Rashidi
Sensors 2023, 23(14), 6634; https://doi.org/10.3390/s23146634 - 24 Jul 2023
Cited by 5 | Viewed by 3297
Abstract
The development of triboelectric nanogenerators (TENGs) over time has resulted in considerable improvements to the efficiency, effectiveness, and sensitivity of self-powered sensing. Triboelectric nanogenerators have low restriction and high sensitivity while also having high efficiency. The vast majority of previous research has found [...] Read more.
The development of triboelectric nanogenerators (TENGs) over time has resulted in considerable improvements to the efficiency, effectiveness, and sensitivity of self-powered sensing. Triboelectric nanogenerators have low restriction and high sensitivity while also having high efficiency. The vast majority of previous research has found that accidents on the road can be attributed to road conditions. For instance, extreme weather conditions, such as heavy winds or rain, can reduce the safety of the roads, while excessive temperatures might make it unpleasant to be behind the wheel. Air pollution also has a negative impact on visibility while driving. As a result, sensing road surroundings is the most important technical system that is used to evaluate a vehicle and make decisions. This paper discusses both monitoring driving behavior and self-powered sensors influenced by triboelectric nanogenerators (TENGs). It also considers energy harvesting and sustainability in smart road environments such as bridges, tunnels, and highways. Furthermore, the information gathered in this study can help readers enhance their knowledge concerning the advantages of employing these technologies for innovative uses of their powers. Full article
(This article belongs to the Special Issue Advanced Sensing Technology for Intelligent Transportation Systems)
Show Figures

Figure 1

Figure 1
<p>Triboelectric nanogenerator modes: (<b>a</b>–<b>d</b>) contact separation mode, lateral sliding mode, single electrode mode, and free standing triboelectric layer mode [<a href="#B78-sensors-23-06634" class="html-bibr">78</a>]. (<b>e</b>) The number of TENG research articles published each year. (<b>f</b>) The number of citations of TENG research articles each year [<a href="#B79-sensors-23-06634" class="html-bibr">79</a>].</p>
Full article ">Figure 2
<p>Demonstration of self-powered sensor based on a TENG for bridges: (<b>a</b>) Application of MCL-TENG as self-powered sensor [<a href="#B20-sensors-23-06634" class="html-bibr">20</a>]. (<b>b</b>) The design principle of DFIB-TENG [<a href="#B107-sensors-23-06634" class="html-bibr">107</a>]. (<b>c</b>) An AC/DC-TENG’s operational mechanism [<a href="#B108-sensors-23-06634" class="html-bibr">108</a>].</p>
Full article ">Figure 3
<p>Demonstration of self-powered sensors based on a TENG for tunnels: (<b>a</b>) Design of a framework for a wireless system that is self-powered and measures traffic volume [<a href="#B109-sensors-23-06634" class="html-bibr">109</a>]. (<b>b</b>) Fabrication and design of a flexible TENG tree [<a href="#B110-sensors-23-06634" class="html-bibr">110</a>].</p>
Full article ">Figure 4
<p>Demonstration of self-powered sensor based on a TENG for highways: (<b>a</b>) Fabrication and structural design of M-TENG [<a href="#B111-sensors-23-06634" class="html-bibr">111</a>]. (<b>b</b>) Fabrication and various scenarios of the dual-mode TENG [<a href="#B112-sensors-23-06634" class="html-bibr">112</a>].</p>
Full article ">Figure 5
<p>Demonstration of self-powered sensor based on a TENG for highways: (<b>a</b>) Intelligent traffic control system hybrid NG illustration [<a href="#B61-sensors-23-06634" class="html-bibr">61</a>]. (<b>b</b>) FSS-TENG structure and design principles [<a href="#B113-sensors-23-06634" class="html-bibr">113</a>].</p>
Full article ">Figure 6
<p>Demonstration of smart infrastructure to harvest energy from roads: (<b>a</b>) Fabrication, structural design, and application of OD-HNG [<a href="#B114-sensors-23-06634" class="html-bibr">114</a>]. (<b>b</b>) Overspeed wake-up alarm system powered by triboelectric nanogenerators (SOWAS) [<a href="#B44-sensors-23-06634" class="html-bibr">44</a>].</p>
Full article ">Figure 7
<p>Demonstration of smart infrastructure to harvest energy from roads: (<b>a</b>) Application and design principal of OT-TENG [<a href="#B115-sensors-23-06634" class="html-bibr">115</a>]. (<b>b</b>) Structure and various scenarios of ml-TENG [<a href="#B116-sensors-23-06634" class="html-bibr">116</a>].</p>
Full article ">Figure 8
<p>Demonstration of self-powered vehicle sensors based on TENG for road intelligent systems: (<b>a</b>) Design principles of SPHVS sensor [<a href="#B117-sensors-23-06634" class="html-bibr">117</a>]. (<b>b</b>) Fabrication and design principal of V-TENG [<a href="#B118-sensors-23-06634" class="html-bibr">118</a>]. (<b>c</b>) The structural design and functioning mechanism of a BS-TENG [<a href="#B119-sensors-23-06634" class="html-bibr">119</a>].</p>
Full article ">Figure 9
<p>Demonstration of smart pedestrian crossing system based on triboelectric nanogenerators: (<b>a</b>) Application, principal, and fabrication of FR-TENGs [<a href="#B120-sensors-23-06634" class="html-bibr">120</a>]. (<b>b</b>) The working mechanism of cement-based TENG [<a href="#B121-sensors-23-06634" class="html-bibr">121</a>]. (<b>c</b>) Design principle and application of CBO-TENG [<a href="#B122-sensors-23-06634" class="html-bibr">122</a>]. (<b>d</b>) PBT’s illustration and mechanism of operation [<a href="#B123-sensors-23-06634" class="html-bibr">123</a>].</p>
Full article ">Figure 10
<p>Demonstration of self-powered sensors based on a TENG for monitoring driving behaviors: (<b>a</b>) Application and design principal of (SSAS) [<a href="#B124-sensors-23-06634" class="html-bibr">124</a>]. (<b>b</b>) The design principle of APU-TENG and AS-TENG [<a href="#B106-sensors-23-06634" class="html-bibr">106</a>].</p>
Full article ">Figure 11
<p>Demonstration of self-powered sensors based on a TENG for monitoring driving behaviors: (<b>a</b>) Application and design principal of an ST-TENG [<a href="#B125-sensors-23-06634" class="html-bibr">125</a>]. (<b>b</b>) Fabrication and application of DT-TENG [<a href="#B79-sensors-23-06634" class="html-bibr">79</a>].</p>
Full article ">Figure 12
<p>Demonstration of Challenges, perspective, and insight for self-powered sensors for intelligent road environments.</p>
Full article ">
16 pages, 8356 KiB  
Article
Detection of Sensor Faults with or without Disturbance Using Analytical Redundancy Methods: An Application to Orifice Flowmeter
by Vemulapalli Sravani and Santhosh Krishnan Venkata
Sensors 2023, 23(14), 6633; https://doi.org/10.3390/s23146633 - 24 Jul 2023
Viewed by 1752
Abstract
Sensors and transducers play a vital role in the productivity of any industry. A sensor that is frequently used in industries to monitor flow is an orifice flowmeter. In certain instances, faults can occur in the flowmeter, hindering the operation of other dependent [...] Read more.
Sensors and transducers play a vital role in the productivity of any industry. A sensor that is frequently used in industries to monitor flow is an orifice flowmeter. In certain instances, faults can occur in the flowmeter, hindering the operation of other dependent systems. Hence, the present study determines the occurrence of faults in the flowmeter with a model-based approach. To do this, the model of the system is developed from the transient data obtained from computational fluid dynamics. This second-order transfer function is further used for the development of linear-parameter-varying observers, which generates the residue for fault detection. With or without disturbance, the suggested method is capable of effectively isolating drift, open-circuit, and short-circuit defects in the orifice flowmeter. The outcomes of the LPV observer are compared with those of a neural network. The open- and short-circuit faults are traced within 1 s, whereas the minimum time duration for the detection of a drift fault is 5.2 s and the maximum time is 20 s for different combinations of threshold and slope. Full article
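The model-based residual principle, comparing the measured flowmeter output with the output of an observer driven by the same input and flagging a fault when the residual crosses a threshold, can be sketched for a discrete-time second-order model as follows. The state-space matrices, observer gain, noise level, and drift-fault profile are illustrative assumptions, not the identified model or LPV observer from the paper.

```python
import numpy as np

np.random.seed(2)

# assumed discrete-time second-order plant (not the paper's identified model)
A = np.array([[1.6, -0.64], [1.0, 0.0]])     # poles at z = 0.8 (stable)
B = np.array([[1.0], [0.0]])
C = np.array([[0.02, 0.02]])
L = np.array([[0.8], [0.4]])                 # observer gain (placeholder, A - L*C stable)

def run(n=400, fault_start=200, drift=0.002, noise=0.001):
    x = np.zeros((2, 1))                     # plant state
    xh = np.zeros((2, 1))                    # observer state
    residual = []
    for k in range(n):
        u = 1.0                                           # step input (flow set-point)
        y = (C @ x).item() + noise * np.random.randn()    # measured output
        if k >= fault_start:                              # additive sensor drift fault
            y += drift * (k - fault_start)
        r = y - (C @ xh).item()                           # residual = measurement - estimate
        residual.append(r)
        x = A @ x + B * u
        xh = A @ xh + B * u + L * r                       # Luenberger correction term
    return np.array(residual)

res = run()
threshold = 5 * np.std(res[:150])            # threshold learned from the fault-free segment
alarm = int(np.argmax(np.abs(res) > threshold))
print("fault flagged at sample", alarm)
```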
(This article belongs to the Section Fault Diagnosis & Sensors)
Show Figures

Figure 1

Figure 1
<p>Framework for fault diagnosis.</p>
Full article ">Figure 2
<p>FDI techniques for residual generation.</p>
Full article ">Figure 3
<p>Luenberger observer with fault.</p>
Full article ">Figure 4
<p>Cross-sectional view of the orifice flowmeter with a manometer with the pressure profile along the x-axis.</p>
Full article ">Figure 5
<p>Experimental flow station.</p>
Full article ">Figure 6
<p>Flowchart for CFD analysis.</p>
Full article ">Figure 7
<p>Experimental vs. simulation of C<sub>d</sub>, β = 0.42.</p>
Full article ">Figure 8
<p>The transient response for a beta ratio of 0.42.</p>
Full article ">Figure 9
<p>Experimental vs. model output.</p>
Full article ">Figure 10
<p>LPV-estimator-based FDI for the orifice flowmeter.</p>
Full article ">Figure 11
<p>Fault-free output from the LPV estimator.</p>
Full article ">Figure 12
<p>Histogram of RMS values for the fault-free case.</p>
Full article ">Figure 13
<p>Isolation of three different faults.</p>
Full article ">Figure 14
<p>FDI: drift fault: Slope = −20%.</p>
Full article ">Figure 15
<p>FDI using NN: open-circuit fault.</p>
Full article ">Figure 16
<p>Response of the estimator for the case of disturbance and no fault.</p>
Full article ">Figure 17
<p>Response of the estimator for disturbance + drift fault (slope 20%).</p>
Full article ">
16 pages, 12947 KiB  
Communication
Integrated Fiber Ring Laser Temperature Sensor Based on Vernier Effect with Lyot–Sagnac Interferometer
by Yuhui Liu, Weihao Lin, Jie Hu, Fang Zhao, Feihong Yu, Shuaiqi Liu, Jinna Chen, Huanhuan Liu, Perry Ping Shum and Xuming Zhang
Sensors 2023, 23(14), 6632; https://doi.org/10.3390/s23146632 - 24 Jul 2023
Cited by 3 | Viewed by 1538
Abstract
The Vernier effect created using an incorporated Lyot–Sagnac loop is used to create an ultra-high sensitivity temperature sensor based on a ring laser cavity. Unlike standard double Sagnac loop systems, the proposed sensor is fused into a single Sagnac loop by adjusting the [...] Read more.
The Vernier effect created using an incorporated Lyot–Sagnac loop is used to create an ultra-high sensitivity temperature sensor based on a ring laser cavity. Unlike standard double Sagnac loop systems, the proposed sensor is fused into a single Sagnac loop by adjusting the welding angle between two polarization-maintaining fibers (PMFs) to achieve effective temperature sensitivity amplification. The PMFs are separated into two arms of 0.8 m and 1 m in length, with a 45° angle difference between the fast axes. The sensor’s performance is examined both theoretically and experimentally. The experimental results reveal that the Vernier amplification effect can be achieved via PMF rotating shaft welding. The temperature sensitivity in the laser cavity can reach 2.391 nm/°C, which is increased by a factor of more than eight times compared with a single Sagnac loop structure (0.298 nm/°C) with a length of 0.8 m without the Vernier effect at temperatures ranging from 20 °C to 30 °C. Furthermore, unlike traditional optical fiber sensing that uses a broadband light source (BBS) for detection, which causes issues such as low signal-to-noise ratio and broad bandwidth, the Sagnac loop can be employed as a filter by inserting itself into the fiber ring laser (FRL) cavity. When the external parameters change, the laser is offset by the interference general modulation, allowing the external temperature to be monitored. The superior performance of signal-to-noise ratios of up to 50 dB and bandwidths of less than 0.2 nm is achieved. The proposed sensor has a simple structure and high sensitivity and is expected to play a role in biological cell activity monitoring. Full article
(This article belongs to the Special Issue Developments and Applications of Optical Fiber Sensors)
Show Figures

Figure 1

Figure 1
<p>The cross-sectional diagram of the polarization-maintaining fiber is shown on the left, and the fiber splice joint diagram on the right.</p>
Full article ">Figure 2
<p>Schematic diagram of the experimental setup for the temperature detection system in BBS.</p>
Full article ">Figure 3
<p>Schematic diagram of the experimental setup for the temperature detection system in the FRL system.</p>
Full article ">Figure 4
<p>The output interference spectrum changes with temperature under BBS in the traditional Sagnac loop.</p>
Full article ">Figure 5
<p>Linear regression equation of traditional Sagnac loop under temperature change in BBS at the temperature range from 20 °C to 30 °C.</p>
Full article ">Figure 6
<p>The correlation between the Lyot–Sagnac loop interference spectrum and its envelope.</p>
Full article ">Figure 7
<p>Broadband spectrum shift with temperature in the Lyot–Sagnac loop [blue: 20 °C, red: 22 °C, green: 24 °C].</p>
Full article ">Figure 8
<p>The output spectrum changes with temperature under BBS in the Lyot–Sagnac loop from 20 °C to 30 °C.</p>
Full article ">Figure 9
<p>Linear regression equation of the Lyot–Sagnac loop under temperature change from 20 °C to 30 °C in BBS.</p>
Full article ">Figure 10
<p>The output spectrum of laser and generated vernier envelope.</p>
Full article ">Figure 11
<p>The output laser spectrum of traditional Sagnac loop structure at the temperature range of 20–30 °C in the FRL system.</p>
Full article ">Figure 12
<p>Linear regression equation of traditional Sagnac loop with a temperature change from 20 °C to 30 °C in the FRL system.</p>
Full article ">Figure 13
<p>Nonlinear regression equation of the traditional Sagnac loop from 20 °C to 30 °C in the FRL system.</p>
Full article ">Figure 14
<p>The Lyot–Sagnac loop structure output laser spectrum at the temperature range of 20–30 °C in the FRL system.</p>
Full article ">Figure 15
<p>Linear regression equation of the Lyot–Sagnac loop with temperatures from 20 °C to 30 °C in the FRL system.</p>
Full article ">Figure 16
<p>Stability of the Lyot–Sagnac loop as a temperature sensor in the FRL system over a 2 h monitoring period (temperature held at 20 °C).</p>
Full article ">Figure 17
<p>Stability of the Lyot–Sagnac loop as a temperature sensor in the FRL system over a 2 h monitoring period (temperature held at 30 °C).</p>
Full article ">
15 pages, 1187 KiB  
Article
Packets-to-Prediction: An Unobtrusive Mechanism for Identifying Coarse-Grained Sleep Patterns with WiFi MAC Layer Traffic
by Dheryta Jaisinghani and Nishtha Phutela
Sensors 2023, 23(14), 6631; https://doi.org/10.3390/s23146631 - 24 Jul 2023
Cited by 2 | Viewed by 1536
Abstract
A good night’s sleep is of the utmost importance for the seamless execution of our cognitive capabilities. Unfortunately, the research shows that one-third of the US adult population is severely sleep deprived. With college students as our focused group, we devised a contactless, [...] Read more.
A good night’s sleep is of the utmost importance for the seamless execution of our cognitive capabilities. Unfortunately, the research shows that one-third of the US adult population is severely sleep deprived. With college students as our focused group, we devised a contactless, unobtrusive mechanism to detect sleep patterns, which, contrary to existing sensor-based solutions, does not require the subject to wear any sensors on the body or buy expensive sleep sensing equipment. We named this mechanism Packets-to-Predictions (P2P) because we leverage the WiFi MAC layer traffic collected in the home and university environments to predict “sleep” and “awake” periods. We first manually established that extracting such patterns is feasible, and then, we trained various machine learning models to identify these patterns automatically. We trained six machine learning models: K nearest neighbors, logistic regression, random forest classifier, support vector classifier, gradient boosting classifier, and multilayer perceptron. K nearest neighbors gave the best performance with 87% train accuracy and 83% test accuracy. Full article
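The classification stage can be sketched with scikit-learn: MAC-layer traffic is summarised into per-window features and a K-nearest-neighbours model predicts sleep versus awake. The feature names and the synthetic data below are placeholders standing in for the study's actual feature set.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

def synth_windows(n, awake):
    """Placeholder per-window features: [frame count, mean inter-frame gap (s),
    fraction of data frames]. Awake phones tend to generate more, denser traffic."""
    if awake:
        return np.column_stack([rng.poisson(400, n), rng.gamma(2, 0.05, n), rng.uniform(0.5, 0.9, n)])
    return np.column_stack([rng.poisson(60, n), rng.gamma(2, 0.4, n), rng.uniform(0.1, 0.5, n)])

X = np.vstack([synth_windows(500, True), synth_windows(500, False)])
y = np.array([1] * 500 + [0] * 500)            # 1 = awake, 0 = sleep

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)
print("test accuracy:", round(model.score(X_te, y_te), 3))
```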
(This article belongs to the Section Sensor Networks)
Show Figures

Figure 1

Figure 1
<p>Results of internal student sleep survey.</p>
Full article ">Figure 2
<p>Data Collection Setup.</p>
Full article ">Figure 3
<p>Types of WiFi Frames.</p>
Full article ">Figure 4
<p>Sleep and awake transition, along with the phone’s state.</p>
Full article ">Figure 5
<p>Manual “Sleep” and “Awake” period identification for normal and disturbed sleep for two nights.</p>
Full article ">Figure 6
<p>Training performance of machine learning models for all nights in the home and university environment.</p>
Full article ">Figure 7
<p>Testing performance of machine learning models for all nights in the home and university environment.</p>
Full article ">Figure 8
<p>ML model prediction of the sleep and awake periods in the real world.</p>
Full article ">
25 pages, 8646 KiB  
Article
Construction, Spectral Modeling, Parameter Inversion-Based Calibration, and Application of an Echelle Spectrometer
by Yuming Wang, Youshan Qu, Hui Zhao and Xuewu Fan
Sensors 2023, 23(14), 6630; https://doi.org/10.3390/s23146630 - 24 Jul 2023
Cited by 1 | Viewed by 1653
Abstract
We have developed a compact, asymmetric three-channel echelle spectrometer with remarkable high-spectral resolution capabilities. In order to achieve the desired spectral resolution, we initially establish a theoretical spectral model based on the two-dimensional coordinates of spot positions corresponding to each wavelength. Next, we [...] Read more.
We have developed a compact, asymmetric three-channel echelle spectrometer with remarkable high-spectral resolution capabilities. In order to achieve the desired spectral resolution, we initially establish a theoretical spectral model based on the two-dimensional coordinates of spot positions corresponding to each wavelength. Next, we present an innovative and refined method for precisely calibrating echelle spectrometers through parameter inversion. Our analysis delves into the complexities of the nonlinear two-dimensional echelle spectrogram. We employ a variety of optimization techniques, such as grid exploration, simulated annealing, genetic algorithms, and genetic simulated annealing (GSA) algorithms, to accurately invert spectrogram parameters. Our proposed GSA algorithm synergistically integrates the strengths of global and local searches, thereby enhancing calibration accuracy. Compared to the conventional grid exploration method, GSA reduces the error function by 22.8%, shortens the convergence time by a factor of 2.16, and improves the calibration accuracy by a factor of 7.05. Experimental validation involves calibrating a low-pressure mercury lamp, resulting in an average spectral accuracy error of 0.0257 nm after performing crucial parameter inversion. Furthermore, the echelle spectrometer undergoes a laser-induced breakdown spectroscopy experiment, demonstrating exceptional spectral resolution and sub-10 ns time-resolved capability. Overall, our research offers a comprehensive and efficient solution for constructing, modeling, calibrating, and applying echelle spectrometers, significantly enhancing calibration accuracy and efficiency. This work contributes to the advancement of spectrometry and opens up new possibilities for high-resolution spectral analysis across various research and industry domains. Full article
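One common way to combine the two searches, and a plausible reading of the GSA idea, is a genetic algorithm whose offspring must additionally pass a simulated-annealing acceptance test; the sketch below follows that pattern. The objective function, encoding, and hyper-parameters are placeholders, not the calibration's actual settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def gsa_minimize(cost, lo, hi, pop=30, gens=200, t0=1.0, cooling=0.97, sigma=0.05):
    """Genetic simulated annealing (GSA) sketch: tournament selection and
    arithmetic (blend) crossover propose children, and a Metropolis test at
    temperature T decides whether a mutated child replaces its better parent."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = lo + rng.random((pop, lo.size)) * (hi - lo)
    f = np.array([cost(p) for p in x])
    T = t0
    for _ in range(gens):
        for _ in range(pop):
            i, j = rng.choice(pop, 2, replace=False)
            a, b = (i, j) if f[i] < f[j] else (j, i)          # a = better parent
            child = x[a] + rng.random() * (x[b] - x[a])       # blend crossover
            child += rng.normal(0, sigma * (hi - lo))         # Gaussian mutation
            child = np.clip(child, lo, hi)
            fc = cost(child)
            # SA acceptance: always take improvements, sometimes accept worse ones
            if fc < f[a] or rng.random() < np.exp(-(fc - f[a]) / T):
                x[a], f[a] = child, fc
        T *= cooling
    k = f.argmin()
    return x[k], f[k]

# placeholder error function standing in for the spectral-position mismatch
cost = lambda p: np.sum((p - np.array([0.3, -1.2, 2.5])) ** 2)
best, err = gsa_minimize(cost, lo=[-5, -5, -5], hi=[5, 5, 5])
print(np.round(best, 3), round(err, 6))
```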
(This article belongs to the Special Issue Optical Sensing and Technologies)
Show Figures

Figure 1

Figure 1
<p>(<b>a</b>) 2D and (<b>b</b>) 3D schematic of the echelle grating.</p>
Full article ">Figure 2
<p>Schematic diagram of the principle of cross-dispersion. (<b>a</b>) Incident polychromatic light, (<b>b</b>) Light subjected to dispersion by a diffraction grating. (<b>c</b>) Light after undergoing cross-dispersion.</p>
Full article ">Figure 3
<p>Optical path through the designed spectrometer. 1: slit; 2: off-axis parabolic collimating mirror; 3, 6, and 10: folding mirror; 4 and 5: beam splitter; 7, 11, and 14: echelle grating; 8, 12, and 15: dispersive prism; 9, 13, and 16: camera.</p>
Full article ">Figure 4
<p>Theoretical Two-Dimensional Spectrum of the designed spectrometer. (<b>a</b>) Wavelength–diffraction angle–angular dispersion relationship of the echelle gratings. (<b>b</b>) Spectral model coordinates.</p>
Full article ">Figure 5
<p>Prototype of the Echelle spectrometer.</p>
Full article ">Figure 6
<p>(<b>a</b>) Raw image of a low-pressure mercury lamp with low gain. (<b>b</b>) Raw image of a low-pressure mercury lamp with high gain. (<b>c</b>) Raw image of a high-pressure mercury lamp.</p>
Full article ">Figure 7
<p>Flowchart of the SA algorithm.</p>
Full article ">Figure 8
<p>Crossover process in GA algorithms.</p>
Full article ">Figure 9
<p>Schematic of the mutation process in GA algorithms.</p>
Full article ">Figure 10
<p>Flowchart of the GA algorithm.</p>
Full article ">Figure 11
<p>Flowchart of the genetic simulated annealing algorithm.</p>
Full article ">Figure 12
<p>Relationship between the grating incidence angle deviation, deflection angle deviation, and different calibration error evaluation indicators. (<b>a</b>) Error 1: <math display="inline"><semantics><mrow><mi>Δ</mi><mi>x</mi></mrow></semantics></math>, (<b>b</b>) Error 2: <math display="inline"><semantics><mrow><mi>Δ</mi><mi>y</mi></mrow></semantics></math>, (<b>c</b>) Error 3:<math display="inline"><semantics><mrow><msqrt><mfrac><mrow><msup><mrow><mi>Δ</mi><mi>x</mi></mrow><mrow><mn>2</mn></mrow></msup><mo>+</mo><msup><mrow><mi>Δ</mi><mi>y</mi></mrow><mrow><mn>2</mn></mrow></msup></mrow><mrow><mn>2</mn></mrow></mfrac></msqrt></mrow></semantics></math>, (<b>d</b>) Error 4: <math display="inline"><semantics><mrow><msqrt><mfrac><mrow><msup><mrow><mi>a</mi><mo>(</mo><mi>Δ</mi><mi>x</mi><mo>)</mo></mrow><mrow><mn>2</mn></mrow></msup><mo>+</mo><msup><mrow><mi>b</mi><mo>(</mo><mi>Δ</mi><mi>y</mi><mo>)</mo></mrow><mrow><mn>2</mn></mrow></msup></mrow><mrow><mn>2</mn></mrow></mfrac></msqrt></mrow></semantics></math>, <math display="inline"><semantics><mrow><mi>Δ</mi><mi>x</mi></mrow></semantics></math> and <math display="inline"><semantics><mrow><mi>Δ</mi><mi>y</mi></mrow></semantics></math> respectively signify the disparities between the factual x and y coordinates of the wavelength on the detector and their theoretical counterparts.</p>
Full article ">Figure 13
<p>Optimization results of 4 different algorithms. (<b>a</b>) Heatmap of the error function values obtained by grid search, (<b>b</b>) graph of the number of iterations versus the evaluation function in the SA algorithm, (<b>c</b>) graph of the number of iterations versus the evaluation function in the GA algorithm, and (<b>d</b>) graph of the number of iterations versus the evaluation function in the GSA algorithm.</p>
Full article ">Figure 14
<p>Curves of the number of iterations and the optimal evaluation function values for SA, GA, and GSA.</p>
Full article ">Figure 15
<p>Calibration results of different spectral reduction algorithms of Channel 1.</p>
Full article ">Figure 16
<p>Calibration results of different algorithms for each characteristic wavelength of Channel 1.</p>
Full article ">Figure 17
<p>Error values of each characteristic wavelength for different algorithms.</p>
Full article ">Figure 18
<p>LIBS experimental diagram based on Echelle spectrometer.</p>
Full article ">Figure 19
<p>LIBS spectra and elemental characteristic peaks of (<b>a</b>) Brass, (<b>b</b>) Tourmaline, and (<b>c</b>) Hematite detected by echelle spectrometer after calibration using the GSA algorithm.</p>
Full article ">Figure 20
<p>Time-resolved LIBS spectra of brass samples captured by the echelle spectrometer.</p>
Full article ">
17 pages, 6106 KiB  
Article
Quantitative Visualization of Buried Defects in GFRP via Microwave Reflectometry
by Ruonan Wang, Yang Fang, Qianxiang Gao, Yong Li, Xihan Yang and Zhenmao Chen
Sensors 2023, 23(14), 6629; https://doi.org/10.3390/s23146629 - 24 Jul 2023
Viewed by 1320
Abstract
Glass fiber-reinforced polymer (GFRP) is widely used in engineering fields involving aerospace, energy, transportation, etc. If internal buried defects occur due to hostile environments during fabrication and practical service, the structural integrity and safety of GFRP structures would be severely undermined. Therefore, it [...] Read more.
Glass fiber-reinforced polymer (GFRP) is widely used in engineering fields involving aerospace, energy, transportation, etc. If internal buried defects occur due to hostile environments during fabrication and practical service, the structural integrity and safety of GFRP structures would be severely undermined. Therefore, it is indispensable to carry out effective quantitative nondestructive testing (NDT) of internal defects buried within GFRP structures. Along with the development of composite materials, microwave NDT is promising in non-intrusive inspection of defects in GFRPs. In this paper, quantitative screening of the subsurface impact damage and air void in a unidirectional GFRP via microwave reflectometry was intensively investigated. The influence of the microwave polarization direction with respect to the GFRP fiber direction on the reflection coefficient was investigated by using the equivalent relative permittivity calculated with theoretical analysis. Following this, a microwave NDT system was built up for further investigation regarding the imaging and quantitative evaluation of buried defects in GFRPs. A direct-wave suppression method based on singular-value decomposition was proposed to obtain high-quality defect images. The defect in-plane area was subsequently assessed via a proposed defect-edge identification method. The simulation and experimental results revealed that (1) the testing sensitivity to buried defects was the highest when the electric-field polarization direction is parallel to the GFRP fiber direction; and (2) the averaged evaluation accuracy regarding the in-plane area of the buried defect reached approximately 90% by applying the microwave reflectometry together with the proposed processing methods. Full article
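The SVD-based suppression amounts to zeroing the largest singular components of the scan matrix: the direct (surface) reflection is nearly identical at every scan position, so it concentrates in the leading singular vectors, while the localized defect echo survives. The synthetic B-scan and the choice of how many components to drop below are assumptions.

```python
import numpy as np

def suppress_direct_wave(scan, n_drop=1):
    """Remove the n_drop largest singular components of a scan matrix
    (rows: scan positions, columns: time/frequency samples). The direct
    wave is almost identical at every position, so it concentrates in the
    leading singular vectors; dropping them leaves the defect response."""
    U, s, Vt = np.linalg.svd(scan, full_matrices=False)
    s = s.copy()
    s[:n_drop] = 0.0
    return (U * s) @ Vt

# synthetic example: a common direct wave plus a weak, localized defect echo
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)
direct = np.sin(2 * np.pi * 12 * t)                       # same at all 64 positions
scan = np.tile(direct, (64, 1)) + 0.01 * rng.standard_normal((64, 256))
scan[28:36] += 0.2 * np.exp(-((t - 0.6) / 0.02) ** 2)     # buried-defect signature
clean = suppress_direct_wave(scan)
print(np.abs(scan).max(), np.abs(clean).max())            # defect now dominates the image
```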
(This article belongs to the Special Issue Electromagnetic Non-destructive Testing and Evaluation)
Show Figures

Figure 1

Figure 1
<p>Electromagnetic wave propagation for microwave detection of buried defects: (<b>a</b>) electromagnetic wave propagation for the scenario with the back surface material loss; (<b>b</b>) electromagnetic wave propagation for the case with the internal hole.</p>
Full article ">Figure 2
<p>Two wave-polarization angles are involved in simulations: (<b>a</b>) the electric field direction parallel to the fiber direction; and (<b>b</b>) the electric field direction orthogonal to the fiber direction.</p>
Full article ">Figure 3
<p>Schematic illustration of the simulation model regarding the microwave testing of a GFRP slab subject to buried defects.</p>
Full article ">Figure 4
<p>Simulated signals: (<b>a</b>) parallel; (<b>b</b>) orthogonal.</p>
Full article ">Figure 5
<p>Schematic illustration of the experimental system: (<b>a</b>) the diagram; (<b>b</b>) the practical picture.</p>
Full article ">Figure 6
<p>The testing specimen: (<b>a</b>) the specimen picture; (<b>b</b>) the schematic illustration of the specimen.</p>
Full article ">Figure 7
<p>The testing signal for the flawless region and the center of the back surface material loss: (<b>a</b>) frequency–domain signals with the parallel case; (<b>b</b>) time–domain signals with the parallel case; (<b>c</b>) frequency–domain signals with the orthogonal case; (<b>d</b>) time–domain signals with the orthogonal case.</p>
Full article ">Figure 8
<p>Defect images with the (<b>a</b>) parallel case and (<b>b</b>) orthogonal case.</p>
Full article ">Figure 9
<p>The direct wave suppression method based on SVD.</p>
Full article ">Figure 10
<p>The testing signal after direct wave suppression: (<b>a</b>) frequency–domain signals with the parallel case; (<b>b</b>) time–domain signals with the parallel case; (<b>c</b>) frequency–domain signals with the orthogonal case; (<b>d</b>) time–domain signals with the orthogonal case.</p>
Figure 10 Cont.">
Full article ">Figure 11
<p>The reproduced defect images with the (<b>a</b>) parallel case and (<b>b</b>) orthogonal case.</p>
Full article ">Figure 12
<p>The proposed defect edge identification method: (<b>a</b>) image interpolation with the parallel case; (<b>b</b>) feature enhancement with the parallel case; (<b>c</b>) defect edge recognition with the parallel case; (<b>d</b>) image interpolation with the orthogonal case; (<b>e</b>) feature enhancement with the orthogonal case; (<b>f</b>) defect edge recognition with the orthogonal case.</p>
Full article ">Figure 13
<p>Assessment results of the in-plane areas of buried defects in GFRP.</p>
Full article ">
17 pages, 3190 KiB  
Article
Novel Multi-Parametric Sensor System for Comprehensive Multi-Wavelength Photoplethysmography Characterization
by Joan Lambert Cause, Ángel Solé Morillo, Bruno da Silva, Juan C. García-Naranjo and Johan Stiens
Sensors 2023, 23(14), 6628; https://doi.org/10.3390/s23146628 - 24 Jul 2023
Cited by 5 | Viewed by 1893
Abstract
Photoplethysmography (PPG) is widely used to assess cardiovascular health. However, its usage and standardization are limited by the impact of variable contact force and temperature, which influence the accuracy and reliability of the measurements. Although some studies have evaluated the impact of these [...] Read more.
Photoplethysmography (PPG) is widely used to assess cardiovascular health. However, its usage and standardization are limited by the impact of variable contact force and temperature, which influence the accuracy and reliability of the measurements. Although some studies have evaluated the impact of these phenomena on signal amplitude, there is still a lack of knowledge about how these perturbations can distort the signal morphology, especially for multi-wavelength PPG (MW-PPG) measurements. This work presents a modular multi-parametric sensor system that integrates continuous and real-time acquisition of MW-PPG, contact force, and temperature signals. The implemented design solution allows for a comprehensive characterization of the effects of the variations in these phenomena on the contour of the MW-PPG signal. Furthermore, a dynamic DC cancellation circuitry was implemented to improve measurement resolution and obtain high-quality raw multi-parametric data. The accuracy of the MW-PPG signal acquisition was assessed using a synthesized reference PPG optical signal. The performance of the contact force and temperature sensors was evaluated as well. To determine the overall quality of the multi-parametric measurement, an in vivo measurement on the index finger of a volunteer was performed. The results indicate a high precision and accuracy in the measurements, wherein the capacity of the system to obtain high-resolution and low-distortion MW-PPG signals is highlighted. These findings will contribute to developing new signal-processing approaches, advancing the accuracy and robustness of PPG-based systems, and bridging existing gaps in the literature. Full article
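One generic way to think about dynamic DC cancellation in software terms is to track a slow baseline and re-centre whenever the residual leaves a usable window, which mimics what an offset-subtraction loop does in hardware. The sketch below is only that generic analogue; the filter constant, window, and toy signal are assumptions, not the authors' circuitry.

```python
import numpy as np

def dynamic_dc_cancel(samples, alpha=0.01, window=0.5):
    """Track a slowly varying DC estimate with an exponential moving average
    and subtract it; if the residual leaves +/- window, snap the baseline to
    the current sample (a software analogue of re-centring the offset loop)."""
    baseline = samples[0]
    ac = np.empty_like(samples, dtype=float)
    for k, s in enumerate(samples):
        baseline += alpha * (s - baseline)          # slow DC tracking
        if abs(s - baseline) > window:              # out of range: re-centre
            baseline = s
        ac[k] = s - baseline
    return ac

# toy PPG-like signal: large drifting DC level plus a small pulsatile component
t = np.linspace(0, 10, 2000)
raw = 2.0 + 0.8 * np.sin(2 * np.pi * 0.05 * t) + 0.02 * np.sin(2 * np.pi * 1.2 * t)
ac = dynamic_dc_cancel(raw)
print(round(np.ptp(raw), 3), round(np.ptp(ac), 3))  # AC span is much smaller than the raw span
```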
(This article belongs to the Section Biomedical Sensors)
Show Figures

Figure 1: Functional block diagram of the proposed multi-parametric sensor system. The system comprises four key components: Optical Front-End (OFE), Analog Front-End (AFE), Auxiliary Sensors (AUX), and Digital Signal Processing (DSP).
Figure 2: Experimental setup (left) and spatial distribution of OFE optical elements (right).
Figure 3: Internal details of the experimental setup for measuring multi-parametric signals in vivo at the fingertips. The plastic casing that contains the PSoC and the rest of the circuitry is not shown.
Figure 4: Electrical schematic of the multi-parametric sensor system. Within the dashed lines are the hardware solutions implemented inside the PSoC. The functional blocks are highlighted in different colors.
Figure 5: Internal details of the PSoC circuit: AFE section for the conditioning and acquisition of the optical signals and the force sensor.
Figure 6: Evaluation of the multi-parametric sensor system to capture reference signals that exhibit different levels of PI. The reference signal with the minimum PI (0.2%) is displayed on the left side, while the reference signal with the maximum PI (2.0%) is shown on the right side. The corresponding SNR values for each wavelength are in the upper part.
Figure 7: Evaluation of the AFE performance at 525 nm wavelength. (A) Reference signal; (B) acquired signal (grey) with 16-bit resolution and 2 ksps sampling frequency; and (C) acquired signal (grey) with 20-bit resolution and 100 sps sampling frequency. The red line represents the low-pass filtered signal for (A,B). The Pearson correlation coefficients are also shown.
Figure 8: (a) Details of the operation of the DDCC circuit while a MW-PPG signal is acquired in real time. The circuit is designed to keep the signals at a pre-configured dynamic voltage range. The point at which the correction circuit is activated to return the 940 nm PPG signals to the predetermined level is signaled. (b) MW-PPG signal after being reconstructed and high-pass filtered (Fc = 0.2 Hz).
Figure 9: Evolution of the 940 nm PPG signal (blue waveform) compared to CF variations (orange waveform) across seven incremental CF levels.
Figure 10: MW-PPG AC components signal contour and amplitude for CF levels of (A) 0.33, (B) 0.61, (C) 0.85, and (D) 0.96 N demonstrate the effect of CF on the signal.
Figure 11: Waveform of the tonometric signal represented by the solid line. For comparison purposes, the AC component of the 940 nm PPG signal is also shown (dotted lines).
Figure 12: MW-PPG AC components signal contour and amplitude differences comparison for temperature levels of 24.7 and 34.2 °C.
Figure 13: The performance of the designed system was evaluated through in vivo measurements in real time, wherein multiple parameters were recorded simultaneously. The figure shows the AC component of the MW-PPG signal at wavelengths of 470 nm, 525 nm, 590 nm, 631 nm, and 940 nm, as well as the temperature readings, absolute CF, and tonometric signal.
12 pages, 3413 KiB  
Article
Can Wearable Sensors Provide Accurate and Reliable 3D Tibiofemoral Angle Estimates during Dynamic Actions?
by Mirel Ajdaroski and Amanda Esquivel
Sensors 2023, 23(14), 6627; https://doi.org/10.3390/s23146627 - 24 Jul 2023
Viewed by 1475
Abstract
The ability to accurately measure tibiofemoral angles during various dynamic activities is of clinical interest. The purpose of this study was to determine if inertial measurement units (IMUs) can provide accurate and reliable angle estimates during dynamic actions. A tuned quaternion conversion (TQC) method tuned to dynamic actions was used to calculate Euler angles based on IMU data, and these calculated angles were compared to a motion capture system (our “gold” standard) and a commercially available sensor fusion algorithm. Nine healthy athletes were instrumented with APDM Opal IMUs and asked to perform nine dynamic actions; five participants were used in training the parameters of the TQC method, with the remaining four being used to test validity. Accuracy was based on the root mean square error (RMSE) and reliability was based on the Bland–Altman limits of agreement (LoA). Improvement across all three orthogonal angles was observed as the TQC method was able to more accurately (lower RMSE) and more reliably (smaller LoA) estimate an angle than the commercially available algorithm. No significant difference was observed between the TQC method and the motion capture system in any of the three angles (p < 0.05). It may be feasible to use this method to track tibiofemoral angles with higher accuracy and reliability than the commercially available sensor fusion algorithm. Full article
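For readers unfamiliar with the underlying computation, the sketch below shows the generic step of forming the relative shank-to-thigh orientation from two IMU quaternions, converting it to Euler angles, and scoring the result with RMSE and Bland–Altman limits of agreement. It is not the published TQC method: the axis sequence, the quaternion convention, and the absence of any dynamic tuning are assumptions made only for illustration.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def tibiofemoral_angles(q_thigh, q_shank):
    """Relative shank-with-respect-to-thigh orientation expressed as Euler angles
    (flexion, abduction, rotation). q_* are (N, 4) unit quaternions in scipy's
    (x, y, z, w) order. The XYZ axis sequence is an illustrative assumption.
    """
    rel = R.from_quat(q_thigh).inv() * R.from_quat(q_shank)
    return rel.as_euler("XYZ", degrees=True)

def rmse(est, ref):
    """Per-angle root mean square error between estimates and a reference."""
    return np.sqrt(np.mean((est - ref) ** 2, axis=0))

def bland_altman_loa(est, ref):
    """Per-angle Bland-Altman 95% limits of agreement (lower, upper)."""
    d = est - ref
    return d.mean(axis=0) - 1.96 * d.std(axis=0), d.mean(axis=0) + 1.96 * d.std(axis=0)

# Example: identical segment orientations give zero joint angles in every frame.
q = np.tile([0.0, 0.0, 0.0, 1.0], (5, 1))
print(tibiofemoral_angles(q, q))
```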
Show Figures

Figure 1: Marker and IMU locations.
Figure 2: Box-and-whisker plots comparing the values of the motion capture system (MCS), the sensor fusion algorithm of the IMU (IMU), and the tuned quaternion conversion method (TQC) for flexion (top), abduction (middle), and rotation (bottom). Differences that were determined to be significant are denoted by *.
Figure 3: Bland–Altman plots associated with the residuals for flexion (top), abduction (middle), and rotation (bottom) between the motion capture system (MCS) and either the sensor fusion algorithm (IMU; left) or the tuned quaternion conversion method (TQC; right). Residuals were plotted against the measured values of the MCS. Limits of agreement (LoAs) are shown in red while the line of zero difference is shown in black.
10 pages, 2359 KiB  
Communication
Two-Photon Excited Fluorescence Lifetime Imaging of Tetracycline-Labeled Retinal Calcification
by Kavita R. Hegde, Krishanu Ray, Henryk Szmacinski, Sharon Sorto, Adam C. Puche, Imre Lengyel and Richard B. Thompson
Sensors 2023, 23(14), 6626; https://doi.org/10.3390/s23146626 - 24 Jul 2023
Cited by 2 | Viewed by 1324
Abstract
Deposition of calcium-containing minerals such as hydroxyapatite and whitlockite in the subretinal pigment epithelial (sub-RPE) space of the retina is linked to the development of and progression to the end-stage of age-related macular degeneration (AMD). AMD is the most common eye disease causing blindness amongst the elderly in developed countries; early diagnosis is desirable, particularly to begin treatment where available. Calcification in the sub-RPE space is also directly linked to other diseases such as Pseudoxanthoma elasticum (PXE). We found that these mineral deposits could be imaged by fluorescence using tetracycline antibiotics as specific stains. Binding of tetracyclines to the minerals was accompanied by increases in fluorescence intensity and fluorescence lifetime. The lifetimes for tetracyclines differed substantially from the known background lifetime of the existing natural retinal fluorophores, suggesting that calcification could be visualized by lifetime imaging. However, the excitation wavelengths used to excite these lifetime changes were generally shorter than those approved for retinal imaging. Here, we show that tetracycline-stained drusen in post mortem human retinas may be imaged by fluorescence lifetime contrast using multiphoton (infrared) excitation. For this pilot study, ten eyes from six anonymous deceased donors (3 female, 3 male, mean age 83.7 years, range 79–97 years) were obtained with informed consent from the Maryland State Anatomy Board with ethical oversight and approval by the Institutional Review Board. Full article
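The lifetime values reported in the figures come from fitting time-resolved decays; the sketch below illustrates the simplest version of that step, a single-exponential tail fit over the 1.3–10 ns window mentioned in the Figure 2 caption. The synthetic data, initial guesses, and omission of instrument-response reconvolution are simplifications for illustration, not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, a, tau, bg):
    """Single-component fluorescence decay: a*exp(-t/tau) plus a constant background."""
    return a * np.exp(-t / tau) + bg

# Synthetic decay with a 3.7 ns lifetime, fitted over a 1.3-10 ns tail window.
t = np.linspace(0, 12.5, 256)                       # time axis in ns
truth = single_exp(t, 1000.0, 3.7, 5.0)
counts = np.random.poisson(truth).astype(float)     # photon-counting (Poisson) noise
win = (t >= 1.3) & (t <= 10.0)
popt, _ = curve_fit(single_exp, t[win], counts[win], p0=(counts.max(), 2.0, 1.0))
print(f"fitted lifetime = {popt[1]:.2f} ns")
```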
Show Figures

Figure 1: Two-photon-excited fluorescence intensity micrograph (left panel) with color-coded intensities (arbitrary units) on the right side of the panel, and fluorescence lifetime micrograph of the same field (right panel) fit to a single component with time indicated by false colors with scale in nanoseconds on the right. Experiments were carried out on an infusion-stained, flat-mounted retina, following the removal of the RPE and neurosensory retina (97-year-old female donor; cause of death: cardiovascular disease). Sample was labelled with Cl-Tet. Approximate size of the field: 250 × 250 μm²; further details can be found in the Methods.
Figure 2: Fluorescence intensity micrograph (left panel) of doxycycline-stained drusen in the retina of a 79-year-old white male donor (cause of death: chronic myelocytic anemia); the intensities are false colored according to the scale (arbitrary units) on the right of the image. The right panel shows the time-resolved fluorescence decay of the aggregated pixels in the indicated region of interest (pink); the purple dots indicate the individual time-resolved fluorescence intensity data points, the red line through the dots indicates the best fit to the data, the orange curve before two nanoseconds indicates the instrument response function, and the vertical lines at 1.3 and 10 nanoseconds indicate the beginning and end, respectively, of the data included in the fit. The solid purple line in the lower part of the right panel depicts the “residuals”: the differences between the actual measured data points and the values calculated for that time point by the best fit parameters.
Figure 3: Close-ups of the druse in Figure 2 at higher (60×) magnification (upper left panel: fluorescence intensity in false color with scale on right side in arbitrary units; lower left panel: single component fluorescence lifetime in false color with scale on the right in nanoseconds) and best two component fits to the pink region of interest at X = −33 μm, Y = −17 μm in the upper left panel, with both lifetimes floating (upper right panel), and with one lifetime fixed to 3.7 ns (lower right panel). The upper and lower right panels depict the decays using the same conventions as described in the Figure 2 legend.
16 pages, 2569 KiB  
Article
An L-Shaped Three-Level and Single Common Element Sparse Sensor Array for 2-D DOA Estimation
by Bo Du, Weijia Cui, Bin Ba, Haiyun Xu and Wubin Gao
Sensors 2023, 23(14), 6625; https://doi.org/10.3390/s23146625 - 23 Jul 2023
Cited by 3 | Viewed by 1141
Abstract
The degree of freedom (DOF) is an important performance metric for evaluating the design of a sparse array structure. Designing novel sparse arrays with higher degrees of freedom, while ensuring that the array structure can be mathematically represented, is a crucial research direction in the field of direction of arrival (DOA) estimation. In this paper, we propose a novel L-shaped sparse sensor array by adjusting the physical placement of the sensors in the sparse array. The proposed L-shaped sparse array consists of two sets of three-level and single-element sparse arrays (TSESAs), which estimate the azimuth and elevation angles, respectively, through one-dimensional (1-D) spatial spectrum search. Each TSESA is composed of a uniform linear subarray and two sparse subarrays, with one single common element in the two sparse subarrays. Compared to existing L-shaped sparse arrays, the proposed array achieves higher degrees of freedom, up to 4Q₁Q₂ + 8Q₁ − 5, when estimating DOA using the received signal covariance. To facilitate the correct matching of azimuth and elevation angles, the cross-covariance between the two TSESA arrays is utilized for estimation. By comparing and analyzing performance parameters with commonly used L-shaped and other sparse arrays, it is found that the proposed L-shaped TSESA has higher degrees of freedom and array aperture, leading to improved two-dimensional (2-D) DOA estimation results. Finally, simulation experiments validate the excellent performance of the L-shaped TSESA in 2-D DOA estimation. Full article
(This article belongs to the Section Intelligent Sensors)
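To illustrate what a 1-D spatial spectrum search looks like in practice, the sketch below implements plain MUSIC on an ordinary uniform linear array. It is only a generic baseline, not the proposed TSESA geometry or its covariance and cross-covariance processing; the element spacing, search grid, and source scenario are illustrative assumptions.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
    """1-D MUSIC pseudo-spectrum for a uniform linear array.

    X: (n_sensors, n_snapshots) complex baseband snapshots.
    d: element spacing in wavelengths. Returns the spectrum over `grid` (degrees).
    """
    n = X.shape[0]
    Rxx = X @ X.conj().T / X.shape[1]                        # sample covariance
    _, vecs = np.linalg.eigh(Rxx)                            # eigenvalues ascending
    En = vecs[:, : n - n_sources]                            # noise subspace
    k = np.arange(n)[:, None]
    A = np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(grid)))   # steering matrix
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

# Example: two sources at -20 and 35 degrees on an 8-element half-wavelength ULA.
rng = np.random.default_rng(0)
n, snaps, doas = 8, 500, np.deg2rad([-20.0, 35.0])
A = np.exp(-2j * np.pi * 0.5 * np.arange(n)[:, None] * np.sin(doas))
S = rng.normal(size=(2, snaps)) + 1j * rng.normal(size=(2, snaps))
X = A @ S + 0.1 * (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps)))
grid = np.linspace(-90, 90, 361)
p = music_spectrum(X, n_sources=2, grid=grid)
print(grid[np.argmax(p)])   # the spectrum peaks near the two true angles
```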
Show Figures

Figure 1: The geometry of TSESA.
Figure 2: The geometry of L-shaped TSESA.
Figure 3: The spatial spectrum of azimuth and elevation angles.
Figure 4: The spatial spectrum of azimuth angles for multi-signal estimation or underdetermined estimation.
Figure 5: RMSE curves of DOA estimation versus SNR.
Figure 6: RMSE curves of DOA estimation versus snapshots.
15 pages, 287 KiB  
Article
Identifying Current Feelings of Mild and Moderate to High Depression in Young, Healthy Individuals Using Gait and Balance: An Exploratory Study
by Ali Boolani, Allison H. Gruber, Ahmed Ali Torad and Andreas Stamatis
Sensors 2023, 23(14), 6624; https://doi.org/10.3390/s23146624 - 23 Jul 2023
Cited by 3 | Viewed by 2125
Abstract
Depressive mood states in healthy populations are prevalent but often under-reported. Biases exist in self-reporting of depression in otherwise healthy individuals. Gait and balance control can serve as objective markers for identifying those individuals, particularly in real-world settings. We utilized inertial measurement units (IMU) to measure gait and balance control. An exploratory, cross-sectional design was used to compare individuals who reported feeling depressed at the moment (n = 49) with those who did not (n = 84). The Quality Assessment Tool for Observational Cohort and Cross-sectional Studies was employed to ensure internal validity. We recruited 133 participants aged between 18–36 years from the university community. Various instruments were used to evaluate participants’ present depressive symptoms, sleep, gait, and balance. Gait and balance variables were used to detect depression, and participants were categorized into three groups: not depressed, mild depression, and moderate–high depression. Participant characteristics were analyzed using ANOVA and Kruskal–Wallis tests, and no significant differences were found in age, height, weight, BMI, and prior night’s sleep between the three groups. Classification models were utilized for depression detection. The most accurate model incorporated both gait and balance variables, yielding an accuracy rate of 84.91% for identifying individuals with moderate–high depression compared to non-depressed individuals. Full article
(This article belongs to the Section Wearables)
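As a rough illustration of the classification step described above, the sketch below cross-validates a standard classifier on a feature matrix of gait and balance variables. The random placeholder data, feature count, and choice of a random-forest pipeline are assumptions for illustration only; the paper's actual features and models are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: per-participant gait/balance features from the IMUs; y: 0 = not depressed,
# 1 = moderate-high depression. Both are random placeholders standing in for shape only.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))
y = rng.integers(0, 2, size=120)

clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=200, random_state=0))
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2%}")
```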
24 pages, 9476 KiB  
Review
Research Progress of Vertical Channel Thin Film Transistor Device
by Benxiao Sun, Huixue Huang, Pan Wen, Meng Xu, Cong Peng, Longlong Chen, Xifeng Li and Jianhua Zhang
Sensors 2023, 23(14), 6623; https://doi.org/10.3390/s23146623 - 23 Jul 2023
Cited by 2 | Viewed by 4801
Abstract
Thin film transistors (TFTs), as the core devices for displays, are widely used in fields including ultra-high-resolution displays, flexible displays, wearable electronic skins, and memory devices, and especially in sensors. TFTs have now started to move towards miniaturization. Like MOSFETs, traditional planar-structure TFTs have difficulty reducing the channel length below 1 μm with existing photolithography technology. Vertical channel thin film transistors (V-TFTs) have been proposed as an effective solution to overcome the miniaturization limit of traditional planar TFTs, so we summarize the different aspects of V-TFTs. Firstly, this paper introduces the structure types, key parameters, and the impact of different preparation methods on V-TFT devices. Secondly, an overview of recent research progress on V-TFT active layer materials, the characteristics of V-TFTs, and application examples demonstrates the enormous potential of V-TFTs in sensing. Finally, in addition to the advantages of V-TFTs, the current technical challenges and their potential solutions are put forward, and the future development trend of this new device structure is outlined. Full article
Show Figures

Figure 1: Logical framework for V-TFT introduction.
Figure 2: (a) Schematic diagram of mesa-shaped vertical structure [26]. (b) “Double gate” vertical structure sharing S and D electrodes [27]. (c) “Active-cut” vertical structure [28]. (d) Schematic diagram of trench vertical structure [29].
Figure 3: (a) The vertical structure with the gate electrode as a spacer layer [30]. (b) The vertical structure in which the active layer itself serves as the spacer layer [32].
Figure 4: FIB-SEM images using (a) PECVD-grown SiO2 and (b) spin-coated PI spacers [28].
Figure 5: (a) Cross-sectional view of flexible V-TFT. (b) Layout designed for V-TFT preparation. (c) Schematic diagram after delamination from the glass substrate. (d) FIB-SEM image of the device [34].
Figure 6: V-TFT transfer characteristics (Ids−Vgs) prepared by different deposition methods: (a) ALD deposition; (b) sputtering deposition; (c) transfer characteristic changes of 6 devices deposited by ALD; (d) transfer characteristic changes of 6 devices deposited by sputtering [36].
Figure 7: (a) Schematic diagram of the back channel region in V-TFT. (b) Schematic diagram of protection treatment for the back channel region [26].
Figure 8: (a) Device type A transfer curves at different voltages. (b) Device type B transfer curves at different voltages. (c) Device type C transfer curves at different voltages. (d) Ratio of Ion at Vds = 1 V to Ion at Vds = 0.1 V for three devices [54].
Figure 9: (a) TEM image of V-TFT active layer with composite crystal ITO−ZnO. (b) Transfer characteristics of V-TFT devices with composite crystal ITO−ZnO active layers at different voltages [56].
Figure 10: Cross-section comparison of three different TFT structures: (a) V-TFT structure cross-section diagram; (b) back channel etching TFT structure cross-section diagram; (c) self-aligned TFT structure cross-section diagram [53].
Figure 11: V-TFT applied to: (a) schematic diagram of testing under bending conditions [58]; (b) PET flexible substrate; (c) prepared inverter; (d) diagram of the relationship between the input and output of the prepared inverter and its gain; (e) prepared NOR logic gate; (f) four typical input–output state diagrams of the prepared NOR logic gate; (g) schematic diagram of the prepared NAND logic gate; (h) four typical input–output state diagrams of the prepared NAND logic gate [32].
Figure 12: (a) Ids−Vgs characteristics and (b) variations in MW width for the fabricated V-TFT charge-trap memory using sputtered and ALD-grown IGZO active channel layers. (c) Variations in the on- and off-programmed Ids of the fabricated V-TFT charge-trap memory. (d) Variations in the on- and off-programmed Ids with a lapse of memory retention time for 10^4 s at RT [37].
Figure 13: Application of TFTs in flexible electronics sensing [66].
Figure 14: Schematic illustration of an a-IGZO coplanar dual-gate TFT transducer and SnO2 EG sensing units. The dotted line represents the electrical connection between the two units [79].
Figure 15: (a) Schematic illustration of TFTs on plastic, with the electrodes and various layers labelled. (b) SEM image of an array of nanowire sensors based on TFT. Each device (horizontal strip) is contacted by two Ti electrodes (oriented vertically). Inset: digital photograph of the flexible sensor chip. (c) Electrical response of a nanowire sensor to 20 p.p.m. (red curve), 2 p.p.m. (blue curve), 200 p.p.b. (green curve), and 20 p.p.b. (black curve) NO2 diluted in N2. The gas is introduced to the sensing chamber after 1 min of flowing N2. Inset: an extended response of the sensor to 20 p.p.b. NO2; the gas is introduced after 20 min of flowing N2 [80].
Figure 16: (a) Schematic of the TFT integrated in the PDMS chemical sensor. (b) Schematic view of the IGZO TFT-based sensor. (c) Photograph of the PDMS chemical sensor flow system. (d) SEM image of the Ag NW mesh top gate electrode on the IGZO TFT-based sensor [81].
Figure 17: Schematic diagram of the working principle of a chemical sensor for the detection of different gases prepared by TFT integration. The desired gas mixture is prepared by four mass flow controllers (MFC 1–4), each connected to a different gas species [82].
Figure 18: Gas detection system for chemical sensors and P3HT TFTs [83].
Figure 19: (a) Schematic of a V-TFT nanopore device and translocation experiment setup that concurrently measures both ionic and V-TFT signals. (b) Close-up schematic of the V-TFT nanopore device [84].
15 pages, 15368 KiB  
Article
Frequency-Domain Reverse-Time Migration with Analytic Green’s Function for the Seismic Imaging of Shallow Water Column Structures in the Arctic Ocean
by Seung-Goo Kang and U Geun Jang
Sensors 2023, 23(14), 6622; https://doi.org/10.3390/s23146622 - 23 Jul 2023
Viewed by 1270
Abstract
Seismic oceanography can provide a two- or three-dimensional view of the water column thermocline structure at high vertical and horizontal resolution from a multi-channel seismic dataset. Several seismic imaging methods and techniques for seismic oceanography have been presented in previous research. In this study, we propose a new formulation of the frequency-domain reverse-time migration method for seismic oceanography based on the analytic Green’s function. To image thermocline structures in the water column from the seismic data, our proposed reverse-time migration method uses the analytic Green’s function to numerically calculate the forward- and backward-modeled wavefields instead of the wave propagation modeling used in the conventional algorithm. Unlike conventional reverse-time migration, frequency-domain reverse-time migration with the analytic Green’s function does not require significant computational memory or resources, or a multifrontal direct solver, to calculate the migrated seismic images. The analytic Green’s function in our reverse-time method makes it possible to produce a high-resolution seismic water column image with a meter-scale grid size, consisting of full-band frequency components, at a modest cost and in a low-memory computing environment. Our method was applied to multi-channel seismic data acquired in the Arctic Ocean and successfully constructed water column seismic images containing the oceanographic reflections caused by thermocline structures of the water mass. From the numerical test, we note that the oceanographic reflections in the migrated seismic images captured the distribution of Arctic waters at shallow depths and showed good correspondence with the anomalies of measured temperatures and the reflection coefficients calculated from each XCTD profile. The field data application verified the applicability and imaging accuracy of the proposed method. Full article
(This article belongs to the Special Issue Advanced Sensor Applications in Marine Objects Recognition)
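For orientation, the sketch below writes out the two ingredients the abstract refers to: an analytic frequency-domain Green's function for a homogeneous 2-D acoustic medium and a zero-lag cross-correlation imaging condition evaluated at a single frequency. The constant water velocity, 2-D geometry, and single-frequency loop are simplifying assumptions for illustration, not the authors' formulation.

```python
import numpy as np
from scipy.special import hankel1

def green2d(omega, rx, rz, sx, sz, c=1435.0):
    """Analytic 2-D frequency-domain Green's function for a homogeneous medium:
    G = (i/4) * H0^(1)(omega * r / c). The constant sound speed c (m/s) is an
    assumption; the paper works with measured water velocities near 1420-1440 m/s.
    """
    r = np.hypot(rx - sx, rz - sz)
    return 0.25j * hankel1(0, omega * np.maximum(r, 1e-6) / c)

def rtm_image(omega, src, recs, data, grid_x, grid_z):
    """Zero-lag cross-correlation imaging condition at one angular frequency:
    image(x) = Re[ conj(forward source wavefield) * back-propagated receiver data ].
    recs is a list of (x, z) receiver positions; data holds the complex spectral
    sample recorded at each receiver for this frequency.
    """
    X, Z = np.meshgrid(grid_x, grid_z)
    fwd = green2d(omega, X, Z, *src)                      # forward-modeled wavefield
    bwd = np.zeros_like(fwd, dtype=complex)
    for (rx, rz), d in zip(recs, data):                   # back-propagate the data
        bwd += np.conj(green2d(omega, X, Z, rx, rz)) * d
    return np.real(np.conj(fwd) * bwd)
```

In a full migration, this single-frequency image would be summed over all frequencies of the recorded band and over all shots.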
Show Figures

Figure 1: Map of the multichannel seismic tracks and the XCTD measurement stations during the ARA08C expedition on the Canadian Beaufort shelf.
Figure 2: Calculated travel time difference by wave propagation of the water column for sound velocity variation between field-measured and constant values (1420, 1425, 1430, 1435, and 1440 m/s).
Figure 3: (a) The temperature–salinity profile and oceanographic seismic section for the BF05 line with the reflection coefficient at the XCTD02 location; (b) the temperature–salinity profile and oceanographic seismic section for the BF05 line with the reflection coefficient at the XCTD03 location; (c) the temperature–salinity profile and oceanographic seismic section for the BF05 line with the reflection coefficient at the XCTD04 location.
Figure 4: (a) The temperature–salinity profile and oceanographic seismic section for the BF06 line with the reflection coefficient at the XCTD05 location; (b) the temperature–salinity profile and oceanographic seismic section for the BF06 line with the reflection coefficient at the XCTD06 location.
Figure 5: (a) The temperature–salinity profiles of XCTD07–11; (b) the oceanographic seismic sections of the BF09 line for each XCTD station (XCTD07–11) with estimated reflection coefficients; (c) the zoomed seismic oceanographic image of BF09 for the shallow water depth (under 300 m) with the reflection coefficient profiles for each XCTD station.
Figure 6: The seismic oceanographic image of BF10 for shallow water depths (under 300 m) with the reflection coefficient profiles for each XCTD station.
Figure 7: The seismic oceanographic image of BF11 for shallow water depths (under 300 m) with the reflection coefficient profile for the XCTD20 station.
Figure 8: The seismic oceanographic image of BF12 for shallow water depths (under 300 m) with the reflection coefficient profiles for stations XCTD22 and 23.
16 pages, 1707 KiB  
Article
A Size, Weight, Power, and Cost-Efficient 32-Channel Time to Digital Converter Using a Novel Wave Union Method
by Saleh M. Alshahry, Awwad H. Alshehry, Abdullah K. Alhazmi and Vamsy P. Chodavarapu
Sensors 2023, 23(14), 6621; https://doi.org/10.3390/s23146621 - 23 Jul 2023
Cited by 3 | Viewed by 1943
Abstract
We present a Tapped Delay Line (TDL)-based Time to Digital Converter (TDC) using a Wave Union type A (WU-A) architecture for applications that require high-precision time interval measurements with low size, weight, power, and cost (SWaP-C) requirements. The proposed TDC is implemented on a low-cost Field-Programmable Gate Array (FPGA), Artix-7, from Xilinx. Compared to prior works, our high-precision multi-channel TDC has the lowest SWaP-C requirements. We demonstrate an average time precision of less than 3 ps and a Root Mean Square resolution of about 1.81 ps. We propose a novel Wave Union type A architecture in which only the first multiplexer is used to generate the wave union pulse train at the arrival of the start signal, minimizing the required computational processing. In addition, an auto-calibration algorithm is proposed to improve the TDC performance by reducing the Differential Non-Linearity and Integral Non-Linearity. Full article
(This article belongs to the Special Issue Algorithms, Systems and Applications of Smart Sensor Networks)
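As background on how such an auto-calibration can work, the sketch below shows a generic code-density calibration: per-bin hit counts collected under a uniformly distributed input are converted into bin widths, DNL, INL, and a cumulative calibration table. The bin count and clock period in the example are hypothetical, and the paper's algorithm may differ in detail.

```python
import numpy as np

def code_density_calibration(hits, clock_period_ps):
    """Convert per-bin hit counts from a code-density test into bin widths (ps),
    DNL and INL (in LSB), and cumulative bin edges usable as a calibration LUT.

    hits: histogram of hits per delay-line bin under a uniformly distributed input.
    clock_period_ps: coarse clock period spanned by the delay line, in picoseconds.
    """
    hits = np.asarray(hits, dtype=float)
    widths_ps = hits / hits.sum() * clock_period_ps           # measured bin widths
    lsb = clock_period_ps / hits.size                         # ideal (average) bin width
    dnl = widths_ps / lsb - 1.0
    inl = np.cumsum(dnl)
    edges_ps = np.concatenate(([0.0], np.cumsum(widths_ps)))  # calibration lookup table
    return widths_ps, dnl, inl, edges_ps

# Example with hypothetical numbers: 1000 bins covering a 2.86 ns clock period.
rng = np.random.default_rng(1)
hits = rng.poisson(500, size=1000)
w, dnl, inl, edges = code_density_calibration(hits, clock_period_ps=2860.0)
print(f"mean bin width = {w.mean():.2f} ps, peak |DNL| = {np.abs(dnl).max():.2f} LSB")
```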
Show Figures

Figure 1: Comparison of proposed and standard WU-A architecture. (a) Standard WU-A architectures (i, ii, iii). (b) Proposed WU-A architecture (iv).
Figure 2: (i) Schematic diagram of the complete TDC structure. (ii) Proposed WU-A architecture. (iii) Diagram of the CARRY4 block of the Xilinx Artix-7 FPGA.
Figure 3: Implementation layouts of the WU-A TDL TDC. (i) Overview. (ii) Clock regions (X1Y0, X1Y1) containing TDC CH00. (iii) A single CLB for WU-A TDL TDC implementation.
Figure 4: Automatic calibration functional block.
Figure 5: Encoder process. (a) Block diagrams. (b) Non-thermometer code to one-out-of-N code converter (NTON). (c) One-out-of-N code to binary code converter (ONBC).
Figure 6: (i) Xilinx Arty-7 board. (ii) Floor plan of the 32-channel WU-A TDC. (iii) Single-channel WU-A TDC implementation clock regions (X1Y1, X1Y0).
Figure 7: Algorithmic flowcharts for (a) the averaging process, (b) finding the active bins.
Figure 8: Number of hits versus bin number.
Figure 9: Time interpolation linearity.
Figure 10: Delay time versus bin number; the red horizontal line represents the average precision value.
Figure 11: Bin width distribution.
Figure 12: Code density and linearities (DNL and INL) for the 32-channel WU type A TDC: (a) bin sizes tested at START bins; (b) bin sizes tested at 3000 bins.
Figure 13: (a) Power analysis on-chip. (b) With the on-chip temperature around 66.5 °C.
12 pages, 1804 KiB  
Article
Changes in Heart Rate, Heart Rate Variability, Breathing Rate, and Skin Temperature throughout Pregnancy and the Impact of Emotions—A Longitudinal Evaluation Using a Sensor Bracelet
by Verena Bossung, Adrian Singer, Tiara Ratz, Martina Rothenbühler, Brigitte Leeners and Nina Kimmich
Sensors 2023, 23(14), 6620; https://doi.org/10.3390/s23146620 - 23 Jul 2023
Cited by 3 | Viewed by 3872
Abstract
(1) Background: Basic vital signs change during normal pregnancy as they reflect the adaptation of maternal physiology. Electronic wearables like fitness bracelets have the potential to provide vital signs continuously in the home environment of pregnant women. (2) Methods: We performed a prospective observational study from November 2019 to November 2020 including healthy pregnant women, who recorded their wrist skin temperature, heart rate, heart rate variability, and breathing rate using an electronic wearable. In addition, eight emotions were assessed weekly using five-point Likert scales. Descriptive statistics and a multivariate model were applied to correlate the physiological parameters with maternal emotions. (3) Results: We analyzed data from 23 women who used the electronic wearable during pregnancy. We calculated standard curves for each physiological parameter, which partially differed from those in the literature. We showed a significant association of several emotions, such as feeling stressed, tired, or happy, with the course of the physiological parameters. (4) Conclusions: Our data indicate that electronic wearables are helpful for closely observing vital signs in pregnancy and for establishing modern curves for the physiological course of these parameters. In addition to physiological adaptation mechanisms and pregnancy disorders, emotions have the potential to influence the course of physiological parameters in pregnancy. Full article
(This article belongs to the Section Wearables)
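As a minimal illustration of how weekly reference curves like those in Figure 2 can be derived from wearable data, the sketch below aggregates a long-format table of nightly measurements into per-gestational-week means and smooths them with a rolling average. The column names, synthetic data, and the choice of a rolling mean (rather than the smoothing used in the paper) are assumptions made only for illustration.

```python
import numpy as np
import pandas as pd

# Long-format vitals: one row per woman per night (placeholder columns and values;
# the study's raw export format is not public).
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "gestational_week": rng.integers(5, 41, size=2000),
    "heart_rate": rng.normal(75, 6, size=2000),
})

weekly = (df.groupby("gestational_week")["heart_rate"]
            .agg(["mean", "std"])
            .sort_index())
# Smoothed reference curve: centred 5-week rolling average of the weekly means.
weekly["smoothed"] = weekly["mean"].rolling(window=5, center=True, min_periods=1).mean()
print(weekly.head())
```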
Show Figures

Figure 1: Flowchart of inclusion.
Figure 2: Mean weekly measurements of physiological parameters throughout pregnancy for n = 23 women. The x-axis depicts the gestational weeks, and the y-axis depicts the mean weekly measurements; error bars show the standard deviation. The red curve is a smoothed mean curve, and the orange area depicts the confidence interval of the smoothed mean curve.
Figure 3: Results from multivariate linear mixed models on the effect of subjective emotions on the trajectory of four physiological parameters over the course of pregnancy. Significant values are printed in bold. * denotes the interaction between the preceding and the subsequent variable.