
Sensors, Volume 23, Issue 15 (August-1 2023) – 364 articles

Cover Story (view full-size image): The versatility of corrole complexes has raised interest in the use of these molecules as elements of chemical sensors. However, their poor conductivity limits the development of simple conductometric sensors and requires the use of optical or mass transducers, which are more cumbersome and less amenable to integration into microelectronic systems. Here, we introduce two heterostructure sensors combining lutetium bisphthalocyanine with either pentafluorophenyl- or methoxyphenyl-substituted copper corrole complexes. The difference in electronic effects induces opposite responses to ammonia, with n-type or p-type behavior. Both devices detect ammonia down to 10 ppm at room temperature, with a high but reversible sensitivity to relative humidity. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF forms; the PDF is the official format. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
20 pages, 45778 KiB  
Article
Unveiling the Dynamics of Thermal Characteristics Related to LULC Changes via ANN
by Yasir Hassan Khachoo, Matteo Cutugno, Umberto Robustelli and Giovanni Pugliano
Sensors 2023, 23(15), 7013; https://doi.org/10.3390/s23157013 - 7 Aug 2023
Cited by 6 | Viewed by 1708
Abstract
Continuous and unplanned urbanization, combined with negative alterations in land use land cover (LULC), leads to a deterioration of the urban thermal environment and results in various adverse ecological effects. The changes in LULC and thermal characteristics have significant implications for the economy, climate patterns, and environmental sustainability. This study focuses on the Province of Naples in Italy, examining LULC changes and the Urban Thermal Field Variance Index (UTFVI) from 1990 to 2022, predicting their distributions for 2030. The main objectives of this research are the investigation of the future seasonal thermal characteristics of the study area by characterizing land surface temperature (LST) through the UTFVI and analyzing LULC dynamics along with their correlation. To achieve this, Landsat 4-5 Thematic Mapper (TM) and Landsat 9 Operational Land Imager (OLI) imagery were utilized. LULC classification was performed using a supervised satellite image classification system, and the predictions were carried out using the cellular automata-artificial neural network (CA-ANN) algorithm. LST was calculated using the radiative transfer equation (RTE), and the same CA-ANN algorithm was employed to predict UTFVI for 2030. To investigate the multi-temporal correlation between LULC and UTFVI, a cross-tabulation technique was employed. The study’s findings indicate that between 2022 and 2030, there will be a 9.4% increase in built-up and bare-land areas at the expense of the vegetation class. The strongest UTFVI zone during summer is predicted to remain stable from 2022 to 2030, while winter UTFVI shows substantial fluctuations with a 4.62% decrease in the none UTFVI zone and a corresponding increase in the strongest UTFVI zone for the same period. 
The results of this study reveal a concerning trend of outward expansion in the built-up area of the Province of Naples, with central northern regions experiencing the highest growth rate, predominantly at the expense of vegetation cover. These predictions emphasize the urgent need for proactive measures to preserve and protect the diminishing vegetation cover, maintaining ecological balance, combating the urban heat island effect, and safeguarding biodiversity in the province. Full article
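As background, the UTFVI used in this study is commonly defined as the deviation of each pixel's land surface temperature from the scene mean, normalized by that mean. A minimal sketch of that computation (the zone thresholds below follow the commonly cited scheme and are not necessarily the exact bins used by the authors):

```python
import numpy as np

def utfvi(lst):
    """Urban Thermal Field Variance Index per pixel: (Ts - T_mean) / T_mean."""
    t_mean = lst.mean()
    return (lst - t_mean) / t_mean

def classify_zones(u):
    """Map UTFVI values to zone labels. Thresholds are the commonly cited
    ones, collapsed to the labels (none/strong/stronger/strongest) that
    appear in the paper's maps -- illustrative, not the authors' exact bins."""
    zones = np.full(u.shape, "none", dtype=object)
    zones[(u > 0.010) & (u <= 0.015)] = "strong"
    zones[(u > 0.015) & (u <= 0.020)] = "stronger"
    zones[u > 0.020] = "strongest"
    return zones

# Toy 2x2 LST grid in kelvin (illustrative values)
lst = np.array([[300.0, 310.0], [295.0, 305.0]])
u = utfvi(lst)
```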
Show Figures

Figure 1. Location of the ROI: the left panel shows the Italian peninsula with provincial boundaries, whereas the right panel is a zoomed-in view of the Province of Naples reporting its DEM.
Figure 2. Flow chart of the methodology. The violet background box encloses the pre-processing steps. The left leg, contained in a light green background box, depicts the LULC processing; the right leg, contained in a light blue background box, shows the LST processing. The pink background box indicates the cross-tabulation stage and, lastly, the yellow background box shows the prediction stage. The inner boxes colored grey are performed with ArcGIS Pro software, the one colored blue is carried out in QGIS, and the one in orange is achieved with TerrSet software.
Figure 3. LULC scenario of the ROI. The top-left and top-right panels refer to 1990 and 2000, respectively; the bottom-left and bottom-right panels to 2010 and 2022, respectively. Green refers to the vegetation class, red to the built-up class, beige to bare-land, and blue to the water class.
Figure 4. Changes in LULC classes from 1990 to 2022. The x-axis represents the four years analyzed (1990, 2000, 2010, and 2022), whereas the y-axis shows the corresponding cumulative distribution of LULC classes, in percentage. Green refers to the vegetation class, red to the built-up class, yellow to the bare-land class, and blue to the water class.
Figure 5. Summer spatial distribution of UTFVI zones for 1990 (top-left panel), 2000 (top-right panel), 2010 (bottom-left panel), and 2022 (bottom-right panel). Dark blue refers to none UTFVI areas, pink to strong UTFVI areas, orange to stronger UTFVI areas, and dark red to strongest UTFVI areas.
Figure 6. Winter spatial distribution of UTFVI zones for 1990 (top-left panel), 2000 (top-right panel), 2010 (bottom-left panel), and 2022 (bottom-right panel). Dark blue refers to none UTFVI areas, while dark red refers to strongest UTFVI areas.
Figure 7. Cumulative distribution of UTFVI over LULC classes: the top panel refers to summer seasons, the bottom panel to winter seasons. The x-axis indicates the LULC categories considered (bare-land, built-up, and vegetation) for each of the years considered (1990, 2000, 2010, and 2022); the y-axis represents the corresponding cumulative distribution of temperature zones, in percentage. Dark blue refers to the none UTFVI zone; the strong UTFVI class is represented in magenta, while the stronger and strongest UTFVI classes are represented in red and dark red, respectively.
Figure 8. Predicted LULC scenario of the Province of Naples for the year 2030. Green refers to the vegetation class, red to the built-up class, beige to bare-land, and blue to the water class.
Figure 9. Left panel: predicted UTFVI scenario of the Province of Naples for summer 2030; right panel: for winter 2030. Blue, beige, and brown in both panels refer to none, strong, and strongest UTFVI, respectively.
21 pages, 1955 KiB  
Article
Optimized Classifier Learning for Face Recognition Performance Boost in Security and Surveillance Applications
by Jitka Poměnková and Tobiáš Malach
Sensors 2023, 23(15), 7012; https://doi.org/10.3390/s23157012 - 7 Aug 2023
Cited by 2 | Viewed by 1601
Abstract
Face recognition has become an integral part of modern security processes. This paper introduces an optimization approach for the quantile interval method (QIM), a promising classifier learning technique used in face recognition to create face templates and improve recognition accuracy. Our research offers a three-fold contribution to the field. Firstly, (i) we strengthen the evidence that QIM outperforms other contemporary template creation approaches; to this end, we investigate seven template creation methods, comprising four cluster description-based methods and three estimation-based methods. Secondly, (ii) we extend the testing: we use a nearly four times larger database than the previous study, including a new set, and report the recognition performance on this extended database, distinguishing between open- and closed-set identification. Thirdly, (iii) we evaluate the cluster estimation-based method (specifically QIM) with an in-depth analysis of its parameter setup in order to make its implementation feasible, and we provide instructions and recommendations for the correct parameter setup. Our research confirms that QIM's application in template creation improves recognition performance. With automatic application and optimization of QIM parameters, the recognition improvement is about 4–10%, depending on the dataset. For an overly general dataset, QIM also provides an improvement, but incorporating QIM into an automated algorithm is not possible, since QIM in this case requires manual setting of optimal parameters. This research contributes to the advancement of secure and accurate face recognition systems, paving the way for their adoption in various security applications. Full article
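The abstract does not spell out how a quantile interval template works; one plausible minimal reading, consistent with the q1/q2 parameters reported in the paper's figures (e.g. q1 = 0.45, q2 = 0.55), is to describe each enrolled person's feature cluster by a per-dimension quantile interval and score a probe by how far it falls outside those intervals. The following is a hypothetical sketch of that idea, not the authors' implementation:

```python
import numpy as np

def qim_template(samples, q1=0.45, q2=0.55):
    """Build a template from enrollment feature vectors (n_samples x n_dims):
    one [q1, q2] quantile interval per feature dimension."""
    lo = np.quantile(samples, q1, axis=0)
    hi = np.quantile(samples, q2, axis=0)
    return lo, hi

def qim_distance(probe, template):
    """Distance of a probe vector to a template: zero inside an interval,
    otherwise the gap to the nearest interval edge, summed over dimensions."""
    lo, hi = template
    below = np.clip(lo - probe, 0.0, None)
    above = np.clip(probe - hi, 0.0, None)
    return float(np.sum(below + above))

# Toy enrollment set: four 2-D feature vectors of one person (illustrative)
enroll = np.array([[0.0, 1.0], [0.2, 1.1], [-0.1, 0.9], [0.1, 1.0]])
tpl = qim_template(enroll)
```

Narrow intervals (q1 close to q2) give tight, discriminative templates; wide intervals give generic templates that, as Figure 10 of the paper notes, can become too similar to other faces.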
Show Figures

Figure 1. Face recognition system. Particular algorithms are highlighted in yellow.
Figure 2. Three-step algorithm of template creation and face recognition.
Figure 3. Recognition performance of cluster description-based methods on test set A1: ROC of open-set identification (left) and ROC of closed-set identification (right).
Figure 4. Recognition performance of cluster description-based methods on test set A2: ROC of open-set identification (left) and ROC of closed-set identification (right).
Figure 5. Recognition performance of cluster description-based methods on test set B1: ROC of open-set identification (left) and ROC of closed-set identification (right).
Figure 6. ROC of the quantile interval method with different quantile values. Upper figure: test set A1; middle figure: test set A2; bottom figure: test set B1.
Figure 7. Recognition performance of cluster estimation-based methods on test set A1: ROC of open-set identification (left) and ROC of closed-set identification (right).
Figure 8. Recognition performance of cluster estimation-based methods on test set A2: ROC of open-set identification (left) and ROC of closed-set identification (right).
Figure 9. Recognition performance of cluster estimation-based methods on test set B1: ROC of open-set identification (left) and ROC of closed-set identification (right).
Figure 10. Misclassifications of enrolled individuals on test set A2 for different templates. Templates that are so generic that they are too similar to other faces are marked with a red box.
Figure 11. Distances of test images to mismatching templates in test set A2. Upper figure: Centroid method; middle figure: QIM with q1 = 0.45 and q2 = 0.55; bottom figure: QIM with q1 = 0.1 and q2 = 0.8. The red line is the lowest median; the blue line is the lowest 25% quantile.
Figure 12. Recognition performance of selected methods. Upper figure: set A1; middle figure: set A2; bottom figure: set B1. Dashed lines show confidence intervals.
23 pages, 1530 KiB  
Article
Enhanced Dual Convolutional Neural Network Model Using Explainable Artificial Intelligence of Fault Prioritization for Industrial 4.0
by Sekar Kidambi Raju, Seethalakshmi Ramaswamy, Marwa M. Eid, Sathiamoorthy Gopalan, Amel Ali Alhussan, Arunkumar Sukumar and Doaa Sami Khafaga
Sensors 2023, 23(15), 7011; https://doi.org/10.3390/s23157011 - 7 Aug 2023
Cited by 1 | Viewed by 1505
Abstract
Artificial intelligence (AI) systems are increasingly used in corporate security measures to predict the status of assets and suggest appropriate procedures. These programs are also designed to reduce repair time. One way to create an efficient system is to integrate physical repair agents with a computerized management system to develop an intelligent system. To address this, a new technique is needed to assist operators in interacting with a predictive system using natural language. The system also uses dual convolutional neural network models to analyze device data. For fault prioritization, a technique utilizing fuzzy logic is presented; this strategy ranks faults based on the harm or expense they produce. However, the method's success relies on ongoing improvement in spoken language comprehension through language modification and query processing. To carry out this technique, a conversation-driven design is necessary. This type of learning relies on actual experiences with the assistants to provide efficient learning data for language and interaction models, which can then be trained to hold more natural conversations. To improve accuracy, researchers should construct and maintain publicly usable training sets to update word vectors. We propose a model combining the dataset (DS) with the Adam (AD) optimizer, Ridge Regression (RR), and Feature Mapping (FP); the proposed algorithm is given the acronym DSADRRFP. This approach aims to leverage each component's benefits to enhance the predictive model's overall performance and precision, ensuring the model is up-to-date and accurate. In conclusion, an AI system integrated with physical repair agents is a useful tool in corporate security measures. However, it needs to be refined to extract data from the operating system and to interact with users in natural language, and it must be constantly updated to improve accuracy. Full article
(This article belongs to the Section Industrial Sensors)
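The fuzzy fault-prioritization step is described only at a high level. As an illustration of the general idea (membership functions for harm and cost, rule strengths combined and defuzzified into a priority score), here is a hypothetical sketch; the membership functions and rule base are invented for illustration, not taken from the paper:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fault_priority(harm, cost):
    """Fuzzy priority in [0, 1] for a fault with harm and cost on 0-10 scales.
    Rules (illustrative): high harm OR high cost -> high priority;
    low harm AND low cost -> low priority. Defuzzified by weighted average."""
    low_h, high_h = tri(harm, -5.0, 0.0, 5.0), tri(harm, 5.0, 10.0, 15.0)
    low_c, high_c = tri(cost, -5.0, 0.0, 5.0), tri(cost, 5.0, 10.0, 15.0)
    r_high = max(high_h, high_c)   # OR as max
    r_low = min(low_h, low_c)      # AND as min
    den = r_high + r_low
    return float(r_high / den) if den > 0 else 0.5
```

Faults can then be sorted by this score so that the most harmful or expensive ones are repaired first.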
Show Figures

Figure 1. Architecture of the proposed model.
Figure 2. Dual CNN for fault diagnosis.
Figure 3. Results for accuracy (%) with data size.
Figure 4. Results for precision (%) with data size.
Figure 5. Results for recall (%) with data size.
Figure 6. Results for F1-measure (%) with data size.
26 pages, 4157 KiB  
Article
Investigation of Accuracy of TOA and SNR of Radio Pulsar Signals for Vehicles Navigation
by Hristo Kabakchiev, Vera Behar, Dorina Kabakchieva, Valentin Kisimov and Kamelia Stefanova
Sensors 2023, 23(15), 7010; https://doi.org/10.3390/s23157010 - 7 Aug 2023
Viewed by 1384
Abstract
It is known that X-ray and gamma-ray pulsars can only be observed from spacecraft, because signals from these pulsars cannot be detected on the Earth's surface due to their strong absorption by the Earth's atmosphere. The article is devoted to the theoretical aspects of developing an autonomous radio navigation system for transport with a small receiving antenna, using radio signals from pulsars, similar to pulsar navigation systems used in space. Like GNSS systems, such systems (X-ray and radio) use signals from four suitable pulsars to position the object. These radio pulsars (out of 50) are not uniformly distributed but are grouped in certain directions (at least 6 clusters can be identified). When small antennas (with an area of up to tens of square meters) are used for pulsar navigation, the energy of the pulsar signals received within a few minutes is far from sufficient to obtain the required SNR at the receiver output for a TOA estimate that ensures positioning accuracy up to tens of kilometers. This is the first scientific task solved in the paper: studying the relationship between the SNR at the receiver output, which depends on the size of the antenna and the type of signal processing, and the accuracy of the TOA estimate. The second scientific task solved in the paper is the adaptation of approaches and algorithms from the statistical theory of radar to the proposed antenna signal-processing algorithm, in order to estimate the TOA and DS parameters of pulsar signals and increase the SNR at the receiver output while preserving the dimensions of the antenna. In this paper, the functional structure of signal processing in a pulsar transport navigation system is proposed, and the choice of the observed second and millisecond pulsars for obtaining a more accurate TOA estimate is discussed. 
The proposed estimates of positioning accuracy (TOA only, no phase) in an autonomous pulsar vehicle navigation system would only be suitable for large vehicles (sea, air, or land) that do not require accurate navigation at sea, in the air, or in the desert. Large antennas with an area of tens to hundreds of square meters can be installed on such vehicles. Full article
(This article belongs to the Section Navigation and Positioning)
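The core of the processing chain the abstract describes (pulse accumulation over many periods, then a matched filter against the average pulse profile, cf. the paper's Figures 11 and 12) can be sketched on synthetic data; the pulse shape, period, and amplitudes below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pulsar stream: a weak Gaussian pulse repeated every `period`
# samples, buried in receiver noise well below single-pulse detectability.
period, n_periods, toa_true = 200, 125, 60
t = np.arange(period)
pulse = np.exp(-0.5 * ((t - toa_true) / 3.0) ** 2)
received = np.tile(pulse, n_periods) * 0.3 + rng.normal(0.0, 1.0, period * n_periods)

# Step 1: pulse accumulation (epoch folding) over 125 periods raises the
# output SNR by roughly sqrt(n_periods).
folded = received.reshape(n_periods, period).mean(axis=0)

# Step 2: matched filter against the known average pulse profile: circular
# cross-correlation via FFT; the peak lag is the TOA estimate.
profile = np.exp(-0.5 * (((t + period // 2) % period - period // 2) / 3.0) ** 2)
corr = np.fft.ifft(np.fft.fft(folded) * np.conj(np.fft.fft(profile))).real
toa_est = int(np.argmax(corr))
```

The sqrt-law of accumulation is exactly why antenna area and integration time trade off against TOA accuracy in the paper's analysis.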
Show Figures

Figure 1. Positions in galactic coordinates.
Figure 2. Ecliptic coordinates.
Figure 3. E/m emission from a pulsar.
Figure 4. A sequence of pulsar pulses.
Figure 5. Average pulse of B0329+54.
Figure 6. Average pulse of a human heart.
Figure 7. The edge of the drop after irradiation.
Figure 8. Element of the drop after irradiation.
Figure 9. The Westerbork radio observatory.
Figure 10. The Dwingeloo radio observatory.
Figure 11. Signal after pulse accumulation (125 periods).
Figure 12. Signals at the input/output of the matched filter (125 periods accumulated).
Figure 13. Functional structure of signal processing in a pulsar navigation system.
Figure 14. Signal processing in the (i,j)-th channel of the correlation estimator of Δf and τ in the case of random amplitude and phase.
17 pages, 20287 KiB  
Article
Frequency-Selective Surface-Based MIMO Antenna Array for 5G Millimeter-Wave Applications
by Iftikhar Ud Din, Mohammad Alibakhshikenari, Bal S. Virdee, Renu Karthick Rajaguru Jayanthi, Sadiq Ullah, Salahuddin Khan, Chan Hwang See, Lukasz Golunski and Slawomir Koziel
Sensors 2023, 23(15), 7009; https://doi.org/10.3390/s23157009 - 7 Aug 2023
Cited by 26 | Viewed by 2420
Abstract
In this paper, a radiating element consisting of a modified circular patch is proposed for MIMO arrays for 5G millimeter-wave applications. The radiating elements in the proposed 2 × 2 MIMO antenna array are orthogonally configured relative to each other to mitigate mutual coupling that would otherwise degrade the performance of the MIMO system. The MIMO array was fabricated on a Rogers RT/Duroid high-frequency substrate with a dielectric constant of 2.2, a thickness of 0.8 mm, and a loss tangent of 0.0009. The individual antenna in the array has a measured impedance bandwidth of 1.6 GHz from 27.25 to 28.85 GHz for S11 ≤ −10 dB, and the MIMO array has a gain of 7.2 dBi at 28 GHz with inter-radiator isolation greater than 26 dB. The gain of the MIMO array was increased by introducing a frequency-selective surface (FSS) consisting of a 7 × 7 array of unit cells, each comprising two rectangular C-shaped resonators, one embedded inside the other, with a central crisscross slotted patch. With the FSS, the gain of the MIMO array increased to 8.6 dBi at 28 GHz. The radiation from the array is directional and perpendicular to the plane of the MIMO array. Owing to the low coupling between the radiating elements in the MIMO array, its Envelope Correlation Coefficient (ECC) is less than 0.002, and its diversity gain (DG) is better than 9.99 dB in the 5G operating band centered at 28 GHz, between 26.5 GHz and 29.5 GHz. Full article
(This article belongs to the Special Issue Millimeter-Wave Antennas for 5G)
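The reported ECC and diversity-gain figures follow from the standard S-parameter formulas for a two-element MIMO pair (ECC from the S-matrix under a lossless-antenna approximation, DG = 10·sqrt(1 − ECC²) dB). A quick sketch of that computation, with illustrative S-parameter magnitudes in the range the abstract reports:

```python
import numpy as np

def ecc_from_s(s11, s12, s21, s22):
    """Envelope correlation coefficient of a two-element MIMO antenna from
    complex S-parameters (lossless-antenna approximation)."""
    num = abs(np.conj(s11) * s12 + np.conj(s21) * s22) ** 2
    den = (1 - abs(s11) ** 2 - abs(s21) ** 2) * (1 - abs(s22) ** 2 - abs(s12) ** 2)
    return num / den

def diversity_gain_db(ecc):
    """Apparent diversity gain in dB; approaches 10 dB as ECC -> 0."""
    return 10.0 * np.sqrt(1.0 - ecc ** 2)

# Illustrative values: well-matched ports (|S11| ~ -20 dB) with the
# >26 dB inter-port isolation reported for this array (|S21| ~ -26 dB).
s11 = s22 = 0.1    # -20 dB return loss
s12 = s21 = 0.05   # -26 dB coupling
ecc = ecc_from_s(s11, s12, s21, s22)
```

With coupling this low, ECC stays below 0.002 and DG sits just under the 10 dB ideal, matching the figures quoted above.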
Show Figures

Figure 1. Evolution of the proposed radiating patch antenna: (a) step 1; (b) step 2; (c) step 3; (d) step 4.
Figure 2. Front and back view of the proposed antenna.
Figure 3. Reflection coefficient response of the modified circular patch antenna.
Figure 4. The effect on the reflection coefficient response of the antenna of (a) the slot length (Lc) and (b) the slot width. Units of length and width are in millimeters.
Figure 5. Front and back view of the proposed 2 × 2 MIMO antenna array.
Figure 6. S-parameters of the proposed MIMO antenna array: (a) reflection coefficient and (b) transmission coefficient.
Figure 7. (a) Steps taken to create the proposed FSS unit cell. (b) Parameters defining the FSS unit cell and simulation excitation ports. (c) S-parameter responses of the proposed FSS unit cell.
Figure 8. (a) FSS array surface located under the MIMO antenna array, and (b) FSS reflector.
Figure 9. Gain of the FSS-based MIMO antenna array at different gaps. Units are in millimeters.
Figure 10. Reflection coefficient at the four ports of the FSS-based MIMO antenna array.
Figure 11. Transmission coefficients of the FSS-based MIMO antenna array.
Figure 12. Fabricated prototype of the FSS-based MIMO antenna array: (a) front view, (b) FSS layer, and (c) back view.
Figure 13. Measured and simulated results of the FSS-based MIMO antenna array: (a) reflection coefficient response; (b) transmission coefficient response.
Figure 14. Measured and simulated plots of (a) radiation patterns and (b) gain of the FSS-based MIMO antenna array.
Figure 15. ECC and DG of the proposed FSS-based MIMO antenna array.
19 pages, 38915 KiB  
Article
Crop Mapping Based on Sentinel-2 Images Using Semantic Segmentation Model of Attention Mechanism
by Meixiang Gao, Tingyu Lu and Lei Wang
Sensors 2023, 23(15), 7008; https://doi.org/10.3390/s23157008 - 7 Aug 2023
Cited by 2 | Viewed by 2240
Abstract
Using remote sensing images to identify crop plots and estimate crop planting area is an important part of agricultural remote sensing monitoring. High-resolution remote sensing images can provide rich information regarding the texture, tone, shape, and spectrum of ground objects. With the advancement of sensor and information technologies, it is now possible to categorize crops with pinpoint accuracy. This study frames crop mapping as a semantic segmentation problem and proposes a deep learning method to identify the distribution of corn and soybean using the differences in the spatial and spectral features of crops. The study area is located southwest of the Great Lakes in the United States, where corn and soybean cultivation is concentrated. The proposed attention mechanism deep learning model, A2SegNet, was trained and evaluated using three years of Sentinel-2 data collected between 2019 and 2021. The experimental results show that this method fully extracts the spatial and spectral characteristics of crops, and its classification performance is significantly better than that of the baseline method and of the other deep learning models tested. We cross-verified the trained model on test sets of different years through transfer learning in both the spatiotemporal and spatial dimensions; the results demonstrate the effectiveness of the attention mechanism in the process of knowledge transfer, with A2SegNet showing better adaptability. Full article
(This article belongs to the Section Remote Sensors)
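The attention module combines channel attention (pooled channel descriptors through a shared MLP) with spatial attention (channel-wise average and max maps convolved to a mask), in the spirit of CBAM as shown in the paper's Figures 7-9. A compact NumPy sketch with untrained random weights, to illustrate the mechanism rather than the A2SegNet implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Channel attention on a (C, H, W) feature map: average- and max-pooled
    channel descriptors pass through a shared two-layer MLP, then sigmoid."""
    avg = x.mean(axis=(1, 2))                       # (C,)
    mx = x.max(axis=(1, 2))                         # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # shared MLP with ReLU
    scale = sigmoid(mlp(avg) + mlp(mx))             # per-channel weights in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x, kernel):
    """Spatial attention: channel-wise average and max maps are stacked and
    convolved (a simple 3x3 same-padding convolution here) into a mask."""
    stacked = np.stack([x.mean(axis=0), x.max(axis=0)])  # (2, H, W)
    pad = np.pad(stacked, ((0, 0), (1, 1), (1, 1)))
    h, w = x.shape[1:]
    mask = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            mask[i, j] = np.sum(pad[:, i:i + 3, j:j + 3] * kernel)
    return x * sigmoid(mask)[None, :, :]

# Toy feature map and random (untrained, illustrative) weights
rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8, 8))
w1 = rng.normal(size=(2, 4)) * 0.1   # channel MLP: C=4 -> reduced r=2
w2 = rng.normal(size=(4, 2)) * 0.1   # channel MLP: r=2 -> C=4
kernel = rng.normal(size=(2, 3, 3)) * 0.1
y = spatial_attention(channel_attention(x, w1, w2), kernel)
```

Both stages only rescale the feature map by gates in (0, 1), so shapes are preserved and magnitudes never grow.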
Show Figures

Figure 1. Location of the study area. The green box represents the coverage of the three Sentinel-2 images, while the blue, yellow, and red boxes represent the research areas in 2019, 2020, and 2021, respectively. The resolution of the crop land cover data is 30 m.
Figure 2. Average spectral curves of corn and soybeans in different years after data reconstruction.
Figure 3. Composition of the multi-channel image; all bands are resampled to a resolution of 30 m.
Figure 4. The study area in 2019: (a) Sentinel-2 true color image; (b) ground truth of the CDL; (c) extracted crop type label, where the data are stretched to 0–255.
Figure 5. Construction of sample images from the extracted crop type label, where the label data are stretched to 0–255. The red line is the dividing line between the training set and the test set.
Figure 6. Strategy of ignoring edge prediction results.
Figure 7. Channel attention: the module learns features in the channel dimension; different colors indicate a single channel or a combination of channels.
Figure 8. Spatial attention: the module captures important regional features.
Figure 9. Architecture of the CBAM attention module: channel attention and spatial attention are combined through concatenation.
Figure 10. An illustration of the A2SegNet architecture; the network is composed of an encoder and a decoder.
Figure 11. Feature visualization of Sentinel-2 images using t-SNE based on different models for corn and soybean in 2019, 2020, and 2021. The axes represent the spatial coordinates of the high-dimensional features projected onto a 2D plane.
Figure 12. A randomly selected 256 × 256-pixel area, with misclassified samples from different years in the same crop planting areas; blue represents areas of misclassification.
Figure 13. Results of the generalization ability test: (a) trained on the 2020 and 2021 training sets, tested on the 2019 test set; (b) trained on the 2019 and 2021 training sets, tested on the 2020 test set; (c) trained on the 2019 and 2020 training sets, tested on the 2021 test set.
16 pages, 4234 KiB  
Article
Estimation of Reference Evapotranspiration in a Semi-Arid Region of Mexico
by Gerardo Delgado-Ramírez, Martín Alejandro Bolaños-González, Abel Quevedo-Nolasco, Adolfo López-Pérez and Juan Estrada-Ávalos
Sensors 2023, 23(15), 7007; https://doi.org/10.3390/s23157007 - 7 Aug 2023
Cited by 3 | Viewed by 1948
Abstract
Reference evapotranspiration (ET0) is the first step in calculating crop irrigation demand, and numerous methods have been proposed to estimate this parameter. FAO-56 Penman–Monteith (PM) is the only standard method for defining and calculating ET0. However, it requires radiation, air temperature, atmospheric humidity, and wind speed data, limiting its application in regions where these data are unavailable; therefore, new alternatives are required. This study compared the accuracy of ET0 calculated with the Blaney–Criddle (BC) and Hargreaves–Samani (HS) methods versus PM using information from an automated weather station (AWS) and the NASA-POWER platform (NP) for different periods. The information collected corresponds to Module XII of the Lagunera Region Irrigation District 017, a semi-arid region in the North of Mexico. The HS method underestimated the reference evapotranspiration (ET0) by 5.5% compared to the PM method considering the total ET0 of the study period (26 February to 9 August 2021) and yielded the best fit in the different evaluation periods (daily, 5-day mean, and 5-day cumulative); the latter showed the best values of inferential parameters. The information about maximum and minimum temperatures from the NP platform was suitable for estimating ET0 using the HS equation. This data source is a suitable alternative, particularly in semi-arid regions with limited climatological data from weather stations. Full article
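The Hargreaves–Samani equation favored by this study needs only air temperature extremes and extraterrestrial radiation, which is why NASA-POWER temperatures suffice where full weather-station data are missing: ET0 = 0.0023 · Ra · (Tmean + 17.8) · sqrt(Tmax − Tmin), with Ra expressed as equivalent evaporation. A minimal sketch (the input values are illustrative, not the paper's data):

```python
import numpy as np

def et0_hargreaves_samani(tmax, tmin, ra):
    """Daily reference evapotranspiration (mm/day) by Hargreaves-Samani.
    tmax, tmin: daily air temperature extremes (deg C).
    ra: extraterrestrial radiation as equivalent evaporation (mm/day),
    a function only of latitude and day of year (tabulated in FAO-56)."""
    tmean = (tmax + tmin) / 2.0
    return 0.0023 * ra * (tmean + 17.8) * np.sqrt(tmax - tmin)

# Typical semi-arid summer day (illustrative values)
et0 = et0_hargreaves_samani(tmax=36.0, tmin=20.0, ra=16.5)
```

The sqrt(Tmax − Tmin) term is a proxy for solar radiation reaching the surface, which is why the method is most reliable in clear-sky, semi-arid climates like the study area.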
Show Figures

Figure 1. Location of the automated weather station (AWS).
Figure 2. Altitude map of Module XII.
Figure 3. Reference evapotranspiration estimated with the FAO-56 Penman–Monteith method using AWS meteorological data (blue points) and rainfall recorded in the study period (red bars).
Figure 4. Bias between observed (AWS) and reference (NP) data for the meteorological variables: (a) Tmax; (b) Tmin; (c) RH; (d) SR; (e) WS.
Figure 5. Different ways to estimate ET0 using empirical equations and the reference method (PM) during the study period: (a) daily ET0; (b) 5-day mean ET0; (c) 5-day cumulative ET0.
Figure 6. Dispersion plot of the calibrated HS method (HS_NP) relative to the FAO-56 Penman–Monteith (PM) reference method for the different ET0 calculation periods: daily (a), 5-day mean (b), 5-day cumulative (c).
Figure 7. Linear relationship between ET0 estimates with the HS equation using temperature data from the AWS and the NP platform for the different ET0 calculation periods: daily (a), 5-day mean (b), 5-day cumulative (c).
16 pages, 1977 KiB  
Article
Analysis of the Correlation between Frontal Alpha Asymmetry of Electroencephalography and Short-Term Subjective Well-Being Changes
by Betty Wutzl, Kenji Leibnitz, Daichi Kominami, Yuichi Ohsita, Michiko Kaihotsu and Masayuki Murata
Sensors 2023, 23(15), 7006; https://doi.org/10.3390/s23157006 - 7 Aug 2023
Cited by 4 | Viewed by 1416
Abstract
Subjective well-being (SWB) describes how well people experience and evaluate their current condition. Previous studies with electroencephalography (EEG) have shown that SWB can be related to frontal alpha asymmetry (FAA). While those studies only considered a single SWB score for each experimental session, our goal is to investigate such a correlation for individuals whose SWB may differ every 60 or 30 s. Therefore, we conducted two experiments with 30 participants each. We used different temperature and humidity settings and asked the participants to periodically rate their SWB. We computed the FAA from EEG over different time intervals and associated it with the given SWB, leading to pairs of (FAA, SWB) values. After correcting the imbalance in the data with the Synthetic Minority Over-sampling Technique (SMOTE), we performed a linear regression and found a positive linear correlation between FAA and SWB. We also studied the best time interval sizes for determining FAA around each SWB score. We found that using an interval of 10 s before recording the SWB score yields the best results. Full article
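FAA is conventionally defined as the difference of log-transformed alpha-band powers at the right and left frontal electrodes (here AF4 and AF3). A minimal sketch, assuming the alpha-band powers have already been extracted from the spectra (the numeric values are hypothetical):

```python
import math

def frontal_alpha_asymmetry(alpha_power_af4, alpha_power_af3):
    """FAA = ln(right alpha power) - ln(left alpha power).

    Positive values indicate relatively greater right-hemisphere alpha power.
    Inputs must be strictly positive band powers (e.g. in uV^2).
    """
    return math.log(alpha_power_af4) - math.log(alpha_power_af3)

# One (FAA, SWB) pair would combine this value with the SWB score
# reported closest in time:
faa = frontal_alpha_asymmetry(alpha_power_af4=4.2, alpha_power_af3=3.1)
```

The log transform makes the measure a ratio of powers, so it is insensitive to an overall scaling of both channels.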
Show Figures

Figure 1. (a) Overview of the setup of Experiment 2 with heaters and humidifiers, and (b) the layout of the custom-made temperature-humidity sensors.
Figure 2. (a) EEG sensor layout. The frontal sensors AF3 (purple) and AF4 (orange) are the sensors used for the FAA calculation. (b) Overview of determining four pairs (FAAi, SWBi), i = 1, …, 4, from the pre-processed EEG channel data. The purple time series represents the AF3 channel and the orange time series stands for AF4. The time series shown in gray symbolize those of other unused channels. SWB scores were given by the participants every 60 s (Experiment 1), namely at 60 s, 120 s, 180 s, and so on (only the first 270 s are shown in the drawing). These time instants are indicated by the dashed vertical lines. The 60 s intervals were chosen so that the SWB scores were given in the middle of these intervals. The lower part of (b) shows the frequency spectra that result in FAA values corresponding to each SWB score.
Figure 3. Illustration of choosing time intervals for Analysis 3. The time series continues until a maximum of 300 s and SWB is recorded every 30 s (Experiment 2). A different color represents a possibly different original SWB score SWBk for k = 1, …, 10. The gray boxes show how the six 45 s intervals are aligned within the 300 s and the corresponding associated SWBintk with k = 1, …, 6.
Figure 4. Comparison of the linear regression of Experiment 2 and Analysis 1. The dots in each subplot represent the (FAA, SWB) pairs of an example participant used for calculating the linear regression, which is shown as a straight line. On the upper left-hand side, the linear regression of the original imbalanced data is given. The other three plots each show the result of linear regression after correcting for the imbalanced dataset using SMOTE. The numbers of (FAA, SWB) pairs in the original dataset are #(FAA, 3) = 3, #(FAA, 4) = 11, #(FAA, 5) = 12, and #(FAA, 6) = 3, and after SMOTE #(FAA, SWB) = 12 for SWB of 3, 4, 5, and 6. The difference in the number of (FAA, SWB) pairs is highlighted for SWB = 3; see the circled areas in all four subplots. When considering the original data, we can see that the linear regression favors values of SWB = 4 or 5. The results after SMOTE all show, while being numerically slightly different, a regression that does not favor the values in the center anymore and thus has a steeper line.
13 pages, 5834 KiB  
Article
Ionospheric Weather at Two Starlink Launches during Two-Phase Geomagnetic Storms
by Tamara Gulyaeva, Manuel Hernández-Pajares and Iwona Stanislawska
Sensors 2023, 23(15), 7005; https://doi.org/10.3390/s23157005 - 7 Aug 2023
Cited by 4 | Viewed by 2169
Abstract
The launch of a series of Starlink internet satellites on 3 February 2022 (S-36) and 7 July 2022 (S-49) coincided with the development of two-phase geomagnetic storms. The first launch, S-36, took place in the middle of a moderate two-phase space weather storm, which induced significant technological consequences. After liftoff on 3 February at 18:13 UT, all Starlink satellites reached an initial perigee altitude of 350 km and were to reach an altitude of ~550 km after the maneuver. However, 38 of the 49 launched spacecraft did not reach the planned altitude, left orbit due to increased drag, and reentered the atmosphere on 8 February. The geomagnetic storm on 3–4 February 2022 increased the density of the neutral atmosphere by up to 50%, increasing the drag on the satellites and dooming most of them. The second launch, S-49, at 13:11 UT on 7 July 2022 was successful at the peak of a two-phase geomagnetic storm. Global ionospheric maps of the total electron content (GIM-TEC) were used to produce ionospheric weather GIM-W index maps and Global Electron Content (GEC). We observed a GEC increment from 10 to 24% at the storm peak after the Starlink launch for both storms, accompanying the neutral density increase identified earlier. GIM-TEC maps are available with a lag (delay) of 1–2 days (real-time GIMs have a lag of less than 15 min), so a GIM forecast is required by the time of the launch. Comparisons of different GIM forecast techniques are provided, including the Center for Orbit Determination in Europe (CODE), Beijing (BADG and CASG) and IZMIRAN (JPRG) 1- and 2-day forecasts, and the Universitat Politecnica de Catalunya (UPC-ionSAT) forecast for 6, 12, 18, 24 and 48 h in advance. We present the results of the analysis of the evolution of the ionospheric parameters during both events. The poor correspondence between observed and predicted GIM-TEC and GEC confirms an urgent need for industry–science awareness of the now-casting, forecasting, and accessibility of GIM-TECs during space weather events. Full article
(This article belongs to the Special Issue Advances in GNSS Positioning and GNSS Remote Sensing)
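Global Electron Content is obtained by integrating a GIM-TEC map over the Earth's surface, weighting each grid cell by its area. A schematic version (grid resolution and the uniform test map are illustrative; GEC is usually quoted in units of 10^32 electrons):

```python
import math

def global_electron_content(tec_map, lat_step_deg, lon_step_deg, r_m=6.371e6):
    """Integrate a TEC map over the globe; returns total electrons.

    tec_map: list of latitude-band rows (from +90 deg down to -90 deg),
             each a list of TEC values in TECU (1 TECU = 1e16 el/m^2).
    """
    dlat = math.radians(lat_step_deg)
    dlon = math.radians(lon_step_deg)
    total = 0.0
    for i, row in enumerate(tec_map):
        # latitude at the centre of band i
        lat = math.radians(90.0 - (i + 0.5) * lat_step_deg)
        cell_area = (r_m ** 2) * math.cos(lat) * dlat * dlon  # m^2
        for tec in row:
            total += tec * 1e16 * cell_area
    return total

# Uniform 10-TECU ionosphere on a coarse 10 x 10 degree grid:
tec_map = [[10.0] * 36 for _ in range(18)]
gec = global_electron_content(tec_map, 10.0, 10.0)
```

For a uniform 10-TECU shell the integral is 10e16 times the sphere's area, about 5.1e31 electrons, i.e. roughly 0.5 in the customary 10^32 units.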
Show Figures

Figure 1. The space weather conditions from 2 to 5 February 2022 under which Starlink S-36 was launched at 18:13 UT on 3 February (thick vertical line). From top to bottom: (a) magnetic field intensity B, nT; the southward component Bz, nT; the solar wind speed Vsw, km/s; the proton density Np, cm−3; the proton temperature Tp, K; the geomagnetic SYM/H index, nT; (b) GIM-W index maps based on UQRG GIM-TEC for 00:00 and 12:00 UT during the storm.
Figure 2. The same as Figure 1a,b but from 6 to 9 July 2022, related to the Starlink S-49 launch at 13:11 UT on 7 July.
Figure 3. Global electron content (GEC) in a daily–hourly UT frame produced from the UPC UQRG: (a) February 2022; (b) July 2022. Starlink S-36 and S-49 launches (star).
Figure 4. Daily–hourly variation of GEC and geomagnetic indices for February 2022. The Starlink S-36 launch (thick vertical line). (a) JPLR-based 'true' GEC, 1- and 2-day forecasts (JPLR1/JPLR2) produced by IZMIRAN, and 'true' UQRG and UPC tomographic-kriging real-time UADG GEC products; (b) CODE 'true' GEC and CODE1/CODE2 forecasts; (c) CASG 'true' data and CASG1/CASG2 forecasts; (d) BUAG 'true' and B1PG/B2PG forecasts; (e) geomagnetic Hpo index; (f) equatorial Dst index.
Figure 5. Comparisons of 1- and 2-day forecasts with 'true' GEC profiles during the storm from 2 to 5 February 2022, including the Starlink S-36 launch (thick vertical line): GEC (upper panel) and detrended GEC (lower panel). (a1,a2) JPLR; (b1,b2) CODE; (c1,c2) BUAG; (d1,d2) CASG.
Figure 6. The same as Figure 5 but from 6 to 9 July 2022, including the Starlink S-49 launch.
Figure 7. Comparison of 'true' UQRG and real-time UADG GEC with the Nearest-Neighbor (NN) 'forecast' for 6, 12, 18, 24 and 48 h in advance. The forecast starts at the time of the Starlink launch (thick vertical line) linked to the real-time UADG data: (a1,a2) S-36; (b1,b2) S-49 launch.
Figure 8. The same as Figure 7 but with the forecast starting at 00:00 UT on the day of the Starlink launch, linked to the prestorm day of UQRG.
27 pages, 8049 KiB  
Article
Critical Examination of Distance-Gain-Size (DGS) Diagrams of Ultrasonic NDE with Sound Field Calculations
by Kanji Ono and Hang Su
Sensors 2023, 23(15), 7004; https://doi.org/10.3390/s23157004 - 7 Aug 2023
Viewed by 1360
Abstract
Ultrasonic non-destructive evaluation, which has been used widely, can detect and size critical flaws in structures. Advances in sound field calculations can further improve its effectiveness. Two calculation methods were used to characterize the relevant sound fields of an ultrasonic transducer and the results were applied to construct and evaluate Distance-Gain-Size (DGS) diagrams, which are useful in flaw sizing. Two published DGS diagrams were found to be deficient because the backward diffraction path was overly simplified and the third one included an arbitrary procedure. Newly constructed DGS diagrams exhibited transducer size dependence, revealing another deficiency in the existing DGS diagrams. However, the extent of the present calculations must be expanded to provide a catalog of DGS diagrams to cover a wide range of practical needs. Details of the new construction method are presented, incorporating two-way diffraction procedures. Full article
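The distance axis of a DGS diagram is the normalized distance Z, i.e. the physical range divided by the transducer's near-field length N = a²/λ for a circular piston of radius a. A small helper sketching this normalization (the transducer parameters below are illustrative, not taken from the paper):

```python
def near_field_length(radius_m, freq_hz, sound_speed_ms):
    """Near-field (Fresnel) length N = a^2 / lambda of a circular piston."""
    wavelength = sound_speed_ms / freq_hz
    return radius_m ** 2 / wavelength

def normalized_distance(distance_m, radius_m, freq_hz, sound_speed_ms):
    """Normalized distance Z = z / N used on the DGS distance axis."""
    return distance_m / near_field_length(radius_m, freq_hz, sound_speed_ms)

# 5 MHz transducer of 6.35 mm radius, longitudinal waves in steel (~5900 m/s):
n = near_field_length(6.35e-3, 5e6, 5900.0)   # ~34 mm
z = normalized_distance(0.1, 6.35e-3, 5e6, 5900.0)
```

Beyond Z ≈ 1 the on-axis pressure falls off monotonically, which is why DGS curves are usually read in the far field.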
Show Figures

Figure 1. Coaxial transmitter–receiver arrangements. (a) Equal-sized transmitter–receiver pair, S = 1. (b) Small receiver case, S < 1. (c) Large receiver case, S > 1.
Figure 2. (a) Integrated sound pressure, P(Z) vs. Z (red dots) from Seki et al. [13]. The diffraction loss, D, from Equation (1) is also plotted (blue curve). (b) P(Z) vs. Z (blue +) from Khimunin [18] and D (red curve). (c) P(Z) vs. Z from Yamada and Fujii [17] (solid curves in green, S = 0.5, in blue, S = 1, in red, S = 2) and from the Torikai equations (dotted curves, same color code; see text).
Figure 3. (a) Sound pressure, p(X,Z) vs. Z for various X values. X = 0: blue dash curve, 0.05: green dash, 0.1: black, 0.25: red, 0.5: green, 0.75: purple, 1.0: blue, 1.5: blue dot, 2.0: red dot, 2.5: green dot, 3.0: purple dot. (b) Lommel limit in Z vs. transmitter radius in mm.
Figure 4. Sound pressure, p(X,Z) vs. normalized radius, X, at various values of Z. Z = 0.1: blue dash, 0.5: green dash, 1.0: blue, 2.0: red, 5.0: green, 10: black, 20: black dot, 50: black dash, 100: black dash-dot.
Figure 5. Integrated sound pressure, P(X′,Z) vs. Z. X′ = 0.035: brown curve, 0.065: light blue, 0.125: purple, 0.25: green, 0.4: red dot, 0.5: red, 0.8: blue dot, 1.0: blue, 1.5: blue dash, 2: red dash, 2.5: green dash, 3.0: light blue dash. D(Z) vs. Z in blue +.
Figure 6. Integrated sound pressure, P(X′,Z) vs. radial integration limit, X′, for different Z values. Z = 0.1: orange dot, 0.3: gray dot, 1.1: red +, 3.4: blue +, 10.4: green dot, 30.9: blue dot, 103: red dot. Blue line through blue dots represents X′² or the slope = 2.
Figure 7. (a) Sound pressure, p(X,Z) vs. Z for various X values. The Zemanek model with a transmitter radius of 3.2 mm. X = 0: blue dash curve, 0.125: black (hidden), 0.25: red, 0.5: green, 0.75: purple, 1.0: light blue, 1.5: blue dot, 2.0: red dot, 2.5: green dot, 3.0: purple dot. (b) p(X,Z) vs. Z with an aT of 5.0 mm. Color code for X same as in (a). (c) p(X,Z) vs. Z with an aT of 9.5 mm. Same color code for X except X ≥ 2 not included. (d) p(X,Z) vs. Z with an aT of 19 mm. Same color code for X as in (c).
Figure 8. Comparison of the sound pressure, p(X,Z) vs. Z for X values of 0, 1, and 1.5 between the Torikai equations (solid curves, from Figure 3a) and the Zemanek model with a transmitter radius of 19 mm (dashed curves, from Figure 7d).
Figure 9. Sound pressure, p(X,Z) vs. normalized radius, X, at various values of Z for the Zemanek model. (a) aT = 6.4 mm. (b) aT = 12.7 mm. Z = 0.1: blue dash, 0.5: red dash, 1.0: blue, 2.0: red, 5.0: green, 10: black, 20: black dot.
Figure 10. Experimental measurements of sound pressure, p(X,Z) vs. normalized radius, X, with aT = 6.35 mm at four values of Z. Z = 0.94 (blue dots), 1.89 (red), 3.75 (green), and 16.10 (black). (a) Measured values compared to the Torikai calculations at Z = 1, 2, 5, 10, 20, and 50 from Figure 4. (b) Comparison to the Zemanek model at Z = 1, 2, 5, 10, and 20 with the matching aT of 6.4 mm (Figure 9a).
Figure 11. Integrated sound pressure, P(X′,Z) vs. Z using the Zemanek model. (a) aT = 3.2 mm. X′ = 0.125: purple, 0.25: green, 0.5: red, 1.0: blue, 1.5: blue dot, 2: red dot, 4: dark red dot. D(Z) vs. Z in blue +. (b) aT = 6.4 mm. As in (a), plus X′ = 0.065: light blue, 0.8: blue dash, 2.35: green dot. (c) aT = 12.7 mm. As in (b), plus X′ = 0.4: red dash. (d) aT = 19 mm. As in (b), plus X′ = 0.035: brown, 0.67: light blue dash.
Figure 12. Comparison of the integrated sound pressure, P(X′,Z) vs. Z curves from the Torikai (dash curves) and Zemanek model (aT = 19 mm, solid curves) calculations (Figures 5 and 11d). Four X′ values were used: 0.03: brown, 0.065: light blue, 0.125: purple, 0.25: green.
Figure 13. Integrated sound pressure, P(X′,Z) vs. radial integration limit, X′, for different Z values with aT = 19 mm. Z = 0.01: blue dot, 0.05: blue triangle, 0.15: black +, 0.5: black X, 1: red +, 2: orange dot, 5: green dot, 10: red dot. Blue line through green dots represents X′² or the slope = 2. Blue dotted line is for X′³ or the slope = 3.
Figure 14. Integrated sound pressure, P(X′,Z) vs. Z using the Torikai equations. (a) Forward path case. (b) Backward path case. X′ = 1: black curve, 2: black dotted, 3: blue, 4: red, 5: green, 8: purple.
Figure 15. DGS diagram using the Torikai equations. S values are 0.125 (purple curve), 0.2 (purple dotted), 0.25 (green), 0.33 (green dotted), 0.5 (red), and 1 (black). D vs. Z is plotted in the black dotted curve, taken as the backwall reflection.
Figure 16. (a) Integrated sound pressure, P(X′,ZB) vs. ZB using the Zemanek model with aT = 19 mm, converted to the backward path. X′ = 1: black curve, 1.5: blue, 2: red, 4: green, 8: light blue, 16: brown. (b) DGS diagram using the Zemanek model with an aT of 19 mm. S = 0.063: light blue curve, 0.125: purple, 0.25: green, 0.5: red, 0.67: blue, 1: black. D vs. Z in the black dotted curve. (c) The Zemanek DGS diagram with the Torikai DGS data co-plotted for S = 0.125, 0.25, and 0.5 using black, green, and red + symbols.
Figure 17. The Zemanek DGS diagrams for different transmitter sizes. (a) aT = 6.35 mm. S = 0.063: light blue curve, 0.125: purple, 0.25: green, 0.5: red, 1: black, 1.5: blue dotted, 2: red dotted. D vs. Z in the black dotted curve. (b) aT = 9.5 mm. S = 0.08: light blue curve, 0.17: purple dash, 0.33: green dash, 0.5: red, 0.66: red dash, 1: black, 1.33: blue dotted. (c) aT = 19 mm. S = 0.04: brown; the rest same as in (a). (d) aT = 25.4 mm. S = 0.05: brown, 0.1: light blue, 0.19: purple, 0.37: green, 0.5: red, 1: blue, 1.5: black dot.
Figure 18. Comparison of the Zemanek DGS diagrams for different transmitter sizes. aT = 6.35 mm: solid curves; aT = 9.5 mm: dotted; aT = 19 mm: dash; aT = 25.4 mm: dash-dotted. S = 0.063–0.08: light blue, 0.125–0.17: purple, 0.25–0.33: green, 0.5: red.
Figure 19. Comparison of the DGS diagrams. (a) Data points of the Mundry DGS curves (connected by dash curves) and the general DGS curves from ISO 16811 (solid curves). S = 0.1: brown, 0.2: light blue, 0.3: purple, 0.4: orange, 0.6: green, 0.8: red, 1: blue. Backwall: black dotted. (b) Data points of the Mundry DGS curves in triangular symbols and quasi-DGS curves based on the Torikai calculation (solid curves). Same color codes as in (a).
Figure 20. Comparison of the DGS diagrams. (a) Data points of the Kimura DGS curves (in dots for S ≤ 1 and + symbols for S > 1) with maximized G values. The corresponding DGS curves for an aT of 12.7 mm are given by the solid curves. S = 0.04: brown, 0.065/0.075: light blue, 0.125: purple, 0.25: green, 0.5: red, 1: black, 1.5: blue, 2: red dot. D vs. Z is shown by the black dots. (b) Data points of the Kimura DGS curves without G maximizing in dots; S = 0.05, 0.075, 0.1, 0.15, 0.2, 0.3, 0.5 (from low to high). Four Zemanek DGS curves for an aT of 38 mm with S = 0.065, 0.125, 0.25, and 0.5 as solid curves (brown, purple, red, and blue, respectively). (c) Kimura DGS data as in (b), but the top S of 0.4 was used. Torikai DGS curves are given for S = 0.125, 0.2, and 0.4 (purple, green, and blue, respectively).
Full article ">
15 pages, 2178 KiB  
Article
Characterizing Soil Profile Salinization in Cotton Fields Using Landsat 8 Time-Series Data in Southern Xinjiang, China
by Jiaqiang Wang, Bifeng Hu, Weiyang Liu, Defang Luo and Jie Peng
Sensors 2023, 23(15), 7003; https://doi.org/10.3390/s23157003 - 7 Aug 2023
Cited by 1 | Viewed by 1680
Abstract
Soil salinization is a major obstacle to land productivity, crop yield and crop quality in arid areas and directly affects food security. Soil profile salt data are key for accurately determining irrigation volumes. To explore the potential for using Landsat 8 time-series data to monitor soil salinization, 172 Landsat 8 images from 2013 to 2019 were obtained from the Alar Reclamation Area of Xinjiang, northwest China. The multiyear extreme dataset was synthesized from the annual maximum or minimum values of 16 vegetation indices, which were combined with the soil conductivity of 540 samples from soil profiles at 0~0.375 m, 0~0.75 m and 0~1.00 m depths in 30 cotton fields with varying degrees of salinization as investigated by EM38-MK2. Three remote sensing monitoring models for soil conductivity at different depths were constructed using the Cubist method, and digital mapping was carried out. The results showed that the Cubist model of soil profile electrical conductivity from 0 to 0.375 m, 0 to 0.75 m and 0 to 1.00 m showed high prediction accuracy, and the determination coefficients of the prediction set were 0.80, 0.74 and 0.72, respectively. Therefore, it is feasible to use a multiyear extreme value for the vegetation index combined with a Cubist modeling method to monitor soil profile salinization at a regional scale. Full article
(This article belongs to the Special Issue Environmental Remote Sensing)
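The multiyear extreme dataset described above keeps, for each pixel, the annual maximum (or minimum) of a vegetation index across the Landsat time series. A toy sketch using NDVI (band reflectances are hypothetical, and a real workflow would operate on full rasters):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectances."""
    return (nir - red) / (nir + red)

def multiyear_max_composite(series_per_pixel):
    """For each pixel, keep the maximum index value over all acquisitions.

    series_per_pixel: list of per-pixel lists of index values.
    """
    return [max(values) for values in series_per_pixel]

# Two pixels observed on three dates each:
series = [
    [ndvi(0.40, 0.10), ndvi(0.55, 0.08), ndvi(0.30, 0.12)],
    [ndvi(0.20, 0.15), ndvi(0.25, 0.10), ndvi(0.22, 0.12)],
]
composite = multiyear_max_composite(series)
```

Taking the per-pixel extreme suppresses cloud gaps and phenological timing differences, which is what makes the composite a stable predictor for the Cubist model.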
Show Figures

Figure 1. The geographical location of the study area and the distribution of survey samples.
Figure 2. Distribution of EM38-MK2 measuring points and soil sample collection points.
Figure 3. Distribution of soil salinization at different soil profile depths.
13 pages, 3653 KiB  
Article
Period Estimation of Spread Spectrum Codes Based on ResNet
by Han-Qing Gu, Xia-Xia Liu, Lu Xu, Yi-Jia Zhang and Zhe-Ming Lu
Sensors 2023, 23(15), 7002; https://doi.org/10.3390/s23157002 - 7 Aug 2023
Viewed by 1284
Abstract
In order to more effectively monitor and interfere with enemy signals, it is particularly important to accurately and efficiently identify intercepted signals and estimate their parameters in an increasingly complex electromagnetic environment. Therefore, in non-cooperative situations, it is of great practical significance to study how to accurately detect direct sequence spread spectrum (DSSS) signals in real time and estimate their parameters. The traditional time-delay correlation algorithm encounters challenges such as peak energy leakage and false peak interference. As an alternative, this paper introduces a Pseudo-Noise (PN) code period estimation method utilizing a one-dimensional (1D) convolutional neural network based on the residual network (CNN-ResNet). This method transforms the problem of spread spectrum code period estimation into a multi-classification problem of spread spectrum code length estimation. First, the in-phase/quadrature (I/Q) components of the received DSSS signals are directly input into the CNN-ResNet model, which automatically learns the characteristics of DSSS signals with different PN code lengths and then estimates the PN code length. Simulation experiments are conducted using a data set with DSSS signals ranging from −20 to 10 dB in terms of signal-to-noise ratio (SNR). Upon training and verifying the model using BPSK modulation, it is then put to the test with QPSK-modulated signals, and the estimation performance is analyzed through metrics such as the loss function, accuracy rate, recall rate, and confusion matrix. The results demonstrate that the 1D CNN-ResNet proposed in this paper is capable of effectively estimating the PN code period of non-cooperative DSSS signals, exhibiting robust generalization abilities. Full article
(This article belongs to the Special Issue Feature Papers in Communications Section 2023)
Show Figures

Figure 1

Figure 1
<p>Overall framework diagram of PN code period estimation based on ResNet.</p>
Full article ">Figure 2
<p>Structure of residual module in this paper.</p>
Full article ">Figure 3
<p>Network model framework diagram.</p>
Full article ">Figure 4
<p>Training process. (<b>a</b>) Accuracy curve, (<b>b</b>) Loss function curve.</p>
Full article ">Figure 5
<p>Recall rate versus signal-to-noise ratio curve.</p>
Full article ">Figure 6
<p>Recall rate versus signal-to-noise ratio curve.</p>
Full article ">Figure 7
<p>Confusion matrix. (<b>a</b>) Confusion matrix of BPSK-DSSS signals with different PN code lengths at −12 dB. (<b>b</b>) Confusion matrix of BPSK-DSSS signals with different PN code lengths at −14 dB. (<b>c</b>) Confusion matrix of QPSK-DSSS signals with different PN code lengths at −8 dB. (<b>d</b>) Confusion matrix of QPSK-DSSS signals with different PN code lengths at −10 dB.</p>
Full article ">Figure 8
<p>Comparison of spread spectrum code period estimation performance.</p>
Full article ">
17 pages, 10332 KiB  
Article
An RFID Tag Movement Trajectory Tracking Method Based on Multiple RF Characteristics for Electronic Vehicle Identification ITS Applications
by Ruoyu Pan, Zhao Han, Tuo Liu, Honggang Wang, Jinyue Huang and Wenfeng Wang
Sensors 2023, 23(15), 7001; https://doi.org/10.3390/s23157001 - 7 Aug 2023
Cited by 1 | Viewed by 2211
Abstract
Intelligent transportation systems (ITS) urgently need to realize vehicle identification, dynamic monitoring, and traffic flow monitoring under high-speed motion conditions. Vehicle tracking based on radio frequency identification (RFID) and electronic vehicle identification (EVI) can obtain continuous observation data over long periods with relatively high acquisition accuracy, which is conducive to the discovery of traffic patterns; such data can provide key information for urban traffic decision-making research. In this paper, an RFID tag motion trajectory tracking method based on multiple RF characteristics is proposed for ITS, to analyze the movement trajectories of vehicles at important checkpoints. The method analyzes the quantitative relationship between the RSSI, phase differences, and driving distances of the tag. It utilizes the information weight method to obtain the weights of multiple RF characteristics at different distances. It then calculates the center point of the common area in which the vehicle may move under multi-antenna conditions, confirming the actual position of the vehicle. The experimental results show that the average positioning error of moving RFID tags based on dual-frequency signal phase differences and RSSI is less than 17 cm. This method can provide real-time, high-precision vehicle positioning and trajectory tracking for ITS application scenarios such as parking guidance, unmanned vehicle route monitoring, and vehicle lane change detection. Full article
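The dual-frequency ranging behind the reported sub-17 cm accuracy rests on the round-trip backscatter relation Δφ = 4πΔf·d/c (cf. Figure 6). A minimal sketch of that single relation follows; the function name is ours, and the paper additionally fuses RSSI and multi-antenna weights, which this fragment omits:

```python
from math import pi

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase_diff(delta_phi, delta_f):
    """Antenna-tag distance (m) from the phase difference (rad) measured at
    two carrier frequencies delta_f apart (Hz), via the two-way backscatter
    relation delta_phi = 4*pi*delta_f*d / c."""
    return C * delta_phi / (4 * pi * delta_f)

# A tag 1.5 m away, observed with a 20 MHz frequency difference:
d_true = 1.5
delta_phi = 4 * pi * 20e6 * d_true / C
print(round(distance_from_phase_diff(delta_phi, 20e6), 3))  # -> 1.5
```

Note that the phase difference wraps modulo 2π, so a 20 MHz frequency difference gives an unambiguous range of c/(2Δf) ≈ 7.5 m, which is ample for a single checkpoint lane.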
Show Figures

Figure 1

Figure 1
<p>The tag return power with antenna–tag distance at a reader transmit power of 30 dBm.</p>
Full article ">Figure 2
<p>Characteristics of RSSI intensity distribution at different antenna–tag distances.</p>
Full article ">Figure 3
<p>Diagram of the backscatter radio link of the RFID communication.</p>
Full article ">Figure 4
<p>Variation characteristics of phase with antenna–tag distance.</p>
Full article ">Figure 5
<p>Measured phase distribution at different antenna–tag distances.</p>
Full article ">Figure 6
<p>Relationship between phase difference and distance at different frequency differences.</p>
Full article ">Figure 7
<p>UHF RFID vehicle trajectory tracking model.</p>
Full article ">Figure 8
<p>Distance incremental compensation chart: (<b>a</b>) is a tag that makes a uniform linear motion; (<b>b</b>) is a tag that makes a uniform curve motion.</p>
Full article ">Figure 9
<p>Measured and theoretical values of tag parameters: (<b>a</b>) represents a comparison of RSSI measured value and theoretical value; (<b>b</b>) represents phase difference variation at 20 MHz frequency difference. The classification result of the measured phase follows the measured RSSI.</p>
Full article ">Figure 10
<p>Position area of the mobile tag: (<b>b</b>) is the enlarged view of (<b>a</b>).</p>
Full article ">Figure 11
<p>Antenna placement. (<b>a</b>) represents the placement of two antennas; (<b>b</b>) represents the placement of three antennas; and (<b>c</b>) represents the placement of four antennas. Specific coordinate is shown in <a href="#sensors-23-07001-t001" class="html-table">Table 1</a>.</p>
Full article ">Figure 12
<p>Experimental setup.</p>
Full article ">Figure 13
<p>Read ranges of the tag: the read range is measured using the Voyantic Tagformance Pro comprehensive tester.</p>
Full article ">Figure 14
<p>Tag uniform linear motion.</p>
Full article ">Figure 15
<p>Tag uniform curve motion.</p>
Full article ">Figure 16
<p>Tag uniform turnaround motion.</p>
Full article ">
15 pages, 4959 KiB  
Article
1,2,3-Triazoles: Controlled Switches in Logic Gate Applications
by Debanjana Ghosh, Austin Atkinson, Jaclyn Gibson, Harini Subbaiahgari, Weihua Ming, Clifford Padgett, Karelle S. Aiken and Shainaz M. Landge
Sensors 2023, 23(15), 7000; https://doi.org/10.3390/s23157000 - 7 Aug 2023
Cited by 6 | Viewed by 2438
Abstract
A 1,2,3-triazole-based chemosensor is used for selective switching in logic gate operations through colorimetric and fluorometric response mechanisms. The molecular probe, synthesized via “click chemistry”, is a non-fluorescent 1,4-diaryl-1,2,3-triazole with a phenol moiety (PTP). However, sensing fluoride TURNS ON the molecule’s fluorescence. The TURN-OFF step occurs through fluorescence quenching of the sensor when metal ions, e.g., Cu2+ and Zn2+, are added to the PTP-fluoride ensemble. A detailed characterization using Nuclear Magnetic Resonance (NMR) spectroscopy in a sequential titration study substantiated the photophysical characteristics of PTP established through UV-Vis absorption and fluorescence profiles. The combination of fluorescence OFF-ON-OFF sequences provides evidence that 1,2,3-triazoles are controlled switches applicable to multimodal logic operations. The “INH” gate was constructed from the fluorescence output of PTP with F− and Zn2+ as inputs. The “IMP” and “OR” gates were created from the colorimetric output responses, using the probe’s absorption with multiple inputs (F− and Zn2+ or Cu2+). The PTP sensor is a prime example of a “Write-Read-Erase-Read” mimic. Full article
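The gate behaviors described above reduce to simple Boolean functions once the chemical inputs are idealized as present/absent (a deliberate simplification; the input-to-output mapping below follows the abstract's wording, not measured spectra):

```python
def inh_gate(fluoride, zinc):
    """INHIBIT gate on the fluorescence output: emission is ON (1) only
    when F- is present and Zn2+ is absent."""
    return int(bool(fluoride) and not zinc)

def or_gate(fluoride, copper):
    """OR gate on the colorimetric output: the absorption change appears
    when either F- or Cu2+ (or both) is added."""
    return int(bool(fluoride) or bool(copper))

for a in (0, 1):
    for b in (0, 1):
        print(f"F-={a} M={b}  INH={inh_gate(a, b)}  OR={or_gate(a, b)}")
```

Running the loop prints the truth tables: the INH column is 1 only on the (1, 0) row, and the OR column is 0 only on the (0, 0) row, matching the fluorescence and absorption responses described above.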
(This article belongs to the Section Chemical Sensors)
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>A view of the molecular structure of <b>PTP</b>, displacement ellipsoids are drawn at the 50% probability level.</p>
Full article ">Figure 2
<p>UV-Vis absorption spectral titrations of <b>PTP</b> (~1.7 × 10<sup>−5</sup> mol/L) with the addition of Zn (II) perchlorate to <b>PTP</b>-TBAF ensemble in acetonitrile (TBAF was in 12 mole equivalents with respect to <b>PTP</b>). Concentrations of Zn (II) (0.12→1.4 mole equivalents of Zn<sup>2+</sup> with respect to <b>PTP</b>) are provided in the legends. Inset shows the absorption spectra of <b>PTP</b> (black line) and <b>PTP</b> with fluoride (red line).</p>
Full article ">Figure 3
<p>Fluorescence spectral variation of <b>PTP</b> (~1.7 × 10<sup>−5</sup> mol/L) with the addition of Zn (II) perchlorate to <b>PTP</b>–TBAF mixture in acetonitrile. The concentrations of Zn<sup>2+</sup> (0.12→5.3 mole equivalents of Zn<sup>2+</sup> with respect to <b>PTP</b>) are provided in the legends, λ<sub>exc</sub> = 290 nm. Image in the inset represents the fluorescence of <b>PTP</b>, <b>PTP</b>-F<sup>−</sup>, and <b>PTP</b>-F<sup>−</sup>-Zn<sup>2+</sup> under the long-wavelength (365 nm) UV lamp.</p>
Full article ">Figure 4
<p>Absorption spectral variation of: (<b>a</b>) <b>PTP</b> (~1.7 × 10<sup>−5</sup> mol/L) with the addition of copper (II) perchlorate hexahydrate (0→47 mole equivalents of Cu<sup>2+</sup> with respect to <b>PTP</b>) in acetonitrile; and (<b>b</b>) <b>PTP</b> + TBAF + Cu<sup>2+</sup> (0.48→52 mole equivalents of Cu<sup>2+</sup> with respect to <b>PTP</b>). Copper salt concentrations are provided in the legends.</p>
Full article ">Figure 5
<p>OFF-ON-OFF cycle of <b>PTP</b> with TBAF and Cu (II) perchlorate under (<b>a</b>) ambient light and (<b>b</b>) UV lamp. (<b>c</b>) Fluorescence spectra of <b>PTP</b> (~5.0 × 10<sup>−5</sup> mol/L) with the addition of Cu (II) perchlorate (0.48→1.9 mole equivalents of Cu<sup>2+</sup> with respect to <b>PTP</b>) to <b>PTP</b>–TBAF mixture in acetonitrile. The concentrations of Cu<sup>2+</sup> are provided in the legends, λ<sub>exc</sub> = 290 nm.</p>
Full article ">Figure 6
<p>Stacked <sup>1</sup>H-NMR (partial, 400 MHz, 1.8 × 10<sup>−1</sup> mol/L in CD<sub>3</sub>CN, RT, from bottom to top) spectra for pure <b>PTP</b> ((<b>a</b>), bottom), <b>PTP</b> with TBAF ((<b>b</b>), 1 eq.), and PTP with TBAF and Zinc perchlorate ((<b>c</b>), immediate) (1 eq.), <b>PTP</b> with TBAF and Zinc perchlorate ((<b>d</b>), overnight) (1 eq.), and <b>PTP</b> and Zinc perchlorate only ((<b>e</b>), top).</p>
Full article ">Figure 7
<p>Stacked <sup>13</sup>C-NMR (partial, 400 MHz, 1.8 × 10<sup>−1</sup> mol/L in CD<sub>3</sub>CN, RT, from bottom to top) spectra for pure <b>PTP</b> ((<b>a</b>), bottom), <b>PTP</b> with TBAF ((<b>b</b>), 1 eq.), and <b>PTP</b> with TBAF and Zinc perchlorate ((<b>c</b>), 1 eq., top).</p>
Full article ">Figure 8
<p>Stacked <sup>1</sup>H-NMR (partial, 400 MHz, 1.8 × 10<sup>−1</sup> mol/L in CD<sub>3</sub>CN, RT, from bottom to top) spectra for pure <b>PTP</b> ((<b>a</b>), bottom), <b>PTP</b> with TBAF ((<b>b</b>), 1 eq.), <b>PTP</b> with TBAF and Copper perchlorate ((<b>c</b>), 1 eq.), and <b>PTP</b> and Copper (II) perchlorate only ((<b>d</b>), top, 1 eq.).</p>
Full article ">Figure 9
<p>Stacked <sup>1</sup>H-NMR (partial, 400 MHz, 1.8 × 10<sup>−1</sup> mol/L in CD<sub>3</sub>CN, RT, from bottom to top) spectra for pure <b>PTP</b> (bottom), <b>PTP</b> with TBAF (1 eq.); and <b>PTP</b> with TBAF and upon addition of 1.0 equivalent amounts of selected metal perchlorates (Ag<sup>+</sup>, Al<sup>3+</sup>, Cd<sup>2+</sup>, Cr<sup>2+</sup>, Cu<sup>2+</sup>, Fe<sup>2+</sup>, Fe<sup>3+</sup>, Zn<sup>2+</sup> (immediate), Zn<sup>2+</sup> (top, overnight)).</p>
Full article ">Figure 10
<p>For the <b>PTP</b>–fluoride–zinc system, (<b>a</b>) the INH gate was constructed with <b>PTP</b>’s absorption at 345 nm, emission at 430 nm, and NMR signals (<span class="html-italic">δ</span> 8.95 ppm for <sup>1</sup>H and <span class="html-italic">δ</span> 156.2 ppm for <sup>13</sup>C). IMP gate was developed using the absorption output of <b>PTP</b> at 290 nm and the NMR signals (<span class="html-italic">δ</span> 8.65 ppm for <sup>1</sup>H and <span class="html-italic">δ</span> 149.1 ppm for <sup>13</sup>C). For the <b>PTP</b>–fluoride–copper system, (<b>b</b>) the OR gate represents the absorption outputs of <b>PTP</b> at 345 nm and 408 nm. Corresponding truth tables are depicted beside each function. (<b>c</b>) Feedback loop with sequential and reversible logic operations presenting “Write-Read-Erase-Read” behavior.</p>
Full article ">Scheme 1
<p>The proposed binding mode of PTP and fluoride anion and its reversibility after adding Zn (II) cation.</p>
Full article ">
23 pages, 10022 KiB  
Article
Research on Mechanical Equipment Fault Diagnosis Method Based on Deep Learning and Information Fusion
by Dongnian Jiang and Zhixuan Wang
Sensors 2023, 23(15), 6999; https://doi.org/10.3390/s23156999 - 7 Aug 2023
Cited by 4 | Viewed by 2255
Abstract
The transmission systems of mechanical equipment are complicated, and the interconnections between equipment components in a complex industrial environment can easily lead to faults. A multi-scale-sensor information fusion method is proposed, overcoming the shortcomings, in terms of diagnosis accuracy and efficiency, of fault diagnosis methods based on the analysis of a single signal. First, convolution kernels of different sizes are applied to extract multi-scale features from the original signals using a multi-scale one-dimensional convolutional neural network (1DCNN); this not only improves the learning ability of the features but also enables their fine characterization. Then, using Dempster–Shafer (DS) evidence theory, improved by a multi-sensor information fusion strategy, the feature signals extracted by the multi-scale 1DCNN are fused to realize fault detection and location. Finally, the experimental results of fault detection on a flash furnace show that the accuracy of the proposed method is more than 99.65% and that it achieves better fault diagnosis performance, which proves the feasibility and effectiveness of the proposed method. Full article
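The core of the DS evidence fusion step is Dempster's combination rule. A minimal sketch for singleton fault hypotheses follows; the sensor mass assignments are invented for illustration, and the paper's improved strategy additionally weights the multi-sensor evidence, which this fragment does not attempt:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments over singleton fault
    hypotheses with Dempster's rule: multiply agreeing masses and
    renormalise by 1 - K, where K is the total conflicting mass.
    (Composite hypothesis sets, which full DS theory allows, are omitted.)"""
    combined, conflict = {}, 0.0
    for h1, a in m1.items():
        for h2, b in m2.items():
            if h1 == h2:
                combined[h1] = combined.get(h1, 0.0) + a * b
            else:
                conflict += a * b
    k = 1.0 - conflict  # mass remaining on non-conflicting intersections
    return {h: v / k for h, v in combined.items()}

# Illustrative masses: vibration and acoustic channels both favour a gearbox fault.
m_vib = {"gearbox": 0.7, "bearing": 0.2, "normal": 0.1}
m_ac  = {"gearbox": 0.6, "bearing": 0.3, "normal": 0.1}
fused = dempster_combine(m_vib, m_ac)
print(max(fused, key=fused.get))  # -> gearbox
```

Because both channels agree, the fused mass on "gearbox" (0.42/0.49 ≈ 0.86) exceeds either individual belief, which is exactly the accuracy gain the fusion stage is meant to deliver.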
(This article belongs to the Section Fault Diagnosis & Sensors)
Show Figures

Figure 1

Figure 1
<p>Structure diagram of 1DCNN.</p>
Full article ">Figure 2
<p>Structure diagram of multi-scale 1DCNN.</p>
Full article ">Figure 3
<p>Structure diagram of multi-sensor fusion model.</p>
Full article ">Figure 4
<p>Flow chart of network model training.</p>
Full article ">Figure 5
<p>Architecture diagram of the system.</p>
Full article ">Figure 6
<p>Accuracy in the training process. (<b>a</b>) Accuracy of vibration sensor; (<b>b</b>) accuracy of acoustic sensor; (<b>c</b>) accuracy of temperature sensor.</p>
Full article ">Figure 7
<p>Loss value in the training process. (<b>a</b>) Loss value of vibration sensor; (<b>b</b>) loss value of acoustic sensor; (<b>c</b>) loss value of temperature sensor.</p>
Full article ">Figure 8
<p>Confusion matrix for fault diagnosis. (<b>a</b>) Confusion matrix of vibration sensor; (<b>b</b>) confusion matrix of acoustic sensor; (<b>c</b>) confusion matrix of temperature sensor.</p>
Full article ">Figure 9
<p>t-SNE dimensional reduction visualization. (<b>a</b>) t-SNE of original vibration signal; (<b>b</b>) t-SNE of original acoustic signal; (<b>c</b>) t-SNE of original temperature signal; (<b>d</b>) t-SNE of single scale vibration signal; (<b>e</b>) t-SNE of single scale acoustic signal; (<b>f</b>) t-SNE of single scale temperature signal; (<b>g</b>) t-SNE of multi-scale vibration signal; (<b>h</b>) t-SNE of multi-scale acoustic signal; (<b>i</b>) t-SNE of multi-scale temperature signal; (<b>j</b>) t-SNE of gearbox fault; (<b>k</b>) t-SNE of bearing fault; (<b>l</b>) t-SNE of generator fault.</p>
Full article ">Figure 10
<p>Accuracy of the LSTM model.</p>
Full article ">Figure 11
<p>LSTM confusion matrix.</p>
Full article ">Figure 12
<p>t-SNE visualization for LSTM.</p>
Full article ">Figure 13
<p>Comparison of precision.</p>
Full article ">Figure 14
<p>Comparison of recall.</p>
Full article ">Figure 15
<p>Comparison of specificity.</p>
Full article ">
13 pages, 1600 KiB  
Article
An Investigation of Surface EMG Shorts-Derived Training Load during Treadmill Running
by Kurtis Ashcroft, Tony Robinson, Joan Condell, Victoria Penpraze, Andrew White and Stephen P. Bird
Sensors 2023, 23(15), 6998; https://doi.org/10.3390/s23156998 - 7 Aug 2023
Cited by 1 | Viewed by 1548
Abstract
The purpose of this study was two-fold: (1) to determine the sensitivity of the sEMG shorts-derived training load (sEMG-TL) during different running speeds; and (2) to investigate the relationship between the oxygen consumption, heart rate (HR), rating of perceived exertion (RPE), accelerometry-based PlayerLoadTM (PL), and sEMG-TL during a running maximum oxygen uptake (V˙O2max) test. The study investigated ten healthy participants. On day one, participants performed a three-speed treadmill test at 8, 10, and 12 km·h−1 for 2 min at each speed. On day two, participants performed a V˙O2max test. Analysis of variance found significant differences in sEMG-TL at all three speeds (p < 0.05). A significantly weak positive relationship between sEMG-TL and %V˙O2max (r = 0.31, p < 0.05) was established, while significantly strong relationships for 8 out of 10 participants at the individual level (r = 0.72–0.97, p < 0.05) were found. Meanwhile, the accelerometry PL was not significantly related to %V˙O2max (p > 0.05) and only demonstrated significant correlations in 3 out of 10 participants at the individual level. Therefore, the sEMG shorts-derived training load was sensitive in detecting a work rate difference of at least 2 km·h−1. sEMG-TL may be an acceptable metric for the measurement of internal loads and could potentially be used as a surrogate for oxygen consumption. Full article
(This article belongs to the Special Issue Wearable Sensors for Health and Physiological Monitoring)
Show Figures

Figure 1

Figure 1
<p>Athos<sup>TM</sup> unit anterior view (<b>a</b>) and posterior view (set of contacts) (<b>b</b>). Exterior right leg (<b>c</b>) and interior left leg (<b>d</b>) view of sEMG shorts. Note: sEMG dry electrodes and electrode leads are composed of an inkjet-printed conductive polymer comprising an ether-based conductive thermoplastic polyurethane material. The electrodes are overlaid with a soft conductive silicone, which increases the stability of the electrode–skin interface.</p>
Full article ">Figure 2
<p>Movements for the sEMG calibration protocol to establish sEMG amplitude thresholds. Movements include prone knee flexion (<b>a</b>), prone hip extension (<b>b</b>), seated knee extension (<b>c</b>), and supine leg raise (<b>d</b>).</p>
Full article ">Figure 3
<p>Boxplot showing the distribution of sEMG-TL across different running speeds. sEMG-TL = surface electromyography training Load; a.u. = arbitrary units; low = 8 km⋅h<sup>−1</sup>; mod = 10 km⋅h<sup>−1</sup>; high = 12 km⋅h<sup>−1</sup>; black line = median, and black dots = 1 participant with a very high sEMG-TL.</p>
Full article ">Figure 4
<p>Scatterplot matrix of associations between variables: %<math display="inline"><semantics><mrow><mover accent="true"><mrow><mi mathvariant="normal">V</mi></mrow><mo mathvariant="normal">˙</mo></mover></mrow></semantics></math>O<sub>2max</sub> = percentage of maximum oxygen uptake; sEMG-TL = surface electromyography training load; RPE = rating of perceived exertion; HR = heart rate; PL = PlayerLoad. Each blue dot corresponds to individual measurements at each one-minute stage during the treadmill running test. Solid lines are the least-squares derived best-fitting lines. * <span class="html-italic">p</span> &lt; 0.05; ** <span class="html-italic">p</span> &lt; 0.01.</p>
Full article ">
20 pages, 3420 KiB  
Article
Multi-Camera-Based Human Activity Recognition for Human–Robot Collaboration in Construction
by Youjin Jang, Inbae Jeong, Moein Younesi Heravi, Sajib Sarkar, Hyunkyu Shin and Yonghan Ahn
Sensors 2023, 23(15), 6997; https://doi.org/10.3390/s23156997 - 7 Aug 2023
Cited by 8 | Viewed by 3959
Abstract
As the use of construction robots continues to increase, ensuring safety and productivity while working alongside human workers becomes crucial. To prevent collisions, robots must recognize human behavior in close proximity. However, single RGB or RGB-depth cameras have limitations, such as detection failure, sensor malfunction, occlusions, unconstrained lighting, and motion blur. Therefore, this study proposes a multiple-camera approach for human activity recognition during human–robot collaborative activities in construction. The proposed approach employs a particle filter to estimate the 3D human pose by fusing 2D joint locations extracted from multiple cameras, and applies a long short-term memory (LSTM) network to recognize ten activities associated with human–robot collaboration tasks in construction. The study compared the performance of human activity recognition models using one, two, three, and four cameras. Results showed that using multiple cameras enhances recognition performance, providing a more accurate and reliable means of identifying and differentiating between various activities. The results of this study are expected to contribute to the advancement of human activity recognition and its utilization in human–robot collaboration in construction. Full article
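The paper fuses 2D joint detections with a particle filter; as a simpler stand-in that shows why multiple calibrated views determine a 3D joint, here is plain least-squares (DLT) triangulation. The camera matrices and joint position are toy values, not the study's setup:

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Least-squares (DLT) triangulation of one 3D joint from its 2D
    detections in several calibrated cameras. proj_mats: list of 3x4
    projection matrices; points_2d: matching list of (u, v) coordinates."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])   # u * (row 3) - (row 1) = 0
        rows.append(v * P[2] - P[1])   # v * (row 3) - (row 2) = 0
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                          # null-space vector of the system
    return X[:3] / X[3]                 # dehomogenise

# Two toy cameras looking along +Z, one shifted 1 m along X:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
uv = [(X_true[0] / X_true[2], X_true[1] / X_true[2]),
      ((X_true[0] - 1.0) / X_true[2], X_true[1] / X_true[2])]
print(triangulate([P1, P2], uv))  # close to [0.5, 0.2, 4.0]
```

With noisy detections the least-squares solution degrades gracefully as views are added, which is the same intuition behind the one-to-four-camera comparison above; the particle filter extends this by also tracking the joint over time.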
(This article belongs to the Section Physical Sensors)
Show Figures

Figure 1

Figure 1
<p>MediaPipe pose detector: (<b>a</b>) all defined human 2D joint locations; (<b>b</b>) 2D joint locations extracted for the experiment.</p>
Full article ">Figure 2
<p>Network architecture of LSTM model for human activity recognition using 3D joints location.</p>
Full article ">Figure 3
<p>Laboratory experimental settings.</p>
Full article ">Figure 4
<p>Number of frames captured for each activity.</p>
Full article ">Figure 5
<p>Comparison of 3D pose estimation results. <b>Top</b>: using a single camera; <b>bottom</b>: using multiple cameras.</p>
Full article ">Figure 6
<p>Precision–recall curve for the LSTM classification model.</p>
Full article ">Figure 7
<p>Confusion matrix for the LSTM classification model.</p>
Full article ">Figure 8
<p>Overall performance metrics comparison.</p>
Full article ">
20 pages, 4009 KiB  
Article
A Feedback Control Sensing System of an Electrorheological Brake to Exert a Constant Pressing Force on an Object
by Tomasz Spotowski, Karol Osowski, Ireneusz Musiałek, Artur Olszak, Andrzej Kęsy, Zbigniew Kęsy and SeungBok Choi
Sensors 2023, 23(15), 6996; https://doi.org/10.3390/s23156996 - 7 Aug 2023
Cited by 2 | Viewed by 1411
Abstract
The paper presents the application of a strain gauge sensor and a viscous brake filled with an electrorheological (ER) fluid, a smart material whose rheological properties are controlled by an electric field applied to the fluid domain. For the experimental tests, a cylindrical viscous brake was designed. The tests were carried out on a test stand specially prepared for this purpose, suitable for examining the impact of the rotational speed of the input shaft and of the electric voltage supplied to the viscous brake on the pressing force, taking into account the ER fluid temperature and the brake fluid filling level. On the basis of the experimental results, a viscous brake control system exerting a constant pressing force, with feedback from a strain gauge sensor and based on a programmable logic controller, was designed and implemented. This system, using its own control algorithm, ensured a controlled pressing force within the assumed range during both constant and follow-up control. The measurement results obtained during the tests of the viscous brake were presented as time courses showing the changes of the pressing force, the electric voltage applied to the brake, and the rotational speed of the brake input shaft. The developed ER fluid brake control system with feedback was tested for constant and follow-up control, taking into account the impact of the working fluid temperature. During the tests it was possible to obtain a maximum pressing force of 50 N for an electric voltage limited to 2.5 kV. The resultant error was lower than 1 N, and the adjustment time after changing the desired value of the force was around 1.5 s. The correct operation of both the brake and the control system, as well as the consistency of the pressing force value and adjustment time, were confirmed.
The main technical contribution described in this article is the design of a new type of DECPF and a new method for its control, using a specifically programmed programmable logic controller that simulates the operation of a proportional–integral (PI) controller. Full article
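The feedback loop of Figure 12, a PI controller driving the brake voltage until the measured force matches the desired force, can be mimicked with a discrete PI loop on a toy first-order plant. The gains, time constant, and plant model are illustrative assumptions, not the paper's identified dynamics:

```python
def simulate_pi(setpoint, kp, ki, steps=1000, dt=0.01):
    """Discrete PI loop regulating a first-order plant tau*dF/dt = u - F,
    standing in for the ER brake: u plays the role of the commanded voltage
    effect and F the pressing force. All numbers are illustrative."""
    tau = 0.1                           # assumed plant time constant, s
    force, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - force        # resultant error e
        integral += error * dt
        u = kp * error + ki * integral  # PI control signal
        force += (u - force) / tau * dt # plant update (explicit Euler)
    return force

print(round(simulate_pi(30.0, kp=2.0, ki=5.0), 2))  # -> 30.0
```

With these gains the loop settles on the setpoint well within the 10 s simulated, loosely mirroring the roughly 1.5 s adjustment time reported above; the integral term is what removes the steady-state error that a pure proportional controller would leave.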
Show Figures

Figure 1

Figure 1
<p>DECPF construction scheme: 1—cylindrical viscous brake, 2—lever, 3—force sensor, 4—object.</p>
Full article ">Figure 2
<p>Construction scheme of a cylindrical viscous brake: 1—input shaft, 2—slip rings, 3—sealing ring, 4—housing, 5—ER fluid, 6—cylinders connected to input shaft, 7—slip rings, 8—cylinders connected to output shaft, 9—bearings, 10—output shaft, 11—supports, 12—temperature sensor.</p>
Full article ">Figure 3
<p>Slip rings and brushes: (<b>a</b>)—the driving part, (<b>b</b>)—the driven part.</p>
Full article ">Figure 4
<p>Test bench scheme: 1—PLC controller, 2—PC, 3—inverter, 4—electric motor, 5—high-voltage power supply, 6—viscous brake with ER fluid, 7—connecting coupling, 8—slip rings and brushes, 9—temperature sensor, 10—lever, 11—force sensor.</p>
Full article ">Figure 5
<p>The course of changes over time <span class="html-italic">t</span> of: (<b>a</b>)—angular velocity <span class="html-italic">ω</span>, (<b>b</b>)—pressing force <span class="html-italic">F</span>, (<b>c</b>)—temperature <span class="html-italic">T</span>.</p>
Full article ">Figure 6
<p>The course of changes over time <span class="html-italic">t</span> of: (<b>a</b>)—angular velocity <span class="html-italic">ω</span>, (<b>b</b>)—pressing force <span class="html-italic">F</span>.</p>
Full article ">Figure 7
<p>Course of changes in time <span class="html-italic">t</span> for temperature <span class="html-italic">T</span> = 50 °C of: (<b>a</b>)—angular velocity <span class="html-italic">ω</span>, (<b>b</b>)—high voltage <span class="html-italic">U</span>, (<b>c</b>)—pressing force Δ<span class="html-italic">F</span>.</p>
Full article ">Figure 8
<p>Course of changes in time <span class="html-italic">t</span> of the pressing force <span class="html-italic">F</span> for a step change in the electric voltage <span class="html-italic">U</span>.</p>
Full article ">Figure 9
<p>The dependence of the pressing force <span class="html-italic">F</span> on the electric voltage <span class="html-italic">U</span> for the temperature ranges: (<b>a</b>)—25 °C to 38 °C, (<b>b</b>)—35 °C to 45 °C.</p>
Full article ">Figure 10
<p>The course of the voltage <span class="html-italic">U</span> in time <span class="html-italic">t</span>: 1—correct, 2—electric breakdowns, 3—desired electric voltage.</p>
Full article ">Figure 11
<p>The dependence of the angular velocity <span class="html-italic">ω<sub>F</sub></span> on the desired pressing force <span class="html-italic">F<sub>d</sub></span>.</p>
Full article ">Figure 12
<p>Control system scheme of DECPF: 1—dependence of <span class="html-italic">ω<sub>F</sub></span> on <span class="html-italic">F<sub>d</sub></span>, 2—PI controller, 3—electric motor, 4—high-voltage power supply, 5—force sensor, 6—brake with the ER fluid, <span class="html-italic">F<sub>d</sub></span>—reference torque, <span class="html-italic">e</span>—resultant error, <span class="html-italic">u</span>—control signal, <span class="html-italic">U</span>—control voltage.</p>
Full article ">Figure 13
<p>Course of changes in time <span class="html-italic">t</span> when controlling: (<b>a</b>)—pressing force <span class="html-italic">F</span>, 1—desired force, 2—measured force; (<b>b</b>)—angular velocity <span class="html-italic">ω</span>; (<b>c</b>)—high voltage <span class="html-italic">U</span>.</p>
Full article ">Figure 14
<p>Corrected control system scheme of DECPF: 1—dependence of <span class="html-italic">ω<sub>F</sub></span> on <span class="html-italic">F<sub>d</sub></span>, 2—PI controller, 3—electric motor, 4—high-voltage power supply, 5—force sensor, 6—brake with the ER fluid, 7—correction block, <span class="html-italic">F<sub>d</sub></span>—reference torque, <span class="html-italic">e</span>—resultant error, <span class="html-italic">u</span>—control signal, <span class="html-italic">U</span>—control voltage.</p>
Full article ">
13 pages, 5772 KiB  
Article
Colorimetric Chemosensor for Cu2+ and Fe3+ Based on a meso-Triphenylamine-BODIPY Derivative
by Sónia C. S. Pinto, Raquel C. R. Gonçalves, Susana P. G. Costa and M. Manuela M. Raposo
Sensors 2023, 23(15), 6995; https://doi.org/10.3390/s23156995 - 7 Aug 2023
Cited by 4 | Viewed by 1469
Abstract
Optical chemosensors are a practical tool for the detection and quantification of important analytes in biological and environmental fields, such as Cu2+ and Fe3+. To the best of our knowledge, a BODIPY derivative capable of detecting Cu2+ and Fe3+ simultaneously through a colorimetric response has not yet been described in the literature. In this work, a meso-triphenylamine-BODIPY derivative is reported for the highly selective detection of Cu2+ and Fe3+. In the preliminary chemosensing study, this compound showed a significant color change from yellow to blue–green in the presence of Cu2+ and Fe3+. With only one equivalent of cation, a change in the absorption band of the compound and the appearance of a new band around 700 nm were observed. Furthermore, only 10 equivalents of Cu2+/Fe3+ were needed to reach the absorption plateau in the UV-visible titrations. Compound 1 showed excellent sensitivity toward Cu2+ and Fe3+ detection, with LODs of 0.63 µM and 1.06 µM, respectively. The binding constant calculation indicated a strong complexation between compound 1 and Cu2+/Fe3+ ions. The 1H and 19F NMR titrations showed that an increasing concentration of cations induced a broadening and shifting of the aromatic region peaks, as well as the disappearance of the original fluorine peaks of the BODIPY core, which suggests that the ligand–metal (1:2) interaction may occur through the triphenylamino group and the BODIPY core. Full article
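LOD figures like the 0.63 µM quoted above conventionally come from the 3σ/slope rule applied to the linear calibration region. The sketch below uses invented blank-noise and slope values chosen only to land in the reported range; they are not the paper's data:

```python
def limit_of_detection(blank_sd, slope):
    """3-sigma limit of detection (mol/L): LOD = 3 * sd(blank) / slope,
    with the calibration slope in absorbance units per mol/L."""
    return 3.0 * blank_sd / slope

# Invented illustrative numbers: blank noise 0.0044 a.u., slope 2.1e4 M^-1.
lod_molar = limit_of_detection(0.0044, 2.1e4)
print(round(lod_molar * 1e6, 2))  # in micromolar -> 0.63
```

The same two inputs, the noise of repeated blank measurements and the slope of absorbance versus concentration, are all that is needed to reproduce an LOD of this kind from any of the titration plots above.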
(This article belongs to the Special Issue Colorimetric Sensors: Methods and Applications)
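The LODs quoted above (0.63 µM and 1.06 µM) are of the kind usually extracted from linear absorbance-vs-concentration fits such as those in Figures 4B and 5B. A minimal sketch of that standard calculation, assuming the common LOD = k·s/m rule; the function name, data, and choice of k are illustrative, not the paper's actual procedure or values:

```python
import numpy as np

def lod_from_calibration(conc_uM, absorbance, k=3.3):
    """Estimate a limit of detection from a linear calibration curve.

    Applies the common rule LOD = k * s / m, where m is the slope of the
    absorbance-vs-concentration fit and s is the standard deviation of the
    fit residuals (a stand-in for blank noise). k = 3.3 is typical; some
    authors use k = 3. Illustrative only -- not the paper's procedure.
    """
    conc = np.asarray(conc_uM, dtype=float)
    a = np.asarray(absorbance, dtype=float)
    m, b = np.polyfit(conc, a, 1)           # linear fit: A = m*c + b
    resid = a - (m * conc + b)
    s = resid.std(ddof=2)                   # 2 fitted parameters
    return k * s / m

# Synthetic calibration data (illustrative values, not the paper's)
rng = np.random.default_rng(0)
c = np.linspace(0, 10, 11)                  # concentration, µM
A = 0.05 * c + 0.02 + rng.normal(0, 0.002, c.size)
print(f"LOD ≈ {lod_from_calibration(c, A):.3f} µM")
```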
Show Figures

Figure 1
<p>Structure of BODIPY <b>1</b>.</p>
Full article ">Figure 2
<p>Preliminary chemosensory study of BODIPY <b>1</b> in ACN upon interaction with different cations under natural light (<b>above</b>), and UV radiation at λ<sub>max</sub> = 312 nm (<b>below</b>).</p>
Full article ">Figure 3
<p>(<b>A</b>) Absorption spectra of compound <b>1</b> with 10 equivalents of several ions in ACN. (<b>B</b>) Absorbance data at 670 nm for the ions of interest as a bar diagram.</p>
Full article ">Figure 4
<p>(<b>A</b>) Normalized UV-vis absorption titration of <b>1</b> (1 × 10<sup>−5</sup> M) with Fe<sup>3+</sup> (0–100 μM) in ACN. (The inset represents the normalized absorption at 492 nm as a function of Fe<sup>3+</sup> equivalents). (<b>B</b>) The linear relationship between the Fe<sup>3+</sup> concentration and the absorbance of <b>1</b> (at 697 nm).</p>
Full article ">Figure 5
<p>(<b>A</b>) Normalized UV-vis absorption titration of <b>1</b> (1 × 10<sup>−5</sup> M) with Cu<sup>2+</sup> (0–100 μM) in ACN. (The inset represents the normalized absorption at 492 nm as a function of Cu<sup>2+</sup> equivalents). (<b>B</b>) The linear relationship between the Cu<sup>2+</sup> concentration and the absorbance of <b>1</b> (at 700 nm).</p>
Full article ">Figure 6
<p>(<b>A</b>) A Job’s plot of <b>1</b>-Fe<sup>3+</sup> in ACN. The absorbance was recorded at 693 nm. (<b>B</b>) A Benesi-Hildebrand diagram based on a spectrophotometric titration of <b>1</b> with Fe<sup>3+</sup> at 697 nm.</p>
Full article ">Figure 7
<p>(<b>A</b>) Job’s plot of <b>1</b>-Cu<sup>2+</sup> in ACN. The absorbance was recorded at 700 nm. (<b>B</b>) Benesi–Hildebrand diagram from spectrophotometric titration of <b>1</b> with Cu<sup>2+</sup> at 700 nm.</p>
Full article ">Figure 8
<p><sup>1</sup>H NMR spectra of <b>1</b> in the presence of increased amounts of Fe<sup>3+</sup> and Cu<sup>2+</sup> in DMSO-<span class="html-italic">d</span><sub>6</sub>.</p>
Full article ">Figure 9
<p><sup>19</sup>F NMR spectra of BODIPY <b>1</b> in the absence and presence of increasing amounts of Cu<sup>2+</sup> in ACN-<span class="html-italic">d</span><sub>3</sub>.</p>
Full article ">Scheme 1
<p>Synthesis of BODIPY derivative <b>1</b>.</p>
Full article ">
24 pages, 6678 KiB  
Article
Acoustic Emission-Based Analysis of Damage Mechanisms in Filament Wound Fiber Reinforced Composite Tubes
by Parsa Ghahremani, Mehdi Ahmadi Najafabadi, Sajad Alimirzaei and Mohammad Fotouhi
Sensors 2023, 23(15), 6994; https://doi.org/10.3390/s23156994 - 7 Aug 2023
Viewed by 1738
Abstract
This study investigates the mechanical behavior and damage mechanisms of thin-walled glass/epoxy filament wound tubes under quasi-static lateral loads. The novelty is that the tubes are reinforced in critical areas using strip composite patches to provide a topology-optimized tube, and their damage mechanisms [...] Read more.
This study investigates the mechanical behavior and damage mechanisms of thin-walled glass/epoxy filament wound tubes under quasi-static lateral loads. The novelty is that the tubes are reinforced in critical areas using strip composite patches to provide a topology-optimized tube, and their damage mechanisms and mechanical performance are compared to that of un-reinforced (reference) tubes. To detect the types of damage mechanisms and their progression, the Acoustic Emission (AE) method is employed, accompanied by data clustering analysis. The loading conditions are simulated using the finite element method, and the results are validated through experimental testing. The findings confirm that the inclusion of reinforcing patches improves the stress distribution, leading to enhanced load carrying capacity, stiffness, and energy absorption. Compared to the reference tubes, the reinforced tubes exhibit a remarkable increase of 23.25% in the load carrying capacity, 33.46% in the tube’s stiffness, and 23.67% in energy absorption. The analysis of the AE results reveals that both the reference and reinforced tubes experience damage mechanisms such as matrix cracking, fiber-matrix debonding, delamination, and fiber fracture. However, after matrix cracking, delamination becomes dominant in the reinforced tubes, while fiber failure prevails in the reference tubes. Moreover, by combining the AE energy and mechanical energy using the Sentry function, it is observed that the reinforced tubes exhibit a lower rate of damage propagation, indicating superior resistance to damage propagation compared to the reference tubes. Full article
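The Sentry function mentioned above combines mechanical and acoustic-emission energies; in the AE literature it is commonly defined as f(x) = ln(Es(x)/Ea(x)), the log ratio of strain energy to cumulative AE energy at displacement x. A hedged numpy sketch of that definition on synthetic data (array names and values are illustrative, not the paper's measurements):

```python
import numpy as np

def sentry(displacement, force, ae_energy_cum):
    """Sentry function f(x) = ln(Es(x) / Ea(x)).

    Es(x) is the mechanical strain energy (area under the force-displacement
    curve up to x) and Ea(x) is the cumulative acoustic-emission energy.
    A decreasing f(x) signals accelerating damage. Inputs are assumed to be
    sampled on the same displacement grid; all names here are illustrative.
    """
    x = np.asarray(displacement, float)
    F = np.asarray(force, float)
    Ea = np.asarray(ae_energy_cum, float)
    # cumulative trapezoidal integral of F dx -> strain energy Es(x)
    Es = np.concatenate(([0.0], np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(x))))
    with np.errstate(divide="ignore"):
        return np.log(Es / Ea)

x = np.linspace(0, 10, 6)                           # displacement, mm
F = 100.0 * x                                       # force, N (idealized linear loading)
Ea = np.array([1e-3, 0.5, 3.0, 10.0, 80.0, 400.0])  # AE energy, a.u., accelerating
f = sentry(x, F, Ea)
print(np.round(f[1:], 2))  # steadily decreasing -> growing rate of damage
```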
Show Figures

Figure 1
<p>A schematic of the test setup and the location of the reinforcing patches.</p>
Full article ">Figure 2
<p>Different steps of the sample preparation and characterization.</p>
Full article ">Figure 3
<p>Force-displacement curves of the reference samples.</p>
Full article ">Figure 4
<p>Schematic of how to calculate the bending stiffness of the composite tube.</p>
Full article ">Figure 5
<p>Force-displacement curves of the reinforced samples.</p>
Full article ">Figure 6
<p>Comparison of the experimental and numerical force-displacement curves for the reference and reinforced samples.</p>
Full article ">Figure 7
<p>Comparing the deformation process of a reference sample with the FE simulation results.</p>
Full article ">Figure 8
<p>Comparing the deformation process of a reinforced sample with the FE simulation results.</p>
Full article ">Figure 9
<p>The σ<sub>x</sub> and σ<sub>y</sub> stress distributions: (<b>a</b>) the reference sample, (<b>b</b>) the reinforced sample.</p>
Full article ">Figure 10
<p>Different Hashin damage parameters in the reference sample: (<b>a</b>) Matrix compressive mechanism, (<b>b</b>) Matrix tensile mechanism, (<b>c</b>) fiber compressive mechanism, (<b>d</b>) fiber tensile mechanism.</p>
Full article ">Figure 11
<p>Different Hashin damage parameters in the reinforced sample: (<b>a</b>) Matrix compressive mechanism, (<b>b</b>) Matrix tensile mechanism, (<b>c</b>) fiber compressive mechanism, (<b>d</b>) fiber tensile mechanism.</p>
Full article ">Figure 12
<p>Experimental and numerical comparison of the patch separation moment.</p>
Full article ">Figure 13
<p>Clustering map: (<b>a</b>) for the reference samples, (<b>b</b>) for the reinforced samples.</p>
Full article ">Figure 14
<p>Sentry function graph for a typical reference sample (S4) and reinforced sample (P2).</p>
Full article ">
16 pages, 3728 KiB  
Article
Designing a Low-Cost System to Monitor the Structural Behavior of Street Lighting Poles in Smart Cities
by Antonino Quattrocchi, Francesco Martella, Valeria Lukaj, Rocco De Leo, Massimo Villari and Roberto Montanini
Sensors 2023, 23(15), 6993; https://doi.org/10.3390/s23156993 - 7 Aug 2023
Cited by 3 | Viewed by 1934
Abstract
The structural collapse of a street lighting pole represents an aspect that is often underestimated and unpredictable, but of relevant importance for the safety of people and things. These events are complex to evaluate since several sources of damage are involved. In addition, [...] Read more.
The structural collapse of a street lighting pole represents an aspect that is often underestimated and unpredictable, but of relevant importance for the safety of people and things. These events are complex to evaluate since several sources of damage are involved. In addition, traditional inspection methods are ineffective, do not correctly quantify the residual life of poles, and are inefficient, requiring enormous costs associated with the vastness of elements to be investigated. An advantageous alternative is to adopt a distributed type of Structural Health Monitoring (SHM) technique based on the Internet of Things (IoT). This paper proposes the design of a low-cost system, which is also easy to integrate in current infrastructures, for monitoring the structural behavior of street lighting poles in Smart Cities. At the same time, this device collects previous structural information and offers some secondary functionalities related to its application, such as meteorological information. Furthermore, this paper intends to lay the foundations for the development of a method that is able to avoid the collapse of the poles. Specifically, the implementation phase is described in the aspects concerning low-cost devices and sensors for data acquisition and transmission and the strategies of information technologies (ITs), such as Cloud/Edge approaches, for storing, processing and presenting the achieved measurements. Finally, an experimental evaluation of the metrological performance of the sensing features of this system is reported. The main results highlight that the employment of low-cost equipment and open-source software has a double implication. On one hand, they entail advantages such as limited costs and flexibility to accommodate the specific necessities of the interested user. 
On the other hand, the used sensors require an indispensable metrological evaluation of their performance due to encountered issues relating to calibration, reliability and uncertainty. Full article
(This article belongs to the Section Internet of Things)
Show Figures

Figure 1
<p>Representative schema of the electrical and electronic components of the monitoring system.</p>
Full article ">Figure 2
<p>Installation and details of the monitoring system.</p>
Full article ">Figure 3
<p>Characterization setup for the sensors of (<b>a</b>) ambient temperature and humidity, (<b>b</b>) visible light intensity, (<b>c</b>) acceleration and (<b>d</b>) tilt of the pole.</p>
Full article ">Figure 4
<p>Schematic representation of the IT architecture.</p>
Full article ">Figure 5
<p>Schematic representation of the deployed scenario.</p>
Full article ">Figure 6
<p>Typical interface with (<b>a</b>) indicators and (<b>c</b>) graphs customized by the users on Grafana for desktop terminals, and (<b>b</b>) a typical alert on the Telegram smartphone app.</p>
Full article ">Figure 7
<p>Calibration curves with bars of standard deviation and comparison between the trends of the uncertainties computed by the standard deviations of the collected measurements and by the declared accuracy for (<b>a</b>,<b>b</b>) ambient temperature at an RH of 50% and (<b>c</b>,<b>d</b>) ambient humidity at 25 °C, and for (<b>e</b>,<b>f</b>) visible light intensity at 50% RH and 25 °C of the proposed system.</p>
Full article ">Figure 8
<p>Calibration curves with bars of standard deviation and comparison between the trends of the uncertainties computed by the standard deviations of the collected measurements and by the declared accuracy for the MPU-6050 sensor in the GY-521 module for the vertical acceleration (<b>a</b>,<b>b</b>) and tilt (<b>c</b>,<b>d</b>) of the pole at 50% RH and 25 °C.</p>
Full article ">Figure 9
<p>Calibration curves with bars of standard deviation and comparison between the trends of the uncertainties computed by the standard deviations of the collected measurements and by the declared accuracy for the MPU-6050 sensor in the GY-521_N2 module for the vertical acceleration (<b>a</b>,<b>b</b>) and tilt (<b>c</b>,<b>d</b>) of the pole at 50% RH and 25 °C.</p>
Full article ">
19 pages, 15305 KiB  
Article
Neural-Network-Based Localization Method for Wi-Fi Fingerprint Indoor Localization
by Hui Zhu, Li Cheng, Xuan Li and Haiwen Yuan
Sensors 2023, 23(15), 6992; https://doi.org/10.3390/s23156992 - 7 Aug 2023
Cited by 4 | Viewed by 3089
Abstract
Despite the high demand for Internet location service applications, Wi-Fi indoor localization often suffers from time- and labor-intensive data collection processes. This study proposes a novel indoor localization model that utilizes fingerprinting technology based on a convolutional neural network to address this issue. [...] Read more.
Despite the high demand for Internet location service applications, Wi-Fi indoor localization often suffers from time- and labor-intensive data collection processes. This study proposes a novel indoor localization model that utilizes fingerprinting technology based on a convolutional neural network to address this issue. The aim is to enhance Wi-Fi indoor localization by streamlining the data collection process. The proposed indoor localization model leverages a 3D ray-tracing technique to simulate the wireless received signal strength intensity (RSSI) across the field. By incorporating this advanced technique, the model aims to improve the accuracy and efficiency of Wi-Fi indoor localization. In addition, an RSSI heatmap fingerprint dataset generated from the ray-tracing simulation is trained on the proposed indoor localization model. To optimize and evaluate the model’s performance in real-world scenarios, experiments were conducted using simulated datasets obtained from the publicly available databases of UJIIndoorLoc and Wireless InSite. The results show that the new approach solves the problem of resource limitation while achieving a verification accuracy of up to 99.09%. Full article
(This article belongs to the Section Navigation and Positioning)
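The fingerprint heatmaps described above can be thought of as per-AP RSSI rasters that a CNN consumes as image channels. A minimal sketch of that rasterization step, with grid size, extent, dBm floor, and normalization chosen purely for illustration (not the paper's parameters):

```python
import numpy as np

def rssi_to_heatmap(points_xy, rssi_dbm, grid=(32, 32), extent=(0, 20, 0, 20)):
    """Rasterize scattered RSSI samples into a fixed-size heatmap 'image'.

    One such map per access point can be stacked as channels and fed to a
    CNN, mimicking the fingerprint-image idea. Grid size, extent, and the
    -100 dBm no-signal floor are illustrative, not the paper's choices.
    """
    x0, x1, y0, y1 = extent
    h, w = grid
    img = np.full(grid, -100.0)                       # no-signal floor, dBm
    pts = np.asarray(points_xy, float)
    ix = np.clip(((pts[:, 0] - x0) / (x1 - x0) * w).astype(int), 0, w - 1)
    iy = np.clip(((pts[:, 1] - y0) / (y1 - y0) * h).astype(int), 0, h - 1)
    for r, c, v in zip(iy, ix, rssi_dbm):
        img[r, c] = max(img[r, c], v)                 # keep the strongest sample
    return (img + 100.0) / 70.0                       # map ~[-100, -30] dBm to [0, 1]

samples = [(1.0, 1.0), (10.0, 5.0), (19.0, 19.0)]     # sample positions, m
rssi = [-40.0, -65.0, -90.0]                          # simulated RSSI, dBm
fp = rssi_to_heatmap(samples, rssi)
print(fp.shape, float(fp.max()))
```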
Show Figures

Figure 1
<p>Traditional indoor positioning process based on fingerprinting technology.</p>
Full article ">Figure 2
<p>Proposed indoor fingerprint-based localization process.</p>
Full article ">Figure 3
<p>Structure of a convolutional neural network.</p>
Full article ">Figure 4
<p>Indoor localization process for large-scale scenarios.</p>
Full article ">Figure 5
<p>Indoor localization process for small-scale scenarios.</p>
Full article ">Figure 6
<p>Fingerprint grayscale maps of two different access points in the localization area.</p>
Full article ">Figure 7
<p>Fingerprint heatmaps of two different access points in the localization area.</p>
Full article ">Figure 8
<p>Structure of proposed model of an indoor positioning network for training offline dataset.</p>
Full article ">Figure 9
<p>RSSI changes at different APs throughout the day.</p>
Full article ">Figure 10
<p>Simulated maps of rooms 407 (<b>left</b>) and 512 (<b>right</b>).</p>
Full article ">Figure 11
<p>Real images of rooms 407 (<b>left</b>) and 512 (<b>right</b>).</p>
Full article ">Figure 12
<p>Path loss in real and simulated scenarios.</p>
Full article ">Figure 13
<p>Average floor accuracy of each localization method with the UJIIndoorLoc datasets.</p>
Full article ">Figure 14
<p>Accuracy and loss of the proposed method for 30 iterations of training and validation sets from the UJIIndoorLoc dataset.</p>
Full article ">Figure 15
<p>Accuracy of traditional CNN and MobileNet models for 30 iterations of training and validation sets from the UJIIndoorLoc dataset.</p>
Full article ">Figure 16
<p>Accuracy and loss of the proposed method for 30 iterations of training and validation sets with the simulated dataset.</p>
Full article ">Figure 17
<p>Accuracies of the traditional CNN and MobileNet for 30 iterations of training and validation sets with the simulated dataset.</p>
Full article ">Figure 18
<p>Accuracy and loss of the proposed method for 30 iterations of training and validation sets with the measured dataset.</p>
Full article ">Figure 19
<p>Accuracies of the traditional CNN and MobileNet for 30 iterations of training and validation sets with the measured dataset.</p>
Full article ">
20 pages, 477 KiB  
Article
On Scalability of FDD-Based Cell-Free Massive MIMO Framework
by Beenish Hassan, Sobia Baig and Saad Aslam
Sensors 2023, 23(15), 6991; https://doi.org/10.3390/s23156991 - 7 Aug 2023
Viewed by 1456
Abstract
Cell-free massive multiple-input multiple-output (MIMO) systems have the potential of providing joint services, including joint initial access, efficient clustering of access points (APs), and pilot allocation to user equipment (UEs) over large coverage areas with reduced interference. In cell-free massive MIMO, a large [...] Read more.
Cell-free massive multiple-input multiple-output (MIMO) systems have the potential of providing joint services, including joint initial access, efficient clustering of access points (APs), and pilot allocation to user equipment (UEs) over large coverage areas with reduced interference. In cell-free massive MIMO, a large coverage area corresponds to the provision and maintenance of the scalable quality of service requirements for an infinitely large number of UEs. The research in cell-free massive MIMO is mostly focused on time division duplex mode due to the availability of channel reciprocity which aids in avoiding feedback overhead. However, the frequency division duplex (FDD) protocol still dominates the current wireless standards, and the provision of angle reciprocity aids in reducing this overhead. The challenge of providing a scalable cell-free massive MIMO system in an FDD setting is also prevalent, since computational complexity regarding signal processing tasks, such as channel estimation, precoding/combining, and power allocation, becomes prohibitively high with an increase in the number of UEs. In this work, we consider an FDD-based scalable cell-free network with angular reciprocity and a dynamic cooperation clustering approach. We have proposed scalability for our FDD cell-free and performed a comparative analysis with reference to channel estimation, power allocation, and precoding/combining techniques. We present expressions for scalable spectral efficiency, angle-based precoding/combining schemes and provide a comparison of overhead between conventional and scalable angle-based estimation as well as combining schemes. Simulations confirm that the proposed scalable cell-free network based on an FDD scheme outperforms the conventional matched filtering scheme based on scalable precoding/combining schemes. 
The angle-based LP-MMSE in the FDD cell-free network provides 14.3% improvement in spectral efficiency and 11.11% improvement in energy efficiency compared to the scalable MF scheme. Full article
(This article belongs to the Section Communications)
Show Figures

Figure 1
<p>DCC framework for overlapping clustering scheme.</p>
Full article ">Figure 2
<p>Comparison of spectral efficiency for angle-based LP-MMSE with increase/decrease in transmit power.</p>
Full article ">Figure 3
<p>Comparison of downlink power control for angle-based MF, LP-MMSE, and MMSE schemes.</p>
Full article ">Figure 4
<p>Comparison of uplink power control for angle-based MF, LP-MMSE, and MMSE schemes.</p>
Full article ">Figure 5
<p>CDF of spectral efficiency for uplink, <span class="html-italic">N</span> = 100 APs and <span class="html-italic">M</span> = 8 antennas/AP.</p>
Full article ">Figure 6
<p>CDF of spectral efficiency for uplink, <span class="html-italic">N</span> = 200 APs and <span class="html-italic">M</span> = 4 antennas/AP.</p>
Full article ">Figure 7
<p>CDF of spectral efficiency for downlink, <span class="html-italic">N</span> = 100 APs and <span class="html-italic">M</span> = 8 antennas/AP.</p>
Full article ">Figure 8
<p>CDF of spectral efficiency for downlink, <span class="html-italic">N</span> = 200 APs and <span class="html-italic">M</span> = 4 antennas/AP.</p>
Full article ">Figure 9
<p>Comparison of energy efficiency for angle-based MF and angle-based LP-MMSE.</p>
Full article ">
22 pages, 7006 KiB  
Article
Cyclic Generative Attention-Adversarial Network for Low-Light Image Enhancement
by Tong Zhen, Daxin Peng and Zhihui Li
Sensors 2023, 23(15), 6990; https://doi.org/10.3390/s23156990 - 7 Aug 2023
Cited by 1 | Viewed by 1607
Abstract
Images captured under complex conditions frequently have low quality, and image performance obtained under low-light conditions is poor and does not satisfy subsequent engineering processing. The goal of low-light image enhancement is to restore low-light images to normal illumination levels. Although many methods [...] Read more.
Images captured under complex conditions frequently have low quality, and image performance obtained under low-light conditions is poor and does not satisfy subsequent engineering processing. The goal of low-light image enhancement is to restore low-light images to normal illumination levels. Although many methods have emerged in this field, they are inadequate for dealing with noise, color deviation, and exposure issues. To address these issues, we present CGAAN, a new unsupervised generative adversarial network that combines a new attention module and a new normalization function based on cycle generative adversarial networks and employs a global–local discriminator trained with unpaired low-light and normal-light images and stylized region loss. Our attention generates feature maps via global and average pooling, and the weights of different feature maps are calculated by multiplying learnable parameters and feature maps in the appropriate order. These weights indicate the significance of corresponding features. Specifically, our attention is a feature map attention mechanism that improves the network’s feature-extraction ability by distinguishing the normal light domain from the low-light domain to obtain an attention map to solve the color bias and exposure problems. The style region loss guides the network to more effectively eliminate the effects of noise. The new normalization function we present preserves more semantic information while normalizing the image, which can guide the model to recover more details and improve image quality even further. The experimental results demonstrate that the proposed method can produce good results that are useful for practical applications. Full article
(This article belongs to the Section Optical Sensors)
Show Figures

Figure 1
<p>Comparison of our method with other methods after enhancement. Our method preserves color information and is closest to the real image in the area denoted by red boxes, yielding good results. (<b>a</b>) Input, (<b>b</b>) DALE, (<b>c</b>) DRBN, (<b>d</b>) RUAS, (<b>e</b>) Ground Truth, (<b>f</b>) ours.</p>
Full article ">Figure 2
<p>The CGAAN network architecture. <math display="inline"><semantics><mi>X</mi></semantics></math> and <math display="inline"><semantics><mi>Y</mi></semantics></math> represent the low-light and normal-light images, respectively.<math display="inline"><semantics><mrow><msub><mi>G</mi><mrow><mi>X</mi><mo>→</mo><mi>Y</mi></mrow></msub></mrow></semantics></math> and <math display="inline"><semantics><mrow><msub><mi>G</mi><mrow><mi>Y</mi><mo>→</mo><mi>X</mi></mrow></msub></mrow></semantics></math> are two generators that represent, respectively, the generation of a normal-light image from a low-light image and the generation of a low-light image from a normal-light image. <math display="inline"><semantics><mrow><msub><mi>D</mi><mrow><mi>X</mi><mo>→</mo><mi>Y</mi></mrow></msub></mrow></semantics></math> and <math display="inline"><semantics><mrow><msub><mi>D</mi><mrow><mi>Y</mi><mo>→</mo><mi>X</mi></mrow></msub></mrow></semantics></math> are two differentiators. They are the discriminators for images with normal illumination and images with low illumination, respectively.</p>
Full article ">Figure 3
<p>The structure of our generator. It consists of three modules: encoding, adaptive attention, and decoding.</p>
Full article ">Figure 4
<p>Visual comparison with other advanced methods on the EnlightenGAN dataset. The image’s key details are indicated in red boxes. (<b>a</b>) Input, (<b>b</b>) DALE, (<b>c</b>) DRBN, (<b>d</b>) DSLR, (<b>e</b>) EnlightenGAN, (<b>f</b>) RUAS, (<b>g</b>) Zero-DCE, (<b>h</b>) SGZ, (<b>i</b>) SCI, (<b>j</b>) ours.</p>
Full article ">Figure 5
<p>Visual comparison with other advanced methods on the EnlightenGAN dataset. The image’s key details are indicated in red boxes. (<b>a</b>) Input, (<b>b</b>) DALE, (<b>c</b>) DRBN, (<b>d</b>) DSLR, (<b>e</b>) EnlightenGAN, (<b>f</b>) RUAS, (<b>g</b>) Zero-DCE, (<b>h</b>) SGZ, (<b>i</b>) SCI, (<b>j</b>) ours.</p>
Full article ">Figure 6
<p>Visual comparison with other methods on the LIME dataset. The image’s key details are indicated in red boxes. (<b>a</b>) Input, (<b>b</b>) DSLR, (<b>c</b>) RUAS, (<b>d</b>) ours.</p>
Full article ">Figure 7
<p>Visual comparison with other methods on the VV dataset. The image’s key details are indicated in red boxes. (<b>a</b>) Input, (<b>b</b>) DSLR, (<b>c</b>) RUAS, (<b>d</b>) ours.</p>
Full article ">Figure 8
<p>A comparison of the running times of the methods used on different datasets. (<b>a</b>) Comparison of our method’s running time with other state-of-the-art methods, (<b>b</b>) Comparison of our method’s average running time with other state-of-the-art methods.</p>
Full article ">Figure 9
<p>On images of varying brightness, the evaluation metrics of our method were compared to those of other cutting-edge methods. (<b>a</b>) PSNR, (<b>b</b>) SSIM, (<b>c</b>) NIQE.</p>
Full article ">Figure 10
<p>A visual comparison for our ablation study; our approach works well. (<b>a</b>) Input; from left to right: (<b>b</b>) standard, (<b>c</b>) our approach, (<b>d</b>) without adaptive attention, (<b>e</b>) without the style region loss, (<b>f</b>) without the normalization function.</p>
Full article ">Figure 11
<p>EnlightenGAN dataset results from the Google Cloud Vision API. The detected targets are denoted by the green box. (<b>a</b>) The raw results of low-light image detection; (<b>b</b>) the enhanced image’s detection results.</p>
Full article ">
19 pages, 8546 KiB  
Article
A Miniaturized Tri-Band Implantable Antenna for ISM/WMTS/Lower UWB/Wi-Fi Frequencies
by Anupma Gupta, Vipan Kumar, Shonak Bansal, Mohammed H. Alsharif, Abu Jahid and Ho-Shin Cho
Sensors 2023, 23(15), 6989; https://doi.org/10.3390/s23156989 - 7 Aug 2023
Cited by 13 | Viewed by 1763
Abstract
This study aims to design a compact antenna structure suitable for implantable devices, with a broad frequency range covering various bands such as the Industrial Scientific and Medical band (868–868.6 MHz, 902–928 MHz, 5.725–5.875 GHz), the Wireless Medical Telemetry Service (WMTS) band, a [...] Read more.
This study aims to design a compact antenna structure suitable for implantable devices, with a broad frequency range covering various bands such as the Industrial Scientific and Medical band (868–868.6 MHz, 902–928 MHz, 5.725–5.875 GHz), the Wireless Medical Telemetry Service (WMTS) band, a subset of the unlicensed 3.5–4.5 GHz ultra-wideband (UWB) that is free of interference, and various Wi-Fi spectra (3.6 GHz, 4.9 GHz, 5 GHz, 5.9 GHz, 6 GHz). The antenna supports both low and high frequencies for efficient data transfer and is compatible with various communication technologies. The antenna features an asynchronous-meandered radiator, a parasitic patch, and an open-ended square ring-shaped ground plane. The antenna is deployed deep inside the muscle layer of a rectangular phantom below the skin and fat layer at a depth of 7 mm for numerical simulation. Furthermore, the antenna is deployed in a cylindrical phantom and bent to check the suitability for different organs. A prototype of the antenna is created, and its reflection coefficient and radiation patterns are measured in fresh pork tissue. The proposed antenna is considered a suitable candidate for implantable technology compared to other designs reported in the literature. It can be observed that the proposed antenna in this study has the smallest volume (75 mm3) and widest bandwidth (181.8% for 0.86 GHz, 9.58% for 1.43 GHz, and 285.7% for the UWB subset and Wi-Fi). It also has the highest gain (−26 dBi for ISM, −14 dBi for WMTS, and −14.2 dBi for UWB subset and Wi-Fi) compared to other antennas in the literature. In addition, the SAR values for the proposed antenna are well below the safety limits prescribed by IEEE Std C95.1-1999, with SAR values of 0.409 W/Kg for 0.8 GHz, 0.534 W/Kg for 1.43 GHz, 0.529 W/Kg for 3.5 GHz, and 0.665 W/Kg for 5.5 GHz when the applied input power is 10 mW. 
Overall, the proposed antenna in this study demonstrates superior performance compared to existing tri-band implantable antennas in terms of size, bandwidth, gain, and SAR values. Full article
(This article belongs to the Special Issue Smart Antennas for Future Communications)
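Because SAR scales linearly with input power, the quoted SAR values at a 10 mW test power imply a maximum safe input power per band of P_max = P_test · SAR_limit / SAR. A quick sanity check, assuming the 1.6 W/kg (1 g averaged) basic restriction of IEEE Std C95.1-1999 and using the SAR figures quoted in the abstract:

```python
# SAR scales linearly with input power, so the maximum safe input power is
# P_max = P_test * SAR_limit / SAR_test. The 1.6 W/kg (1 g averaged) limit
# of IEEE Std C95.1-1999 is assumed; SAR values are those from the abstract.
SAR_LIMIT = 1.6          # W/kg
P_TEST_MW = 10.0         # mW, input power used in the simulations

sar_w_per_kg = {"0.86 GHz": 0.409, "1.43 GHz": 0.534,
                "3.5 GHz": 0.529, "5.5 GHz": 0.665}

for band, sar in sar_w_per_kg.items():
    p_max = P_TEST_MW * SAR_LIMIT / sar
    print(f"{band}: max input power ≈ {p_max:.1f} mW")
```

Even the worst band (5.5 GHz) leaves a margin of roughly 2.4× over the 10 mW test power, consistent with the abstract's claim that all values are well below the limit.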
Show Figures

Figure 1
<p>Simulation model of the 3-layered tissue phantom: (<b>a</b>) rectangular phantom with the antenna implanted in the muscle, (<b>b</b>) cylindrical phantom with the antenna implanted in the skin, (<b>c</b>) antenna bent across a 30 mm radius.</p>
Full article ">Figure 2
<p>Antenna configuration: (<b>a</b>) front view, (<b>b</b>) back view, (<b>c</b>) antenna implanted in tissue, (<b>d</b>) cross-sectional view of antenna.</p>
Full article ">Figure 3
<p>Step-wise geometry of the antenna.</p>
Full article ">Figure 4
<p>(<b>a</b>) |<span class="html-italic">S</span>11| plot (<b>b</b>) VSWR plot for the designed steps.</p>
Figure 4 Cont.">
Full article ">Figure 5
<p>Surface current (<b>a</b>) 0.88 GHz (step1), (<b>b</b>) 1.66 GHz (step1), (<b>c</b>) 0.88 GHz (step2), (<b>d</b>) 1.46 GHz (step2).</p>
Full article ">Figure 6
<p>|<span class="html-italic">S</span>11| plot for parametric sweep for resonator length from left edge.</p>
Full article ">Figure 7
<p>|<span class="html-italic">S</span>11| plot for parametric sweep for resonator length from right edge.</p>
Full article ">Figure 8
<p>Surface current distribution at different frequencies.</p>
Full article ">Figure 9
<p>Photographs during measurement: (<b>a</b>) antenna without superstrate, (<b>b</b>) with superstrate, (<b>c</b>) antenna in animal tissue, (<b>d</b>) antenna in anechoic chamber.</p>
Full article ">Figure 10
<p>(<b>a</b>) |<span class="html-italic">S</span>11| plot for simulated and measured values, (<b>b</b>) |<span class="html-italic">S</span>11| plot for different bending radii and size of tissue.</p>
Full article ">Figure 11
<p>Simulated and measured 2-D and simulated 3-D radiation plots (<b>a</b>) at 0.86 GHz, (<b>b</b>) at 1.46 GHz, (<b>c</b>) at 3.5 GHz, (<b>d</b>) 5.5 GHz.</p>
Figure 11 Cont.">
Full article ">Figure 12
<p>3-D radiation pattern in bent state (<b>a</b>) at 0.86 GHz, (<b>b</b>) at 1.46 GHz, (<b>c</b>) at 3.5 GHz, (<b>d</b>) 5.5 GHz.</p>
Full article ">Figure 13
<p>Plot for gain.</p>
Full article ">Figure 14
<p>Plot for radiation efficiency.</p>
Full article ">Figure 15
<p>SAR plot at (<b>a</b>) 0.86 GHz, (<b>b</b>) 1.43 GHz, (<b>c</b>) 3.5 GHz, (<b>d</b>) 5.5 GHz.</p>
Full article ">Figure 16
<p>SAR plot in bent state at (<b>a</b>) 0.86 GHz, (<b>b</b>) 1.43 GHz, (<b>c</b>) 3.5 GHz, (<b>d</b>) 5.5 GHz.</p>
Full article ">
15 pages, 3672 KiB  
Article
A Comparative Study of the Typing Performance of Two Mid-Air Text Input Methods in Virtual Environments
by Yueyang Wang, Yahui Wang, Xiaoqiong Li, Chengyi Zhao, Ning Ma and Zixuan Guo
Sensors 2023, 23(15), 6988; https://doi.org/10.3390/s23156988 - 6 Aug 2023
Cited by 3 | Viewed by 2223
Abstract
Inputting text is a prevalent requirement among various virtual reality (VR) applications, including VR-based remote collaboration. In order to eliminate the need for complex rules and handheld devices for typing within virtual environments, researchers have proposed two mid-air input methods—the trace and tap [...] Read more.
Inputting text is a prevalent requirement among various virtual reality (VR) applications, including VR-based remote collaboration. In order to eliminate the need for complex rules and handheld devices for typing within virtual environments, researchers have proposed two mid-air input methods—the trace and tap methods. However, the specific impact of these input methods on performance in VR remains unknown. In this study, typing tasks were used to compare the performance, subjective report, and cognitive load of two mid-air input methods in VR. While the trace input method was more efficient and novel, it also entailed greater frustration and cognitive workload. Fortunately, the levels of frustration and cognitive load associated with the trace input method could be reduced to the same level as those of the tap input method via familiarity with VR. These findings could aid the design of virtual input methods, particularly for VR applications with varying text input demands. Full article
(This article belongs to the Section Intelligent Sensors)
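The typing-speed comparison above rests on the standard text-entry throughput metric, words per minute (WPM), where one "word" is conventionally five characters including spaces. A minimal sketch of that convention (not code from the paper; the function name is illustrative):

```python
def words_per_minute(transcribed: str, seconds: float) -> float:
    """Standard text-entry WPM: one 'word' = 5 characters, spaces included."""
    if seconds <= 0:
        raise ValueError("duration must be positive")
    return (len(transcribed) / 5.0) / (seconds / 60.0)

# 140 characters entered in 60 seconds -> 28 WPM
print(words_per_minute("a" * 140, 60.0))
```

Some studies use slight variants (e.g., counting |T| − 1 characters), so the exact formula should be checked against the paper itself.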
Figure 1: Three gestures.
Figure 2: The tap input method (a) and the trace input method (b).
Figure 3: Virtual environment.
Figure 4: Experimental system schematic representation.
Figure 5: Words per minute. Error bars represent 95% confidence interval (CI).
Figure 6: Subjective workload (according to the six facets of the NASA-TLX scale) of participants when using different text entry methods. Error bars represent 95% CI. The symbol * indicates a statistically significant main effect.
Figure 7: Mean pupil diameter. Error bars represent 95% CI.
24 pages, 8459 KiB  
Article
Robust Localization of Industrial Park UGV and Prior Map Maintenance
by Fanrui Luo, Zhenyu Liu, Fengshan Zou, Mingmin Liu, Yang Cheng and Xiaoyu Li
Sensors 2023, 23(15), 6987; https://doi.org/10.3390/s23156987 - 6 Aug 2023
Cited by 2 | Viewed by 1662
Abstract
The precise localization of unmanned ground vehicles (UGVs) in industrial parks without prior GPS measurements presents a significant challenge. Simultaneous localization and mapping (SLAM) techniques can address this challenge by capturing environmental features, using sensors for real-time UGV localization. In order to increase the real-time localization accuracy and efficiency of UGVs, and to improve the robustness of UGVs’ odometry within industrial parks—thereby addressing issues related to UGVs’ motion control discontinuity and odometry drift—this paper proposes a tightly coupled LiDAR-IMU odometry method based on FAST-LIO2, integrating ground constraints and a novel feature extraction method. Additionally, a novel maintenance method of prior maps is proposed. The front-end module acquires the prior pose of the UGV by combining the detection and correction of relocation with point cloud registration. Then, the proposed maintenance method of prior maps is used to hierarchically and partitionally segregate and perform the real-time maintenance of the prior maps. At the back-end, real-time localization is achieved by the proposed tightly coupled LiDAR-IMU odometry that incorporates ground constraints. Furthermore, a feature extraction method based on the bidirectional-projection plane slope difference filter is proposed, enabling efficient and accurate point cloud feature extraction for edge, planar and ground points. Finally, the proposed method is evaluated, using self-collected datasets from industrial parks and the KITTI dataset. Our experimental results demonstrate that, compared to FAST-LIO2 and FAST-LIO2 with the curvature feature extraction method, the proposed method improved the odometry accuracy by 30.19% and 48.24% on the KITTI dataset. The efficiency of odometry was improved by 56.72% and 40.06%. When leveraging prior maps, the UGV achieved centimeter-level localization accuracy. 
The localization accuracy of the proposed method was improved by 46.367% compared to FAST-LIO2 on self-collected datasets, and the localization efficiency was improved by 32.33%. The z-axis localization accuracy of the proposed method reached millimeter level. The proposed prior map maintenance method reduced RAM usage by 64% compared to traditional methods. Full article
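Odometry accuracy figures like those above are typically absolute trajectory errors (ATE) between estimated and ground-truth positions; the EVO tool mentioned in Figure 6 reports this as ATE RMSE. A minimal sketch of that metric, assuming the two trajectories are already aligned and time-synchronized (a full evaluation would first perform, e.g., Umeyama alignment and timestamp association):

```python
import numpy as np

def ate_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    """RMSE of translational error between aligned, synchronized (N, 3) trajectories."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy trajectories, positions in meters
est = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [2.0, 0.1, 0.0]])
gt  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
print(ate_rmse(est, gt))
```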
Figure 1: Overall system framework.
Figure 2: The results of feature extraction. (a) is the feature extraction effect at the near-ground end. (b) is the feature extraction effect at the far end.
Figure 3: Map maintenance diagram.
Figure 4: The UGV for collecting indoor and outdoor datasets.
Figure 5: The results of three methods on the KITTI dataset. (a) is the result of the proposed odometry method combined with the bidirectional-projection plane slope difference filter method. (b) is the result of FAST-LIO2. (c) is the result of FAST-LIO2 combined with the curvature method. (d–f) are the corresponding local results.
Figure 6: The trajectory map generated by EVO. (a) is a 3D trajectory. (b) is a three-axis trajectory. (c) is a vertical planar trajectory. (d) is a horizontal planar trajectory. The black trajectory represents the ground truth. The blue trajectory corresponds to the proposed method. The green trajectory corresponds to FAST-LIO2 combined with curvature-based feature extraction. The red trajectory corresponds to FAST-LIO2.
Figure 7: The number of feature points and the time consumption of each part on the KITTI dataset. (a) is the curve of the number of feature points per frame. (b) is the curve of the feature extraction time per frame. (c) is the curve of the odometry processing time per frame.
Figure 8: The results of combining different methods with prior maps on indoor and outdoor self-collected datasets. (a,c,e,g) are the overall and local figures for the proposed odometry method combined with the proposed prior map maintenance and feature extraction methods. (b,d,f,h) are the overall and local figures for FAST-LIO2 combined with the proposed prior map maintenance method.
21 pages, 1262 KiB  
Review
Multimodal Federated Learning: A Survey
by Liwei Che, Jiaqi Wang, Yao Zhou and Fenglong Ma
Sensors 2023, 23(15), 6986; https://doi.org/10.3390/s23156986 - 6 Aug 2023
Cited by 27 | Viewed by 8445
Abstract
Federated learning (FL), which provides a collaborative training scheme for distributed data sources with privacy concerns, has become a burgeoning and attractive research area. Most existing FL studies focus on taking unimodal data, such as image and text, as the model input and resolving the heterogeneity challenge, i.e., the challenge of non-identical distribution (non-IID) caused by a data distribution imbalance related to data labels and data amount. In real-world applications, data are usually described by multiple modalities. However, to the best of our knowledge, only a handful of studies have been conducted to improve system performance utilizing multimodal data. In this survey paper, we identify the significance of this emerging research topic of multimodal federated learning (MFL) and present a literature review on the state-of-the-art MFL methods. Furthermore, we categorize multimodal federated learning into congruent and incongruent multimodal federated learning based on whether all clients possess the same modal combinations. We investigate the feasible application tasks and related benchmarks for MFL. Lastly, we summarize the promising directions and fundamental challenges in this field for future research. Full article
(This article belongs to the Section Intelligent Sensors)
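At the core of most FL schemes such a survey covers is server-side weight averaging (FedAvg). A minimal, framework-free sketch of that aggregation step, with all names illustrative rather than taken from any surveyed system; in hybrid MFL, only the parameters of modalities shared across clients would be aggregated this way:

```python
from typing import Dict, List

def fedavg(client_weights: List[Dict[str, List[float]]],
           client_sizes: List[int]) -> Dict[str, List[float]]:
    """Average clients' parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    averaged = {}
    for k in client_weights[0]:
        dim = len(client_weights[0][k])
        averaged[k] = [
            sum(w[k][i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)
        ]
    return averaged

# Two clients sharing a text encoder; dataset sizes 1 and 3
a = {"text_encoder": [1.0, 1.0]}
b = {"text_encoder": [3.0, 3.0]}
print(fedavg([a, b], [1, 3]))  # size-weighted mean: [2.5, 2.5]
```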
Figure 1: Illustration of traditional unimodal FL vs. multimodal FL.
Figure 2: Taxonomy of multimodal federated learning (MFL).
Figure 3: A diagram of the screening process.
Figure 4: Illustration of horizontal multimodal federated learning and vertical multimodal federated learning. (Left): horizontal multimodal federated learning involving two clients, both holding image and text data. (Right): a vertical multimodal federated learning example including two clients with exclusive modalities; client a has audio and video data, while client b holds heart rate and acceleration sensor data.
Figure 5: Illustration of multimodal federated transfer learning and hybrid multimodal federated learning. (Left): multimodal federated transfer learning involving two hospitals as clients; one holds MRI and PET data, the other holds MRI and CT data. (Right): hybrid multimodal federated learning including three clients with different modality combinations; the system contains both unimodal and multimodal clients.
Full article ">
13 pages, 48963 KiB  
Article
A Novel Monopole Ultra-Wide-Band Multiple-Input Multiple-Output Antenna with Triple-Notched Characteristics for Enhanced Wireless Communication and Portable Systems
by Shahid Basir, Ubaid Ur Rahman Qureshi, Fazal Subhan, Muhammad Asghar Khan, Syed Agha Hassnain Mohsan, Yazeed Yasin Ghadi, Khmaies Ouahada, Habib Hamam and Fazal Noor
Sensors 2023, 23(15), 6985; https://doi.org/10.3390/s23156985 - 6 Aug 2023
Cited by 5 | Viewed by 1614
Abstract
This study introduces a monopole 4 × 4 Ultra-Wide-Band (UWB) Multiple-Input Multiple-Output (MIMO) antenna system with a novel structure and outstanding performance. The proposed design has triple-notched characteristics due to CSRR etching and a C-shaped curve. The notching occurs in 4.5 GHz, 5.5 GHz, and 8.8 GHz frequencies in the C-band, WLAN band, and satellite network, respectively. Complementary Split-Ring Resonators (CSRR) are etched at the feed line and ground plane, and a C-shaped curve is used to reduce interference between the ultra-wide band and narrowband. The mutual coupling of CSRR enables the MIMO architecture to achieve high isolation and polarisation diversity. With prototype dimensions of (60.4 × 60.4) mm2, the proposed antenna design is small. The simulated and measured results show good agreement, indicating the effectiveness of the UWB-MIMO antenna for wireless communication and portable systems. Full article
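The diversity performance of a two-port MIMO antenna like this one is commonly summarized by the envelope correlation coefficient (ECC) and diversity gain (DG) (reported in Figure 10). A sketch of the standard S-parameter closed form, which is valid only under a lossless-antenna, uniform-multipath assumption; the port values below are illustrative, not measured values from this paper:

```python
def ecc_from_s_params(s11: complex, s12: complex,
                      s21: complex, s22: complex) -> float:
    """Two-port ECC from S-parameters (lossless-antenna assumption):
    |S11*·S12 + S21*·S22|^2 / ((1-|S11|^2-|S21|^2)(1-|S22|^2-|S12|^2))."""
    num = abs(s11.conjugate() * s12 + s21.conjugate() * s22) ** 2
    den = ((1 - abs(s11) ** 2 - abs(s21) ** 2) *
           (1 - abs(s22) ** 2 - abs(s12) ** 2))
    return num / den

def diversity_gain(ecc: float) -> float:
    """Apparent diversity gain: DG = 10 * sqrt(1 - ECC)."""
    return 10.0 * (1.0 - ecc) ** 0.5

# Well-matched, well-isolated ports -> ECC near 0, DG near 10
ecc = ecc_from_s_params(0.1 + 0j, 0.05 + 0j, 0.05 + 0j, 0.1 + 0j)
print(ecc, diversity_gain(ecc))
```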
Figure 1: The geometrical representation of the proposed antenna: (a) front and back, (b) CSRR and arch, (c) MIMO antenna system, (d) prototype of the antenna.
Figure 2: The simulated result of C-band rejection.
Figure 3: The simulated result of WLAN band rejection.
Figure 4: The simulated result of WLAN band rejection.
Figure 5: The simulated and measured result of the reflection coefficient at the C, WLAN, and satellite bands.
Figure 6: The simulated and measured result of the gain.
Figure 7: The simulated and measured radiation patterns at different frequencies: (a) C-band notch frequency 4.45 GHz, (b) WLAN notch frequency 5.5 GHz, (c) in-band frequency 6.25 GHz, (d) satellite notch 8.5 GHz.
Figure 8: Surface current distribution at (a) 5.5 GHz, (b) 8.8 GHz.
Figure 9: Simulated and measured transmission coefficient of the MIMO system.
Figure 10: Diversity parameters ECC, DG, MEG, and CCL of the MIMO system.
Full article ">
12 pages, 2662 KiB  
Communication
Direction of Arrival Estimation of Coherent Wideband Sources Using Nested Array
by Yawei Tang, Weiming Deng, Jianfeng Li and Xiaofei Zhang
Sensors 2023, 23(15), 6984; https://doi.org/10.3390/s23156984 - 6 Aug 2023
Viewed by 1461
Abstract
Due to their ability to achieve higher DOA estimation accuracy and larger degrees of freedom (DOF) with a fixed number of antennas, sparse arrays such as nested and coprime arrays have attracted a lot of attention in research into direction of arrival (DOA) estimation. However, the use of sparse arrays is based on the assumption that the signals are independent of each other, which is hard to guarantee in practice due to the complex propagation environment. To address the challenge that sparse arrays struggle to handle coherent wideband signals, we propose the following method. Firstly, we exploit the coherent signal subspace method (CSSM) to focus the wideband signals on the reference frequency and assist in the decorrelation process, which can be implemented without any pre-estimations. Then, owing to the decorrelation operation, we virtualize the covariance matrix of the sparse array. Next, an enhanced spatial smoothing algorithm is applied to make full use of the information available in the data covariance matrix and to improve the decorrelation effect, after which the multiple signal classification (MUSIC) algorithm is used to obtain DOA estimates. In simulations of root mean square error (RMSE) versus signal-to-noise ratio (SNR), the algorithm achieves satisfactory results compared to other state-of-the-art approaches, including sparse arrays using the traditional incoherent signal subspace method (ISSM), the coherent signal subspace method (CSSM), and spatial smoothing algorithms. The proposed method is also validated via real data tests, where its error is only 0.2 degrees, lower than those of the other methods. Full article
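The decorrelation-then-MUSIC idea in this abstract can be illustrated in its narrowband core: forward spatial smoothing averages subarray covariances to restore the rank lost to coherent sources, after which the MUSIC pseudo-spectrum peaks at the true DOAs. A toy sketch on a uniform linear array (this omits the paper's CSSM focusing and enhanced smoothing; array size, angles, and noise level are illustrative):

```python
import numpy as np

def music_spectrum_smoothed(X, n_sources, sub_len, angles_deg, d=0.5):
    """MUSIC pseudo-spectrum with forward spatial smoothing on a ULA.

    X: (M, T) snapshot matrix; sub_len: subarray length; d: element
    spacing in wavelengths."""
    M, T = X.shape
    n_sub = M - sub_len + 1
    R = np.zeros((sub_len, sub_len), dtype=complex)
    for i in range(n_sub):                       # average subarray covariances
        Xi = X[i:i + sub_len]
        R += Xi @ Xi.conj().T / T
    R /= n_sub
    w, V = np.linalg.eigh(R)                     # eigenvalues ascending
    En = V[:, :sub_len - n_sources]              # noise subspace
    m = np.arange(sub_len)
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * m * np.sin(th))
        spec.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(spec)

# Two fully coherent sources at -20 and 30 degrees, 10-element half-wavelength ULA
rng = np.random.default_rng(0)
M, T, d = 10, 200, 0.5
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)   # shared waveform -> coherent
doas = np.deg2rad([-20.0, 30.0])
A = np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(doas)[None, :])
noise = 0.01 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))
X = A @ np.vstack([s, s]) + noise

grid = np.arange(-90.0, 90.5, 0.5)
spec = music_spectrum_smoothed(X, n_sources=2, sub_len=6, angles_deg=grid)
# keep the two strongest local maxima as DOA estimates
is_peak = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:])
cand = np.where(is_peak)[0] + 1
est = np.sort(grid[cand[np.argsort(spec[cand])[-2:]]])
print(est)
```

Without the smoothing step, the coherent sources make the covariance matrix rank-deficient and plain MUSIC fails to resolve them, which is exactly the failure mode the paper targets.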
Figure 1: Two-level nested array.
Figure 2: Overlapping subarrays used in the spatial smoothing method.
Figure 3: Comparison between the spatial spectra of different algorithms.
Figure 4: RMSE versus SNR.
Figure 5: (a) The sparse array used in the experiment; a tripod and a spirit level ensured the stability of the support frame used for observation. (b) The signal generator used in the experiment.
Figure 6: Experiment scene designed to satisfy the conditions required for coherent signals; the experiments were performed inside the room.
Figure 7: Spectrum of the proposed method for real data.