Search Results (7,163)

Search Parameters:
Keywords = Sensor observation

24 pages, 6231 KiB  
Article
Towards Cleaner Cities: Estimating Vehicle-Induced PM2.5 with Hybrid EBM-CMA-ES Modeling
by Saleh Alotaibi, Hamad Almujibah, Khalaf Alla Adam Mohamed, Adil A. M. Elhassan, Badr T. Alsulami, Abdullah Alsaluli and Afaq Khattak
Toxics 2024, 12(11), 827; https://doi.org/10.3390/toxics12110827 (registering DOI) - 19 Nov 2024
Abstract
In developing countries, vehicle emissions are a major source of atmospheric pollution, worsened by aging vehicle fleets and less stringent emissions regulations. This results in elevated levels of particulate matter, contributing to the degradation of urban air quality and increasing concerns over the broader effects of atmospheric emissions on human health. This study proposes a Hybrid Explainable Boosting Machine (EBM) framework, optimized using the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), to predict vehicle-related PM2.5 concentrations and analyze contributing factors. Air quality data were collected from Open-Seneca sensors installed along the Nairobi Expressway, alongside meteorological and traffic data. The CMA-ES-tuned EBM model achieved a Mean Absolute Error (MAE) of 2.033 and an R2 of 0.843, outperforming other models. A key strength of the EBM is its interpretability, revealing that the location was the most critical factor influencing PM2.5 concentrations, followed by humidity and temperature. Elevated PM2.5 levels were observed near the Westlands roundabout, and medium to high humidity correlated with higher PM2.5 levels. Furthermore, the interaction between humidity and traffic volume played a significant role in determining PM2.5 concentrations. By combining CMA-ES for hyperparameter optimization and EBM for prediction and interpretation, this study provides both high predictive accuracy and valuable insights into the environmental drivers of urban air pollution, providing practical guidance for air quality management. Full article
(This article belongs to the Special Issue Atmospheric Emissions Characteristics and Its Impact on Human Health)
Figures: proposed EBM-CMA-ES framework; data collection sites along the Nairobi Expressway; twelve-hour daily variation in average PM2.5 by site; prediction error plots and uncertainty analyses for the EBM, XGBoost, RF, LightGBM, AdaBoost, and MLR models; global factor importance via EBM; influence of location, humidity, and temperature on PM2.5; EBM heatmap of the interaction between humidity and hourly traffic volume; EBM local interpretation of a test sample.
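To make the hyperparameter-tuning idea in the abstract above concrete, here is a minimal sketch of CMA-ES-driven tuning of an Explainable Boosting Machine. It assumes the interpret and cma packages are available; the input file, column names, and search ranges are illustrative placeholders, not the authors' actual configuration.

```python
# Illustrative sketch only: CMA-ES tuning of an EBM regressor.
# Assumes the `interpret` and `cma` packages; file and column names are hypothetical.
import cma
import numpy as np
import pandas as pd
from interpret.glassbox import ExplainableBoostingRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("nairobi_pm25.csv")            # hypothetical sensor + traffic data
X = df[["location", "humidity", "temperature", "traffic_volume", "hour"]]
y = df["pm25"]

def objective(theta):
    """Cross-validated MAE of an EBM for a candidate hyperparameter vector."""
    log10_lr, max_leaves = theta
    ebm = ExplainableBoostingRegressor(
        learning_rate=10.0 ** log10_lr,
        max_leaves=int(round(np.clip(max_leaves, 2, 10))),
    )
    return -cross_val_score(ebm, X, y, cv=3,
                            scoring="neg_mean_absolute_error").mean()

# Ask-and-tell CMA-ES loop over (log10 learning rate, max_leaves)
es = cma.CMAEvolutionStrategy([-1.5, 3.0], 0.5, {"maxiter": 20})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [objective(c) for c in candidates])

best_lr, best_leaves = es.result.xbest
print("best learning rate:", 10.0 ** best_lr, "best max_leaves:", int(round(best_leaves)))
```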
14 pages, 4975 KiB  
Article
Assessment of Tree Species Classification by Decision Tree Algorithm Using Multiwavelength Airborne Polarimetric LiDAR Data
by Zhong Hu and Songxin Tan
Electronics 2024, 13(22), 4534; https://doi.org/10.3390/electronics13224534 - 19 Nov 2024
Viewed by 259
Abstract
Polarimetric measurement has been proven to be of great importance in various applications, including remote sensing in agriculture and forest. Polarimetric full waveform LiDAR is a relatively new yet valuable active remote sensing tool. This instrument offers the full waveform data and polarimetric information simultaneously. Current studies have primarily used commercial non-polarimetric LiDAR for tree species classification, either at the dominant species level or at the individual tree level. Many classification approaches combine multiple features, such as tree height, stand width, and crown shape, without utilizing polarimetric information. In this work, a customized Multiwavelength Airborne Polarimetric LiDAR (MAPL) system was developed for field tree measurements. The MAPL is a unique system with unparalleled capabilities in vegetation remote sensing. It features four receiving channels at dual wavelengths and dual polarization: near infrared (NIR) co-polarization, NIR cross-polarization, green (GN) co-polarization, and GN cross-polarization, respectively. Data were collected from several tree species, including coniferous trees (blue spruce, ponderosa pine, and Austrian pine) and deciduous trees (ash and maple). The goal was to improve the target identification ability and detection accuracy. A machine learning (ML) approach, specifically a decision tree, was developed to classify tree species based on the peak reflectance values of the MAPL waveforms. The results indicate a re-substitution error of 3.23% and a k-fold loss error of 5.03% for the 2106 tree samples used in this study. The decision tree method proved to be both accurate and effective, and the classification of new observation data can be performed using the previously trained decision tree, as suggested by both error values. Future research will focus on incorporating additional LiDAR data features, exploring more advanced ML methods, and expanding to other vegetation classification applications. Furthermore, the MAPL data can be fused with data from other sensors to provide augmented reality applications, such as Simultaneous Localization and Mapping (SLAM) and Bird’s Eye View (BEV). Its polarimetric capability will enable target characterization beyond shape and distance. Full article
(This article belongs to the Special Issue Image Analysis Using LiDAR Data)
Figures: schematic diagram and receiver photograph of the MAPL system; flowchart of the proposed decision tree classification; sample four-channel polarimetric LiDAR waveforms of an Austrian pine; scatter-plot matrix of channel peak intensities grouped by tree species; Ch1 vs. Ch2 scatter plot by species; graphical description of the decision-making process.
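The classification step in the abstract above maps naturally onto a standard decision tree over the four channel peak intensities. The following sketch, with a placeholder data file and label encoding, reproduces the two error metrics quoted (re-substitution error and k-fold loss); it is not the authors' code.

```python
# Sketch of the species classifier: each sample is the vector of peak
# reflectances from the four MAPL channels.  File and label encoding are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

data = np.loadtxt("mapl_peaks.csv", delimiter=",", skiprows=1)  # hypothetical file
X = data[:, :4]                 # Ch1-Ch4 peak intensities
y = data[:, 4].astype(int)      # species label, e.g. 0 = blue spruce, 1 = ponderosa pine, ...

tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)

resub_error = 1.0 - tree.score(X, y)                           # re-substitution error
kfold_loss = 1.0 - cross_val_score(tree, X, y, cv=10).mean()   # k-fold loss
print(f"re-substitution error: {resub_error:.2%}, 10-fold loss: {kfold_loss:.2%}")
```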
26 pages, 3838 KiB  
Article
High-Order Disturbance Observer-Based Fuzzy Fixed-Time Safe Tracking Control for Uncertain Unmanned Helicopter with Partial State Constraints and Multisource Disturbances
by Ruonan Ren, Zhikai Wang, Haoxiang Ma, Baofeng Ji and Fazhan Tao
Drones 2024, 8(11), 679; https://doi.org/10.3390/drones8110679 (registering DOI) - 18 Nov 2024
Viewed by 187
Abstract
In the real-world operation of unmanned helicopters, various state constraints, system uncertainties and multisource disturbances pose considerable risks to their safe flight. This paper focuses on anti-disturbance adaptive safety fixed-time control design for the uncertain unmanned helicopter subject to partial state constraints and multiple disturbances. Firstly, a developed safety protection algorithm is integrated with the fixed-time stability theory, which assures the tracking performance and guarantees that the partial states are always constrained within the time-varying safe range. Then, the compensation mechanism is developed to weaken the adverse impact induced by the filter errors. Simultaneously, the influence of the multisource disturbances on the system stability is weakened through the Itô differential equation and high-order disturbance observer. Further, the fuzzy logic system is constructed to approximate the system uncertainties caused by the sensor measurement errors and complex aerodynamic characteristics. Stability analysis proves that the controlled unmanned helicopter is semi-globally fixed-time stable in probability, and the state errors converge to a desired region of the origin. Finally, simulations are provided to illustrate the performance of the proposed scheme. Full article
Figures: schematic diagram of the UAH system; control diagram; tracking performance of the X, Y, and Z axes, velocity, the attitude angles, and angular velocity; control inputs of the designed scheme; tracking performance of the position and attitude subsystems and of the disturbances d1 and d2; three-dimensional trajectory diagram; tracking performance of X under different control schemes.
12 pages, 3349 KiB  
Communication
Accelerated Life Tests for Time-Dependent Response Characterization of Functionalized Piezoelectric Microcantilever-Based Gas Sensors
by Lawrence Nsubuga and Roana de Oliveira Hansen
Electronics 2024, 13(22), 4525; https://doi.org/10.3390/electronics13224525 - 18 Nov 2024
Viewed by 225
Abstract
This article explores the accelerated lifetime test approach to characterize the time-dependent response of a piezoelectrically driven microcantilever (PD-MC) based gas sensor. The novelty here relies on demonstrating how accelerated lifetime tests can be useful to differentiate sensing mechanisms for non-linear gas sensors. The results show the determination of the sensor’s optimum operation time while maintaining result validity. The approach is demonstrated for 1,5-diaminopentane (cadaverine), a volatile organic compound (VOC) whose concentration in meat and fish products has been proven viable for determining the shelf life. A PD-MC functionalized with a cadaverine-specific binder was therefore incorporated into a hand-held electronic nose, and the response was found to be highly reliable within a specific resonance frequency shift, enabling the accurate prediction of meat and fish expiration dates. To identify the limits of detection in terms of cadaverine concentration and sensor lifetime, this study applies the results of accelerated life tests into a Weibull distribution analysis to extract the expected time to failure. For the accelerated life tests, a functionalized PD-MC was exposed to high concentrations of cadaverine, i.e., 252.3 mg/kg, 335.82 mg/kg, and 421.08 mg/kg, compared to the nominal concentration of 33 mg/kg observed in meat and fish samples. Furthermore, we demonstrate the differentiation of the response mechanisms of the system accruing from the concentration-dependent interaction of cadaverine with the binder. This enables the determination of the upper limit of the analyte concentration for a stable response. The findings suggest that the functionalized PD-MC sensor exhibits a linear and predictable response when exposed to a standard cadaverine concentration of 33 mg/kg for up to 93.01 min. Full article
(This article belongs to the Special Issue Electronic Nose: From Fundamental Research to Applications)
Figures: frequency-shift trend for cyclam-functionalized microcantilevers exposed to cadaverine; typical cantilever sensing application and phase-peak response; schematics of the PD-MC-based cadaverine sensor; layer stack of the piezoelectric microcantilever; resonance-frequency measurement setup; determination of the resonance frequency shift at the end of the linear response; variation of resonance frequency shift with time at different cadaverine concentrations; Weibull distribution plots for the three concentrations; variation of resonance frequency with time at the nominal concentration.
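The Weibull step of the accelerated-life-test analysis described above can be sketched with scipy; the failure times listed below are invented placeholders that only illustrate the fitting procedure, not the measured data.

```python
# Weibull analysis of accelerated-life-test failure times (synthetic values).
from scipy.stats import weibull_min

times_to_failure = {            # minutes until the response left its linear range
    "252.3 mg/kg": [118.0, 131.0, 125.0, 140.0, 122.0],
    "335.82 mg/kg": [104.0, 97.0, 111.0, 108.0, 101.0],
    "421.08 mg/kg": [82.0, 90.0, 77.0, 85.0, 88.0],
}

for conc, t in times_to_failure.items():
    shape, loc, scale = weibull_min.fit(t, floc=0)          # two-parameter fit
    mttf = weibull_min.mean(shape, loc=loc, scale=scale)    # expected time to failure
    print(f"{conc}: shape={shape:.2f}, scale={scale:.1f} min, MTTF={mttf:.1f} min")
```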
24 pages, 9386 KiB  
Article
Toward Improving Human Training by Combining Wearable Full-Body IoT Sensors and Machine Learning
by Nazia Akter, Andreea Molnar and Dimitrios Georgakopoulos
Sensors 2024, 24(22), 7351; https://doi.org/10.3390/s24227351 (registering DOI) - 18 Nov 2024
Viewed by 287
Abstract
This paper proposes DigitalUpSkilling, a novel IoT- and AI-based framework for improving and personalising the training of workers who are involved in physical-labour-intensive jobs. DigitalUpSkilling uses wearable IoT sensors to observe how individuals perform work activities. Such sensor observations are continuously processed to synthesise an avatar-like kinematic model for each worker who is being trained, referred to as the worker’s digital twins. The framework incorporates novel work activity recognition using generative adversarial network (GAN) and machine learning (ML) models for recognising the types and sequences of work activities by analysing an individual’s kinematic model. Finally, the development of skill proficiency ML is proposed to evaluate each trainee’s proficiency in work activities and the overall task. To illustrate DigitalUpSkilling from wearable IoT-sensor-driven kinematic models to GAN-ML models for work activity recognition and skill proficiency assessment, the paper presents a comprehensive study on how specific meat processing activities in a real-world work environment can be recognised and assessed. In the study, DigitalUpSkilling achieved 99% accuracy in recognising specific work activities performed by meat workers. The study also presents an evaluation of the proficiency of workers by comparing kinematic data from trainees performing work activities. The proposed DigitalUpSkilling framework lays the foundation for next-generation digital personalised training. Full article
(This article belongs to the Special Issue Wearable and Mobile Sensors and Data Processing—2nd Edition)
Figures: DigitalUpSkilling framework; hybrid GAN-ML activity classification; skill proficiency assessment; sensor placement and alignment with the participant's movements; boning and slicing work environments; dataflow of the study; worker and real-time digital twin with joint-movement graphs; error rates of the different ML models; confusion matrices and activity classification distributions for boning and slicing; GAN accuracy for different percentages of synthetic data; classification accuracy with the GAN, SMOTE, and ENN; distribution of right-hand pitch and roll; comparisons of engagement, right-hand accelerations, and right-shoulder abduction, rotation, and flexion between workers.
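As a rough illustration of the activity-recognition stage described above (omitting the GAN augmentation and the digital-twin synthesis), sliding-window statistics from a wearable sensor stream can feed a standard classifier. The column names and window length below are assumptions, not the authors' pipeline.

```python
# Simplified stand-in for the activity-recognition step: windowed pitch/roll
# statistics from an IMU stream, classified with a random forest.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("right_hand_imu.csv")         # hypothetical: pitch, roll, activity columns
window = 100                                   # samples per window

features, labels = [], []
for start in range(0, len(df) - window, window):
    w = df.iloc[start:start + window]
    features.append([w["pitch"].mean(), w["pitch"].std(),
                     w["roll"].mean(), w["roll"].std()])
    labels.append(w["activity"].mode().iloc[0])   # majority label in the window

X_train, X_test, y_train, y_test = train_test_split(
    np.array(features), np.array(labels), test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("window-level accuracy:", clf.score(X_test, y_test))
```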
39 pages, 8691 KiB  
Review
Comprehensive Review of Lithium-Ion Battery State of Charge Estimation by Sliding Mode Observers
by Vahid Behnamgol, Mohammad Asadi, Mohamed A. A. Mohamed, Sumeet S. Aphale and Mona Faraji Niri
Energies 2024, 17(22), 5754; https://doi.org/10.3390/en17225754 (registering DOI) - 18 Nov 2024
Viewed by 330
Abstract
The state of charge (SoC) is a critical parameter in lithium-ion batteries and their alternatives. It determines the battery’s remaining energy capacity and influences its performance longevity. Accurate SoC estimation is essential for making informed charging and discharging decisions, mitigating the risks of overcharging or deep discharge, and ensuring safety. Battery management systems rely on SoC estimation, utilising both hardware and software components to maintain safe and efficient battery operation. Existing SoC estimation methods are broadly classified into direct and indirect approaches. Direct methods (e.g., Coulomb counting) rely on current measurements. In contrast, indirect methods (often based on a filter or observer) utilise a model of a battery to incorporate voltage measurements besides the current. While the latter is more accurate, it faces challenges related to sensor drift, computational complexity, and model inaccuracies. The need for more precise and robust SoC estimation without increasing complexity is critical, particularly for real-time applications. Recently, sliding mode observers (SMOs) have gained prominence in this field for their robustness against model uncertainties and external disturbances, offering fast convergence and superior accuracy. Due to increased interest, this review focuses on various SMO approaches for SoC estimation, including first-order, adaptive, high-order, terminal, fractional-order, and advanced SMOs, along with hybrid methods integrating intelligent techniques. By evaluating these methodologies, their strengths, weaknesses, and modelling frameworks in the literature, this paper highlights the ongoing challenges and future directions in SoC estimation research. Unlike common review papers, this work also compares the performance of various existing methods via a comprehensive simulation study in MATLAB 2024b to quantify the difference and guide the users in selecting a suitable version for the applications. Full article
(This article belongs to the Section D: Energy Storage and Application)
Figures: classifications of SoC estimation methods, battery models, and SMO-based SoC estimation methods; first- and second-order RC equivalent circuit models with and without hysteresis; open circuit voltage vs. SoC at different temperatures; Nth-order Randle and fractional-order ECMs; the second-order ECM used in the simulation test; estimation results for the conventional and approximated first-order, adaptive, and terminal sliding mode observers and the super-twisting observer; comparisons of the Voc and SoC estimates across the methods.
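A minimal numerical sketch of the conventional first-order sliding mode observer idea reviewed above, applied to a first-order RC equivalent circuit model with a toy linear OCV curve. All cell parameters and gains are invented, and the sliding correction acts on the SoC channel only, which is a simplification of the observers discussed in the paper.

```python
# Toy first-order sliding mode observer for SoC on a first-order RC ECM.
# Parameters, OCV curve, and gain are illustrative assumptions.
import numpy as np

dt, T = 1.0, 3600                    # 1 s steps, 1 h constant-current discharge
Q = 2.0 * 3600                       # capacity: 2 Ah in coulombs
R0, R1, C1 = 0.05, 0.02, 2000.0      # ohmic and RC-branch parameters
I = 1.0                              # discharge current (A)
ocv = lambda soc: 3.2 + 0.7 * soc    # toy linear OCV(SoC)

soc, v1 = 0.9, 0.0                   # true states
soc_hat, v1_hat = 0.6, 0.0           # observer starts with a wrong SoC
L = 1e-4                             # sliding gain on the SoC channel

for k in range(T):
    # plant (true cell) and measured terminal voltage
    soc += dt * (-I / Q)
    v1 += dt * (-v1 / (R1 * C1) + I / C1)
    vt = ocv(soc) - R0 * I - v1
    # observer: Coulomb counting plus a sign-of-error sliding correction
    vt_hat = ocv(soc_hat) - R0 * I - v1_hat
    e = vt - vt_hat
    soc_hat += dt * (-I / Q + L * np.sign(e))
    v1_hat += dt * (-v1_hat / (R1 * C1) + I / C1)

print(f"true SoC: {soc:.3f}, estimated SoC: {soc_hat:.3f}")
```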
29 pages, 27816 KiB  
Article
Trajectory Aware Deep Reinforcement Learning Navigation Using Multichannel Cost Maps
by Tareq A. Fahmy, Omar M. Shehata and Shady A. Maged
Robotics 2024, 13(11), 166; https://doi.org/10.3390/robotics13110166 - 17 Nov 2024
Viewed by 256
Abstract
Deep reinforcement learning (DRL)-based navigation in an environment with dynamic obstacles is a challenging task due to the partially observable nature of the problem. While DRL algorithms are built around the Markov property (the assumption that all the necessary information for making a decision is contained in a single observation of the current state) for structuring the learning process, the partial observability of the DRL navigation problem is significantly amplified when dealing with dynamic obstacles. A single observation or measurement of the environment is often insufficient for capturing the dynamic behavior of obstacles, thereby hindering the agent’s decision-making. This study addresses this challenge by using an environment-specific heuristic approach to augment the observation with the dynamic obstacles’ temporal information to guide the agent’s decision-making. We propose Multichannel Cost Map Observation for Spatial and Temporal Information (M-COST) to mitigate these limitations. Our results show that the M-COST approach more than doubles the convergence rate in concentrated tunnel situations, where successful navigation is only possible if the agent learns to avoid dynamic obstacles. Additionally, navigation efficiency improved by 35% in tunnel scenarios and by 12% in dense-environment navigation compared to standard methods that rely on raw sensor data or frame stacking. Full article
(This article belongs to the Section AI in Robotics)
Figures: M-COST observation architecture; simulated dynamic environment with the robot's 2D LIDAR rays; obstacle cost map with inflation; obstacle trajectory cost map channel and its 3D probability representation; Gazebo tunnel and point-to-point environments; SAC learning process; fully connected policy network and CNN feature-extractor architectures; the three observation variants (single cost map, stacked cost maps, M-COST); training rewards, evaluation scores, and navigation performance metrics for the tunnel, simple point-to-point, and complex point-to-point scenarios.
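The multichannel cost-map observation can be sketched as two stacked grids: an inflated occupancy channel and a temporal channel that spreads each tracked dynamic obstacle along its predicted trajectory. The grid size, inflation radius, prediction horizon, and constant-velocity assumption below are illustrative choices, not the authors' exact construction.

```python
# Sketch of a two-channel cost-map observation in the spirit of M-COST.
import numpy as np

H = W = 64                 # cells
res = 0.1                  # metres per cell

def to_cell(p):
    return np.clip((p / res).astype(int), 0, H - 1)

def occupancy_channel(static_obstacles, inflate=2):
    grid = np.zeros((H, W))
    for p in static_obstacles:
        r, c = to_cell(p)
        grid[max(r - inflate, 0):r + inflate + 1, max(c - inflate, 0):c + inflate + 1] = 1.0
    return grid

def trajectory_channel(dynamic_obstacles, horizon=10, dt=0.2, sigma=2.0):
    grid = np.zeros((H, W))
    rows, cols = np.mgrid[0:H, 0:W]
    for pos, vel in dynamic_obstacles:
        for k in range(1, horizon + 1):
            r, c = to_cell(pos + k * dt * vel)       # constant-velocity prediction
            # Gaussian blob whose weight decays with prediction step
            grid += (1.0 / k) * np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
    return grid / grid.max() if grid.max() > 0 else grid

static = [np.array([1.0, 2.0]), np.array([4.5, 4.5])]
dynamic = [(np.array([3.0, 1.0]), np.array([0.0, 0.5]))]   # (position, velocity) in m, m/s
obs = np.stack([occupancy_channel(static), trajectory_channel(dynamic)])
print(obs.shape)   # (2, 64, 64) multichannel observation fed to the policy's CNN
```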
20 pages, 9833 KiB  
Article
Reconstruction of Hourly Gap-Free Sea Surface Skin Temperature from Multi-Sensors
by Qianguang Tu, Zengzhou Hao, Dong Liu, Bangyi Tao, Liangliang Shi and Yunwei Yan
Remote Sens. 2024, 16(22), 4268; https://doi.org/10.3390/rs16224268 - 15 Nov 2024
Viewed by 282
Abstract
The sea surface skin temperature (SSTskin) is of critical importance with regard to air–sea interactions and marine carbon circulation. At present, no single remote sensor is capable of providing a gap-free SSTskin. The use of data fusion techniques is therefore essential for the purpose of filling these gaps. The extant fusion methodologies frequently fail to account for the influence of depth disparities and the diurnal variability of sea surface temperatures (SSTs) retrieved from multi-sensors. We have developed a novel approach that integrates depth and diurnal corrections and employs advanced data fusion techniques to generate hourly gap-free SST datasets. The General Ocean Turbulence Model (GOTM) is employed to model the diurnal variability of the SST profile, incorporating depth and diurnal corrections. Subsequently, the corrected SSTs at the same observed time and depth are blended using the Markov method and the remaining data gaps are filled with optimal interpolation. The overall precision of the hourly gap-free SSTskin generated demonstrates a mean bias of −0.14 °C and a root mean square error of 0.57 °C, which is comparable to the precision of satellite observations. The hourly gap-free SSTskin is vital for improving our comprehension of air–sea interactions and monitoring critical oceanographic processes with high-frequency variability. Full article
Graphical abstract and figures: overall flowchart of multi-sensor fusion for SSTskin; GOTM-modelled diurnal variation of SSTskin; histogram of the difference between MTSAT-observed and GOTM diurnal variation; modelled SST profile and the skin-subskin difference; original and diurnal-variation-corrected hourly MTSAT SST; number of available sensors and the Markov-estimation fusion SST; covariance structure functions of the East China Sea; the hourly gap-free SSTskin; diurnal variation of SSTskin at 124°E, 28°N; comparison of fusion SSTskin with in situ SSTskin.
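The optimal-interpolation step used to fill the remaining gaps follows the standard analysis equation x_a = x_b + K(y - Hx_b). Below is a toy one-dimensional numpy sketch with Gaussian spatial covariances; the grid, length scale, and error variances are illustrative only.

```python
# Toy optimal interpolation: update a background SST field from sparse observations.
import numpy as np

n = 40                                          # 1-D grid of n points
x = np.linspace(0.0, 10.0, n)                   # e.g. degrees longitude
background = 20.0 + 0.1 * x                     # first-guess SST (deg C)

obs_idx = np.array([5, 18, 31])                 # grid indices with valid fused SST
obs = np.array([20.9, 21.6, 23.4])              # observed SST at those points

L = 1.5                                         # covariance length scale
sigma_b2, sigma_o2 = 0.5, 0.1                   # background / observation error variances

def gauss_cov(a, b):
    return sigma_b2 * np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * L ** 2))

B_oo = gauss_cov(x[obs_idx], x[obs_idx])        # obs-obs background covariance
B_go = gauss_cov(x, x[obs_idx])                 # grid-obs background covariance
K = B_go @ np.linalg.inv(B_oo + sigma_o2 * np.eye(len(obs_idx)))   # gain

analysis = background + K @ (obs - background[obs_idx])
print(np.round(analysis[obs_idx], 2))           # pulled close to the observations
```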
14 pages, 3240 KiB  
Article
Buried PE Pipeline Location Method Based on Double-Tree Complex Wavelet Cross-Correlation Delay
by Yang Li, Hanyu Zhang, Zhuo Xu, Ao Zhang, Xianfa Liu, Pengyao Sun and Xianchao Sun
Sensors 2024, 24(22), 7310; https://doi.org/10.3390/s24227310 - 15 Nov 2024
Viewed by 330
Abstract
This study presents a location method for buried polyethylene (PE) pipelines based on the double-tree complex wavelet cross-correlation delay. Initially, the dual-tree complex wavelet transform (DTCWT) is applied to denoise the acquired signal, followed by extracting the delay time through the cross-correlation function to locate the buried pipeline. A simulation model is established to analyze the peak values of the time-domain signals in both asymmetric and symmetric sensor layouts using COMSOL, determining the relationship between the signal time differences and pipeline positions. Then, an experimental test system is set up, and experiments are carried out under the conditions of asymmetric and symmetrical sensors and different excitation points. The results indicate that the maximum error is 4.6% for asymmetric arrangements and less than 1% for symmetric arrangements. In practical applications, the pipeline’s position can be inferred from the delay time, with higher accuracy observed as the excitation point approaches the sensor. This method addresses the limitations of existing pipeline locating techniques and provides a foundation for the development of pipeline positioning technology. Full article
Figures: positioning principle for the buried PE gas pipeline; COMSOL simulation model; time-domain signals collected at different sensor positions; field experiment device layout; location of the sensor in the conduction bracket; original and denoised signals for the asymmetric arrangement; cross-correlation results under the first and second excitations for both sensor placements.
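The core delay-estimation step, locating the cross-correlation peak between the two (denoised) sensor signals, can be sketched as follows. The synthetic pulse, sampling rate, and wave speed are placeholders, and the final line assumes a source lying on the straight line between two sensors a distance d apart.

```python
# Cross-correlation delay estimation between two sensor signals (synthetic data).
import numpy as np

fs = 50_000                                   # sampling rate (Hz)
t = np.arange(0, 0.02, 1 / fs)
pulse = np.exp(-((t - 0.004) / 0.0005) ** 2) * np.sin(2 * np.pi * 3000 * t)

true_delay = 0.0012                           # seconds; signal 2 arrives later
s1 = pulse + 0.02 * np.random.randn(t.size)
s2 = np.roll(pulse, int(true_delay * fs)) + 0.02 * np.random.randn(t.size)

xcorr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
lag = np.argmax(xcorr) - (t.size - 1)         # samples by which s2 lags s1
tau = lag / fs
print(f"estimated delay: {tau * 1e3:.2f} ms (true {true_delay * 1e3:.2f} ms)")

# For a source on the line between two sensors a distance d apart, with wave
# speed c, the offset from the midpoint is x = c * tau / 2 (illustrative values).
c, d = 500.0, 1.0
print(f"estimated source offset from midpoint: {c * tau / 2:.3f} m")
```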
22 pages, 5816 KiB  
Article
Causality-Driven Feature Selection for Calibrating Low-Cost Airborne Particulate Sensors Using Machine Learning
by Vinu Sooriyaarachchi, David J. Lary, Lakitha O. H. Wijeratne and John Waczak
Sensors 2024, 24(22), 7304; https://doi.org/10.3390/s24227304 - 15 Nov 2024
Viewed by 368
Abstract
With escalating global environmental challenges and worsening air quality, there is an urgent need for enhanced environmental monitoring capabilities. Low-cost sensor networks are emerging as a vital solution, enabling widespread and affordable deployment at fine spatial resolutions. In this context, machine learning for the calibration of low-cost sensors is particularly valuable. However, traditional machine learning models often lack interpretability and generalizability when applied to complex, dynamic environmental data. To address this, we propose a causal feature selection approach based on convergent cross mapping within the machine learning pipeline to build more robustly calibrated sensor networks. This approach is applied in the calibration of a low-cost optical particle counter OPC-N3, effectively reproducing the measurements of PM1 and PM2.5 as recorded by research-grade spectrometers. We evaluated the predictive performance and generalizability of these causally optimized models, observing improvements in both while reducing the number of input features, thus adhering to the Occam’s razor principle. For the PM1 calibration model, the proposed feature selection reduced the mean squared error on the test set by 43.2% compared to the model with all input features, while the SHAP value-based selection only achieved a reduction of 29.6%. Similarly, for the PM2.5 model, the proposed feature selection led to a 33.2% reduction in the mean squared error, outperforming the 30.2% reduction achieved by the SHAP value-based selection. By integrating sensors with advanced machine learning techniques, this approach advances urban air quality monitoring, fostering a deeper scientific understanding of microenvironments. Beyond the current test cases, this feature selection method holds potential for broader applications in other environmental monitoring applications, contributing to the development of interpretable and robust environmental models. Full article
(This article belongs to the Section Sensor Networks)
Figures: Lorenz attractor and its delay-coordinate reconstructions from the X and Y time series; proposed causality-driven feature selection pipeline; input features to the PM1 and PM2.5 calibration models ranked by mean absolute SHAP values and by strength of causal influence; scatter plots of reference vs. OPC-N3 measurements before calibration; residual density plots; scatter diagrams for the calibration models without feature selection, with SHAP-based selection, and with causality-based selection.
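Convergent cross mapping, the causal test behind the proposed feature ranking, can be sketched compactly: a variable X is taken to causally influence Y if X can be estimated from the delay embedding of Y with useful skill. The embedding dimension, delay, library size, and the coupled logistic maps below are illustrative choices, not the paper's configuration.

```python
# Compact convergent-cross-mapping sketch (Sugihara-style simplex cross-map).
import numpy as np

def embed(ts, E, tau):
    """Delay-coordinate embedding: rows are [ts[t], ts[t-tau], ..., ts[t-(E-1)tau]]."""
    n = len(ts) - (E - 1) * tau
    return np.column_stack([ts[(E - 1) * tau - j * tau: (E - 1) * tau - j * tau + n]
                            for j in range(E)])

def ccm_skill(source, target, E=3, tau=1, lib=400):
    """Correlation between `source` and its cross-map estimate from `target`'s manifold."""
    M = embed(target, E, tau)[:lib]
    truth = source[(E - 1) * tau:][:lib]
    est = np.empty(lib)
    for i in range(lib):
        d = np.linalg.norm(M - M[i], axis=1)
        d[i] = np.inf                                   # exclude the point itself
        nn = np.argsort(d)[:E + 1]                      # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        est[i] = np.sum(w * truth[nn]) / np.sum(w)
    return np.corrcoef(truth, est)[0, 1]

# toy unidirectionally coupled logistic maps: x drives y
n = 1000
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.4, 0.2
for t in range(n - 1):
    x[t + 1] = x[t] * (3.8 - 3.8 * x[t])
    y[t + 1] = y[t] * (3.5 - 3.5 * y[t] - 0.1 * x[t])

print("skill of estimating x from y's manifold:", round(ccm_skill(x, y), 3))
print("skill of estimating y from x's manifold:", round(ccm_skill(y, x), 3))
```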
13 pages, 3048 KiB  
Article
Thermal Quenching of Intrinsic Photoluminescence in Amorphous and Monoclinic HfO2 Nanotubes
by Artem Shilov, Sergey Savchenko, Alexander Vokhmintsev, Kanat Zhusupov and Ilya Weinstein
Materials 2024, 17(22), 5587; https://doi.org/10.3390/ma17225587 - 15 Nov 2024
Viewed by 241
Abstract
Nanotubular hafnia arrays hold significant promise for advanced opto- and nanoelectronic applications. However, the known studies concern mostly the luminescent properties of doped HfO2-based nanostructures, while the optical properties of nominally pure hafnia with optically active centers of intrinsic origin are far from being sufficiently investigated. In this work, for the first time we have conducted research on the wide-range temperature effects in the photoluminescence processes of anion-defective hafnia nanotubes with an amorphous and monoclinic structure, synthesized by the electrochemical oxidation method. It is shown that the spectral parameters, such as the position of the maximum and half-width of the band, remain almost unchanged in the range of 7–296 K. The experimental data obtained for the photoluminescence temperature quenching are quantitatively analyzed under the assumption made for two independent channels of non-radiative relaxation of excitations with calculating the appropriate energies of activation barriers—9 and 39 meV for amorphous hafnia nanotubes, 15 and 141 meV for monoclinic ones. The similar temperature behavior of photoluminescence spectra indicates close values of short-range order parameters in the local atomic surrounding of the active emission centers in hafnium dioxide with amorphous and monoclinic structure. Anion vacancies VO and VO2 appeared in the positions of three-coordinated oxygen and could be the main contributors to the spectral features of emission response and observed thermally stimulated processes. The recognized and clarified mechanisms occurring during thermal quenching of photoluminescence could be useful for the development of light-emitting devices and thermo-optical sensors with functional media based on oxygen-deficient hafnia nanotubes. Full article
(This article belongs to the Special Issue Advances in Luminescent Materials)
Figures: SEM and TEM images of the monoclinic HfO2 nanotubes; photoluminescence spectra of amorphous and monoclinic nanotubes at different temperatures; temperature dependence of the PL peak position and half-width; PL spectra at 10 K with Gaussian decomposition; intensity-temperature dependence I(T) for amorphous and monoclinic nanotubes with linear approximations.
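The two-channel quenching analysis suggests a Mott-type fit with two independent non-radiative channels, I(T) = I0 / (1 + A1 exp(-E1/kBT) + A2 exp(-E2/kBT)). This functional form is an assumption consistent with the description above, and the data points in the sketch are synthetic; it only illustrates how the two activation barriers would be extracted.

```python
# Fit a two-barrier thermal-quenching model to synthetic PL intensity data.
import numpy as np
from scipy.optimize import curve_fit

k_B = 8.617e-5                                   # Boltzmann constant, eV/K

def quench(T, I0, A1, E1, A2, E2):
    return I0 / (1.0 + A1 * np.exp(-E1 / (k_B * T)) + A2 * np.exp(-E2 / (k_B * T)))

T = np.linspace(7, 296, 40)
true = quench(T, 1.0, 0.8, 0.015, 60.0, 0.14)    # e.g. barriers of 15 and 140 meV
I = true * (1 + 0.02 * np.random.randn(T.size))  # add a little noise

p0 = [1.0, 1.0, 0.01, 50.0, 0.1]                 # rough initial guesses
popt, _ = curve_fit(quench, T, I, p0=p0, maxfev=20000)
print(f"fitted activation energies: {popt[2] * 1e3:.0f} meV and {popt[4] * 1e3:.0f} meV")
```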
17 pages, 2380 KiB  
Article
Nondestructive Detection of Litchi Stem Borers Using Multi-Sensor Data Fusion
by Zikun Zhao, Sai Xu, Huazhong Lu, Xin Liang, Hongli Feng and Wenjing Li
Agronomy 2024, 14(11), 2691; https://doi.org/10.3390/agronomy14112691 - 15 Nov 2024
Viewed by 271
Abstract
To enhance lychee quality assessment and address inconsistencies in post-harvest pest detection, this study presents a multi-source fusion approach combining hyperspectral imaging, X-ray imaging, and visible/near-infrared (Vis/NIR) spectroscopy. Traditional single-sensor methods are limited in detecting pest damage, particularly in lychees with complex skins, as they often fail to capture both external and internal fruit characteristics. By integrating multiple sensors, our approach overcomes these limitations, offering a more accurate and robust detection system. Significant differences were observed between pest-free and infested lychees. Pest-free lychees exhibited higher hardness, soluble sugars (11% higher in flesh, 7% higher in peel), vitamin C (50% higher in flesh, 2% higher in peel), polyphenols, anthocyanins, and ORAC values (26%, 9%, and 14% higher, respectively). The Vis/NIR data processed with SG+SNV+CARS yielded a partial least squares regression (PLSR) model with an R2 of 0.82, an RMSE of 0.18, and accuracy of 89.22%. The hyperspectral model, using SG+MSC+SPA, achieved an R2 of 0.69, an RMSE of 0.23, and 81.74% accuracy, while the X-ray method with support vector regression (SVR) reached an R2 of 0.69, an RMSE of 0.22, and 76.25% accuracy. Through feature-level fusion, Recursive Feature Elimination with Cross-Validation (RFECV), and dimensionality reduction using PCA, we optimized hyperparameters and developed a Random Forest model. This model achieved 92.39% accuracy in pest detection, outperforming the individual methods by 3.17%, 10.25%, and 16.14%, respectively. The multi-source fusion approach also improved the overall accuracy by 4.79%, highlighting the critical role of sensor fusion in enhancing pest detection and supporting the development of automated non-destructive systems for lychee stem borer detection. Full article
(This article belongs to the Section Precision and Digital Agriculture)
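As a rough sketch of the feature-level fusion this abstract describes, the example below concatenates features from the three modalities, screens them with RFECV, reduces them with PCA, and classifies with a random forest; the SG+SNV preprocessing is applied to the Vis/NIR block. It is a minimal sketch assuming scikit-learn and SciPy, and all array names, shapes, and hyperparameters are illustrative placeholders rather than the authors' settings.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
vis_nir = rng.normal(size=(n, 50))      # hypothetical Vis/NIR spectra (e.g., CARS-selected bands)
hyperspec = rng.normal(size=(n, 60))    # hypothetical hyperspectral features (e.g., SPA-selected bands)
xray = rng.normal(size=(n, 10))         # hypothetical grayscale statistics from X-ray regions
y = rng.integers(0, 2, size=n)          # 0 = pest-free, 1 = stem-borer-infested (synthetic labels)

# Savitzky-Golay smoothing followed by standard normal variate, i.e., an "SG+SNV"-style step
vis_nir = savgol_filter(vis_nir, window_length=11, polyorder=2, axis=1)
vis_nir = (vis_nir - vis_nir.mean(axis=1, keepdims=True)) / vis_nir.std(axis=1, keepdims=True)

X = np.hstack([vis_nir, hyperspec, xray])  # feature-level fusion of the three modalities

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("rfecv", RFECV(RandomForestClassifier(n_estimators=200, random_state=0), step=5, cv=5)),
    ("pca", PCA(n_components=0.95)),       # keep components explaining 95% of the variance
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
print("cross-validated accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```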
Show Figures
Figure 1: Schematic diagram of the visible/near-infrared spectroscopy acquisition device.
Figure 2: Schematic diagram of the hyperspectral imaging acquisition device.
Figure 3: Schematic diagram of the X-ray image acquisition system.
Figure 4: Multi-source information fusion flowchart.
Figure 5: (a) Raw visible/near-infrared spectrum; (b) visible/near-infrared spectrum after SG+SNV preprocessing.
Figure 6: (a) Raw hyperspectral spectrum; (b) hyperspectral spectrum after SG+MSC preprocessing.
Figure 7: PCA classification of grayscale values in X-ray imaging feature regions for stem-borer-infested and non-infested fruit.
Figure 8: (a) Litchi fruit without pests; (b) litchi fruit with pests.
28 pages, 12679 KiB  
Article
DESAT: A Distance-Enhanced Strip Attention Transformer for Remote Sensing Image Super-Resolution
by Yujie Mao, Guojin He, Guizhou Wang, Ranyu Yin, Yan Peng and Bin Guan
Remote Sens. 2024, 16(22), 4251; https://doi.org/10.3390/rs16224251 - 14 Nov 2024
Viewed by 391
Abstract
Transformer-based methods have demonstrated impressive performance in image super-resolution tasks. However, when applied to large-scale Earth observation images, the existing transformers encounter two significant challenges: (1) insufficient consideration of spatial correlation between adjacent ground objects; and (2) performance bottlenecks due to the underutilization [...] Read more.
Transformer-based methods have demonstrated impressive performance in image super-resolution tasks. However, when applied to large-scale Earth observation images, the existing transformers encounter two significant challenges: (1) insufficient consideration of spatial correlation between adjacent ground objects; and (2) performance bottlenecks due to the underutilization of the upsample module. To address these issues, we propose a novel distance-enhanced strip attention transformer (DESAT). The DESAT integrates distance priors, easily obtainable from remote sensing images, into the strip window self-attention mechanism to capture spatial correlations more effectively. To further enhance the transfer of deep features into high-resolution outputs, we designed an attention-enhanced upsample block, which combines the pixel shuffle layer with an attention-based upsample branch implemented through the overlapping window self-attention mechanism. Additionally, to better simulate real-world scenarios, we constructed a new cross-sensor super-resolution dataset using Gaofen-6 satellite imagery. Extensive experiments on both simulated and real-world remote sensing datasets demonstrate that the DESAT outperforms state-of-the-art models by up to 1.17 dB along with superior qualitative results. Furthermore, the DESAT achieves more competitive performance in real-world tasks, effectively balancing spatial detail reconstruction and spectral transform, making it highly suitable for practical remote sensing super-resolution applications. Full article
(This article belongs to the Special Issue Deep Learning for Remote Sensing Image Enhancement)
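The central idea of injecting a distance prior into strip-window self-attention can be sketched as follows. This is a minimal, single-head illustration assuming PyTorch; the strip layout, the bias form (a scaled negative pairwise pixel distance), and the omission of learned projections are simplifications for clarity, not the DESAT definition.

```python
import torch
import torch.nn.functional as F

def strip_attention_with_distance(x, strip_h=4, scale=0.1):
    """x: (B, H, W, C) feature map; attention is computed within horizontal strips of height strip_h."""
    B, H, W, C = x.shape
    x = x.view(B, H // strip_h, strip_h * W, C)          # tokens grouped per strip
    q = k = v = x                                         # single head, no projections, for brevity

    # pixel coordinates of every token in a strip, used to build the distance prior
    ys, xs = torch.meshgrid(torch.arange(strip_h), torch.arange(W), indexing="ij")
    coords = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()   # (strip_h*W, 2)
    dist = torch.cdist(coords, coords)                    # pairwise Euclidean pixel distances

    attn = (q @ k.transpose(-2, -1)) / C ** 0.5           # raw attention scores within each strip
    attn = attn - scale * dist                            # distance prior: far tokens are penalized
    out = F.softmax(attn, dim=-1) @ v
    return out.view(B, H, W, C)

# usage: y = strip_attention_with_distance(torch.randn(2, 16, 32, 8))
```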
Show Figures
Figure 1: The overall methodology of this paper.
Figure 2: Construction flow chart of the GF6SRD dataset.
Figure 3: WFV-PMS image pairs from the proposed GF6SRD dataset.
Figure 4: The overall architecture of the DESAT and the structure of the DSAB, DESAM, and RDSG.
Figure 5: Overview of the proposed AEUB module.
Figure 6: Visual comparisons on the simulated AID dataset (the red box marks the zoomed area; (a) shows the results for image "square_229" and (b) for image "industrial_137").
Figure 7: Visual comparisons on two WFV-PMS image pairs shot on 29 September 2023 (the green box marks the zoomed area; (a,b) show two different areas from that date).
Figure 8: Visual comparisons on a WFV-PMS image pair shot on 22 January 2023.
Figure 9: Visual comparisons on a WFV-PMS image pair shot on 9 November 2023.
Figure 10: Local attribution map (LAM) results for different models on the AID dataset (the red box marks the selected local image patch; the green box marks the zoomed area).
Figure 11: Spectral value comparisons in each band between the SR images and the PMS images (x-axis: PMS values; y-axis: SR values).
Figure 12: Visual examples of different land cover types in WFV images, SR images, and PMS images, along with representative spectral curves (zoom in for a better view).
Figure 13: Overviews and local details of the WFV image and the super-resolution results.
16 pages, 4667 KiB  
Article
State Estimation for Quadruped Robots on Non-Stationary Terrain via Invariant Extended Kalman Filter and Disturbance Observer
by Mingfei Wan, Daoguang Liu, Jun Wu, Li Li, Zhangjun Peng and Zhigui Liu
Sensors 2024, 24(22), 7290; https://doi.org/10.3390/s24227290 - 14 Nov 2024
Viewed by 414
Abstract
Quadruped robots possess significant mobility in complex and uneven terrains due to their outstanding stability and flexibility, making them highly suitable in rescue missions, environmental monitoring, and smart agriculture. With the increasing use of quadruped robots in more demanding scenarios, ensuring accurate and [...] Read more.
Quadruped robots possess significant mobility in complex and uneven terrains due to their outstanding stability and flexibility, making them highly suitable in rescue missions, environmental monitoring, and smart agriculture. With the increasing use of quadruped robots in more demanding scenarios, ensuring accurate and stable state estimation in complex environments has become particularly important. Existing state estimation algorithms relying on multi-sensor fusion, such as those using IMU, LiDAR, and visual data, often face challenges on non-stationary terrains due to issues like foot-end slippage or unstable contact, leading to significant state drift. To tackle this problem, this paper introduces a state estimation algorithm that integrates an invariant extended Kalman filter (InEKF) with a disturbance observer, aiming to estimate the motion state of quadruped robots on non-stationary terrains. Firstly, foot-end slippage is modeled as a deviation in body velocity and explicitly included in the state equations, allowing for a more precise representation of how slippage affects the state. Secondly, the state update process integrates both foot-end velocity and position observations to improve the overall accuracy and comprehensiveness of the estimation. Lastly, a foot-end contact probability model, coupled with an adaptive covariance adjustment strategy, is employed to dynamically modulate the influence of the observations. These enhancements significantly improve the filter’s robustness and the accuracy of state estimation in non-stationary terrain scenarios. Experiments conducted with the Jueying Mini quadruped robot on various non-stationary terrains show that the enhanced InEKF method offers notable advantages over traditional filters in compensating for foot-end slippage and adapting to different terrains. Full article
(This article belongs to the Section Sensors and Robotics)
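A minimal sketch of the contact-probability-driven covariance adaptation mentioned in the abstract is given below: the body velocity implied by leg kinematics is fused in a Kalman-style update whose measurement noise is inflated as the estimated contact probability drops, so a slipping foot is down-weighted. The logistic contact model, the plain (non-invariant) update with an identity measurement matrix, and all constants are assumptions made for illustration.

```python
import numpy as np

def contact_probability(foot_force, f0=40.0, k=0.2):
    """Logistic model mapping vertical foot force (N) to a contact probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-k * (foot_force - f0)))

def velocity_update(v_est, P, v_meas, R_base, p_contact, eps=1e-3):
    """Kalman update of body velocity with measurement covariance inflated by 1/p_contact."""
    R = R_base / max(p_contact, eps)           # low contact probability -> large measurement noise
    S = P + R                                   # innovation covariance (measurement matrix H = I)
    K = P @ np.linalg.inv(S)                    # Kalman gain
    v_new = v_est + K @ (v_meas - v_est)
    P_new = (np.eye(3) - K) @ P
    return v_new, P_new

# usage with made-up numbers
v, P = np.zeros(3), np.eye(3) * 0.05
v_from_leg = np.array([0.31, -0.02, 0.01])      # velocity implied by leg kinematics (m/s)
p = contact_probability(foot_force=55.0)
v, P = velocity_update(v, P, v_from_leg, R_base=np.eye(3) * 0.01, p_contact=p)
print(v, p)
```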
Show Figures
Figure 1: Test environments.
Figure 2: Foot slipping scenarios of a quadruped robot during ground contact.
Figure 3: Estimation of foot contact probability during unstable contact events: (a) right front leg; (b) left rear leg.
Figure 4: Position estimates of the quadruped robot in the X, Y, and Z directions on different terrains: (a–c) rugged slope terrain; (d–f) shallow grass terrain; (g–i) deep grass terrain.
Figure 5: Pitch and roll angle estimation of the quadruped robot on different terrains: (a,d) rugged slope terrain; (b,e) shallow grass terrain; (c,f) deep grass terrain.
11 pages, 4207 KiB  
Article
Respiration Monitoring Using Humidity Sensor Based on Hydrothermally Synthesized Two-Dimensional MoS2
by Gwangsik Hong, Mi Eun Kim, Jun Sik Lee, Ja-Yeon Kim and Min-Ki Kwon
Nanomaterials 2024, 14(22), 1826; https://doi.org/10.3390/nano14221826 - 14 Nov 2024
Viewed by 494
Abstract
Breathing is the process of exchanging gases between the human body and the surrounding environment. It plays a vital role in maintaining human health, sustaining life, and supporting various bodily functions. Unfortunately, current methods for monitoring respiration are impractical for medical applications because [...] Read more.
Breathing is the process of exchanging gases between the human body and the surrounding environment. It plays a vital role in maintaining human health, sustaining life, and supporting various bodily functions. Unfortunately, current methods for monitoring respiration are impractical for medical applications because of their high costs and need for bulky equipment. When measuring changes in moisture during respiration, we observed a slow response time for 2D nanomaterial-based resistance measurement methods used in respiration sensors. Through thermal annealing, the crystal structure of MoS2 is transformed from 1T@2H to 2H, allowing the measurement of respiration at more than 30 cycles per minute and enabling analysis of the response. This study highlights the potential of two-dimensional nanomaterials for the development of low-cost and highly sensitive humidity and respiration sensors for various applications. Full article
(This article belongs to the Special Issue 2D Materials for Advanced Sensors: Fabrication and Applications)
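As an illustration of how the current trace of such a resistive sensor might be converted into a breathing rate, the sketch below counts current peaks per minute with SciPy; the synthetic signal, sampling rate, and peak-detection thresholds are assumptions and not the measurement procedure used in the article.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 20.0                                   # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)                # one minute of data
breaths_per_min = 32                        # faster than 30 cycles/min, the regime cited above
# idealized sensor current (arbitrary units): each exhalation raises humidity and the current
current = 1.0 + 0.3 * np.sin(2 * np.pi * breaths_per_min / 60 * t)
current += 0.02 * np.random.default_rng(0).normal(size=t.size)   # measurement noise

# peaks must be separated by at least half the expected breath period
peaks, _ = find_peaks(current, distance=int(fs * 60 / breaths_per_min / 2), prominence=0.1)
rate = len(peaks) / (t[-1] - t[0]) * 60.0
print(f"estimated respiration rate: {rate:.1f} breaths/min")
```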
Show Figures
Figure 1: Fabrication process steps for the MoS2-based respiration sensor.
Figure 2: Optical microscope images at different annealing temperatures: (a) before annealing, (b) 400 °C, (c) 700 °C; (d) Raman spectra of MoS2 without and with annealing (400 and 700 °C); (e) peak frequency difference between the A₁g and E¹₂g peaks.
Figure 3: (a) XRD spectra and S 2p and Mo 3d XPS spectra of MoS2 (b) with and (c) without thermal annealing.
Figure 4: TEM images of MoS2 annealed at 700 °C: (a) low magnification and (b) high-resolution TEM image.
Figure 5: Current response to humidity change for resistive-type, MoS2-based respiration sensors (a) without and (b) with thermal annealing.
Figure 6: Current response of MoS2-based sensors depending on the distance between nose and sensor: (a) without and (b) with thermal annealing; and current response to normal and fast breathing with MoS2: (c) without and (d) with thermal annealing.
Figure 7: Current response to a single breath for sensors using MoS2 (a) without and (b) with thermal annealing.
Figure 8: (a) Mice without cancer cell injection; (b) mice injected with cancer cells; (c) mouse set up for measurement of the respiration response. The red dotted ring in (b) shows the location and appearance of the cancer cells.
Figure 9: Respiratory responses of healthy mice (a,b) and cancer-bearing mice (c,d): (a) week 3, (b) zoomed-in data of (a), week 3, (c) week 0, (d) week 3.