Search Results (7,107)

Search Parameters:
Keywords = multi-sensor

13 pages, 5902 KiB  
Article
Modulation of Surface Elastic Waves and Surface Acoustic Waves by Acoustic–Elastic Metamaterials
by Chang Fu and Tian-Xue Ma
Crystals 2024, 14(11), 997; https://doi.org/10.3390/cryst14110997 (registering DOI) - 18 Nov 2024
Viewed by 36
Abstract
Metamaterials enable the modulation of elastic waves or acoustic waves in unprecedented ways and have a wide range of potential applications. This paper achieves the simultaneous manipulation of surface elastic waves (SEWs) and surface acoustic waves (SAWs) using two-dimensional acousto-elastic metamaterials (AEMMs). The proposed AEMMs are composed of periodic hollow cylinders on the surface of a semi-infinite substrate. The band diagrams and the frequency responses of the AEMMs are numerically calculated through the finite element approach. The band diagrams exhibit simultaneous bandgaps for the SEWs and SAWs, which can also be effectively tuned by the modification of AEMM geometry. Furthermore, we construct the AEMM waveguide by the introduction of a line defect and hence demonstrate its ability to guide the SEWs and SAWs simultaneously. We expect that the proposed AEMMs will contribute to the development of multi-functional wave devices, such as filters for dual waves in microelectronics or liquid sensors that detect more than one physical property.
(This article belongs to the Section Hybrid and Composite Crystalline Materials)
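
For readers who want to experiment with the bandgap idea numerically, the short sketch below (not taken from the paper) shows how simultaneous gaps could be located once eigenfrequencies are available from a band-structure solver: a gap is a frequency range between consecutive bands, and the simultaneous gaps are the intersections of the elastic and acoustic gap lists. The toy eigenfrequency arrays are assumptions standing in for FEM output.

```python
import numpy as np

def band_gaps(bands):
    """Find gaps from a band diagram given as an array of shape
    (n_bands, n_k): a gap opens where one band's maximum over all
    wavevectors lies below the next band's minimum."""
    gaps = []
    for lo, hi in zip(bands[:-1], bands[1:]):
        top, bottom = lo.max(), hi.min()
        if bottom > top:
            gaps.append((top, bottom))
    return gaps

def common_gaps(gaps_a, gaps_b):
    """Intersect two gap lists to find frequency ranges that are
    bandgaps for both wave types simultaneously."""
    out = []
    for a0, a1 in gaps_a:
        for b0, b1 in gaps_b:
            lo, hi = max(a0, b0), min(a1, b1)
            if hi > lo:
                out.append((lo, hi))
    return out

# Toy eigenfrequency data (normalized units), standing in for FEM output.
elastic_bands = np.array([[0.10, 0.12, 0.15], [0.22, 0.25, 0.28]])
acoustic_bands = np.array([[0.08, 0.11, 0.14], [0.20, 0.24, 0.27]])
print(common_gaps(band_gaps(elastic_bands), band_gaps(acoustic_bands)))
```
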
Figures:
Figure 1. Schemes of the AEMM unit cells for the elastic (a) and acoustic (b) waves. (c) Cross-section view of the AEMM unit cell. (d) The first Brillouin zone of the square lattice.
Figure 2. Band diagrams of the AEMM unit cell for the elastic (a) and acoustic (b) waves. (c) Displacement distributions and deformations of the SEW modes marked in (a). (d) Pressure distributions of the SAW modes marked in (b).
Figure 3. Transmission spectra of the SEWs (a) and SAWs (b) in the finite-sized AEMM along the ΓX direction.
Figure 4. Distributions of the displacement (a) and pressure (b) fields of the finite-sized AEMM at different excitation frequencies.
Figure 5. Band diagrams of the AEMM unit cell with different cylinder heights, where the upper and lower panels are the results of the elastic and acoustic waves, respectively.
Figure 6. Transmission curves of the SEWs (upper panel) and SAWs (lower panel) along the ΓX direction for different cylinder heights.
Figure 7. Schemes of the AEMM supercells with a line defect for the elastic (a) and acoustic (b) waves.
Figure 8. Band diagrams of the AEMM supercell for the elastic (a) and acoustic (b) waves, where the direction of wave propagation is the ΓX direction.
Figure 9. (a) Displacement distributions and deformations of the SEW modes marked in Figure 8a. (b) Pressure distributions of the SAW modes marked in Figure 8b.
Figure 10. Schemes for calculating the frequency responses of the AEMM waveguide: (a) solid domain and (b) air domain.
Figure 11. (a) Transmission curves of the SEWs in the AEMM waveguide, where the normalized frequencies corresponding to marker points 1 and 2 are 0.257 and 0.293. (b) Transmission curves of the SAWs in the AEMM waveguide, where the normalized frequencies corresponding to marker points I and II are 0.30 and 0.36.
Figure 12. Distributions of the displacement (a) and pressure (b) fields of the AEMM waveguide at different excitation frequencies.
19 pages, 8885 KiB  
Article
Multi-Task Water Quality Colorimetric Detection Method Based on Deep Learning
by Shenlan Zhang, Shaojie Wu, Liqiang Chen, Pengxin Guo, Xincheng Jiang, Hongcheng Pan and Yuhong Li
Sensors 2024, 24(22), 7345; https://doi.org/10.3390/s24227345 (registering DOI) - 18 Nov 2024
Viewed by 124
Abstract
The colorimetric method, due to its rapid and low-cost characteristics, demonstrates a wide range of application prospects in on-site water quality testing. Current research on colorimetric detection using deep learning algorithms predominantly focuses on single-target classification. To address this limitation, we propose a multi-task water quality colorimetric detection method based on YOLOv8n, leveraging deep learning techniques to achieve a fully automated process of “image input and result output”. Initially, we constructed a dataset that encompasses colorimetric sensor data under varying lighting conditions to enhance model generalization. Subsequently, to effectively improve detection accuracy while reducing model parameters and computational load, we implemented several improvements to the deep learning algorithm, including the MGFF (Multi-Scale Grouped Feature Fusion) module, the LSKA-SPPF (Large Separable Kernel Attention-Spatial Pyramid Pooling-Fast) module, and the GNDCDH (Group Norm Detail Convolution Detection Head). Experimental results demonstrate that the optimized deep learning algorithm excels in precision (96.4%), recall (96.2%), and mAP50 (98.3), significantly outperforming other mainstream models. Furthermore, compared to YOLOv8n, the parameter count and computational load were reduced by 25.8% and 25.6%, respectively. Additionally, precision improved by 2.8%, recall increased by 3.5%, mAP50 enhanced by 2%, and mAP95 rose by 1.9%. These results affirm the substantial potential of our proposed method for rapid on-site water quality detection, offering new technological insights for future water quality monitoring.
(This article belongs to the Special Issue Sensors for Water Quality Monitoring and Assessment)
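
As a hedged illustration only: the paper's MGFF, LSKA-SPPF, and GNDCDH modules are custom and are not reproduced here, but a baseline YOLOv8n can be fine-tuned on a colorimetric-patch dataset with the ultralytics package roughly as follows; the dataset file water_colorimetric.yaml and test image name are hypothetical placeholders.

```python
# Baseline fine-tuning sketch with the ultralytics package (pip install ultralytics).
# The custom MGFF / LSKA-SPPF / GNDCDH modules described in the paper are not
# reproduced here; "water_colorimetric.yaml" is a hypothetical dataset config.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # pretrained YOLOv8n weights
model.train(data="water_colorimetric.yaml",     # images of colorimetric patches
            epochs=100, imgsz=640)
metrics = model.val()                           # reports precision, recall, mAP50, mAP50-95
results = model.predict("test_strip.jpg")       # inference on a new photo
```
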
Figures:
Figure 1. The overview of our method.
Figure 2. The smartphone photographs the colorimetric sensor.
Figure 3. Data enhancement. (a) Original image; (b) random adjustment of contrast and brightness; (c) motion blur; (d) random angle rotation.
Figure 4. The improved network structure.
Figure 5. The MGFF module structure.
Figure 6. (a) Splitting depth-wise convolution and depth-wise dilated convolution; (b) combining depth-wise convolution and depth-wise dilated convolution achieves large-scale convolution kernels.
Figure 7. Structural diagram of the LSKA module.
Figure 8. The GNDCDH module structure.
Figure 9. Model detection results in different scenarios.
Figure 10. Visual statistical results.
Figure 11. The comparison of the HiResCAM heatmaps between YOLOv8n and the model proposed in this paper.
17 pages, 12186 KiB  
Article
A Model-Driven Approach to Extract Multi-Source Fault Features of a Screw Pump
by Weigang Wen, Jingqi Qin, Xiangru Xu, Kaifu Mi and Meng Zhou
Processes 2024, 12(11), 2571; https://doi.org/10.3390/pr12112571 (registering DOI) - 17 Nov 2024
Viewed by 198
Abstract
Faulty working conditions of screw pumps affect the stability of oil production. At project sites, different sensors are used simultaneously to collect multi-dimensional signals; the fault labels and locations in the data are unclear, and how to comprehensively use multi-source information for effective fault feature extraction has become an urgent issue. Existing diagnostic methods use a single signal or only part of a signal and do not fully exploit the acquired data, which makes it difficult to achieve the required diagnostic accuracy. This paper focuses on a model-driven approach to extract multi-source fault features of screw pumps. Firstly, it constructs a fault data model (FDM) by analyzing the fault mechanism of the screw pump. Secondly, it uses the FDM to select an effective data set. Thirdly, it constructs a multi-dimensional fault feature extraction model (MDFEM) to extract signal features and data features; unlike traditional methods that use only one or two signals, multi-source signals are used comprehensively for fault feature extraction. Finally, after feature selection, unsupervised fault diagnosis is achieved using the k-means method. Experimental verification shows that the method can comprehensively use multi-source information to construct an effective data set and extract multi-dimensional, effective fault features for screw pump fault diagnosis.
(This article belongs to the Section Process Control and Monitoring)
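
A minimal sketch of the final unsupervised step, assuming simple per-window statistics in place of the paper's FDM/MDFEM features and synthetic stand-ins for the multi-source signals; the cluster count and the Calinski–Harabasz index (CHI) mirror the kind of evaluation shown in the figures.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

def time_features(window):
    """Simple per-window statistics; the paper's MDFEM features are richer."""
    return [window.mean(), window.std(), np.abs(window).max(),
            np.sqrt(np.mean(window ** 2))]  # mean, std, peak, RMS

rng = np.random.default_rng(0)
# Toy multi-source data: (n_windows, n_samples) per sensor, e.g. current and pressure.
current = rng.normal(size=(200, 1024))
pressure = rng.normal(size=(200, 1024))
X = np.array([time_features(c) + time_features(p) for c, p in zip(current, pressure)])

X = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("CHI:", calinski_harabasz_score(X, labels))  # the CHI used to compare feature sets
```
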
Figures:
Figure 1. Screw pump fault diagnosis framework.
Figure 2. Screw pump fault data model.
Figure 3. Slide sampling method.
Figure 4. Heat map of signal correlation coefficients.
Figure 5. Results of comparison of Experiment I. (a) Clustering results of Feature Set-1; (b) clustering results of Feature Set-2.
Figure 6. CHI of different feature sets with the number of clusters.
Figure 7. Diagnosis results for different datasets after clustering: (a) average of accuracy; (b) RMSE.
Figure 8. Results of comparison of Experiment II. (a) Clustering results of Feature Set-3; (b) clustering results of Feature Set-4; (c) clustering results of Feature Set-5; (d) clustering results of Feature Set-6.
Figure 9. CHI of different feature sets with the number of clusters.
Figure 10. Diagnosis results for different datasets after clustering: (a) average of accuracy; (b) RMSE.
16 pages, 8192 KiB  
Perspective
Embedding AI-Enabled Data Infrastructures for Sustainability in Agri-Food: Soft-Fruit and Brewery Use Case Perspectives
by Milan Markovic, Andy Li, Tewodros Alemu Ayall, Nicholas J. Watson, Alexander L. Bowler, Mel Woods, Peter Edwards, Rachael Ramsey, Matthew Beddows, Matthias Kuhnert and Georgios Leontidis
Sensors 2024, 24(22), 7327; https://doi.org/10.3390/s24227327 (registering DOI) - 16 Nov 2024
Viewed by 352
Abstract
The agri-food sector is undergoing a comprehensive transformation as it transitions towards net zero. To achieve this, fundamental changes and innovations are required, including changes in how food is produced and delivered to customers, new technologies, data and physical infrastructures, and algorithmic advancements. In this paper, we explore the opportunities and challenges of deploying AI-based data infrastructures for sustainability in the agri-food sector by focusing on two case studies: soft-fruit production and brewery operations. We investigate the potential benefits of incorporating Internet of Things (IoT) sensors and AI technologies for improving the use of resources, reducing carbon footprints, and enhancing decision-making. We identify user engagement with new technologies as a key challenge, together with issues in data quality arising from environmental volatility, difficulties in generalising models, including those designed for carbon calculators, and socio-technical barriers to adoption. We highlight and advocate for user engagement, more granular availability of sensor, production, and emissions data, and more transparent carbon footprint calculations. Our proposed future directions include semantic data integration to enhance interoperability, the generation of synthetic data to overcome the lack of real-world farm data, and multi-objective optimisation systems to model the competing interests between yield and sustainability goals. In general, we argue that AI is not a silver bullet for net zero challenges in the agri-food industry, but at the same time, AI solutions, when appropriately designed and deployed, can be a useful tool when operating in synergy with other approaches.
(This article belongs to the Special Issue Application of Sensors Technologies in Agricultural Engineering)
Figures:
Figure 1. Temp./humidity sensor outside tunnel.
Figure 2. Temp./humidity and light sensor inside tunnel.
Figure 3. Flow meter inside tunnel.
Figure 4. Fermentation sensor.
Figure 5. Wireless electricity monitor.
17 pages, 5063 KiB  
Article
Enhancing Recovery of Structural Health Monitoring Data Using CNN Combined with GRU
by Nguyen Thi Cam Nhung, Hoang Nguyen Bui and Tran Quang Minh
Infrastructures 2024, 9(11), 205; https://doi.org/10.3390/infrastructures9110205 (registering DOI) - 16 Nov 2024
Viewed by 234
Abstract
Structural health monitoring (SHM) plays a crucial role in ensuring the safety of infrastructure in general, especially critical infrastructure such as bridges. SHM systems allow the real-time monitoring of structural conditions and early detection of abnormalities. This enables managers to make accurate decisions during the operation of the infrastructure. However, for various reasons, data from SHM systems may be interrupted or faulty, leading to serious consequences. This study proposes using a Convolutional Neural Network (CNN) combined with Gated Recurrent Units (GRUs) to recover lost data from accelerometer sensors in SHM systems. CNNs are adept at capturing spatial patterns in data, making them highly effective for recognizing localized features in sensor signals. At the same time, GRUs are designed to model sequential dependencies over time, making the combined architecture particularly suited for time-series data. A dataset collected from a real bridge structure will be used to validate the proposed method. Different cases of data loss are considered to demonstrate the feasibility and potential of the CNN-GRU approach. The results show that the CNN-GRU hybrid network effectively recovers data in both single-channel and multi-channel data loss scenarios.
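
A minimal PyTorch sketch of a CNN-GRU reconstruction network of the kind described, assuming three surviving accelerometer channels predicting one lost channel; the layer sizes and window length are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNGRU(nn.Module):
    """Minimal CNN-GRU: 1D convolutions capture local patterns in the
    remaining acceleration channels, a GRU models temporal dependence,
    and a linear head reconstructs the missing channel sample-by-sample."""
    def __init__(self, in_channels=3, hidden=64, out_channels=1):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.gru = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_channels)

    def forward(self, x):                        # x: (batch, in_channels, time)
        f = self.cnn(x)                          # (batch, 64, time)
        f, _ = self.gru(f.transpose(1, 2))       # (batch, time, hidden)
        return self.head(f).transpose(1, 2)      # (batch, out_channels, time)

model = CNNGRU()
healthy = torch.randn(8, 3, 1024)                # surviving accelerometer channels
recovered = model(healthy)                       # estimate of the lost channel
loss = nn.functional.mse_loss(recovered, torch.randn(8, 1, 1024))
loss.backward()
```
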
Figures:
Figure 1. Convolutional Neural Networks.
Figure 2. The structure of GRU network [44].
Figure 3. Data recovery process using CNN-GRU.
Figure 4. Thang Long Bridge: (a) side view; (b) lower floor.
Figure 5. Arrangement of measuring points at Thang Long Bridge.
Figure 6. Data collection: (a) equipment station; (b) sensors’ installation location.
Figure 7. Network training results in single-channel data recovery scenario: (a) training convergence curve; (b) mean absolute error.
Figure 8. Recovery data segment using CNN-GRU; CNN and GRU.
Figure 9. Mode shapes of two datasets.
Figure 10. Network training results in multi-channel data recovery scenario: (a) training convergence curve; (b) mean absolute error.
Figure 11. MAC values: (a) two-sensor data recovery; (b) three-sensor data recovery; (c) four-sensor data recovery.
15 pages, 941 KiB  
Article
Embedding Tree-Based Intrusion Detection System in Smart Thermostats for Enhanced IoT Security
by Abbas Javed, Muhammad Naeem Awais, Ayyaz-ul-Haq Qureshi, Muhammad Jawad, Jehangir Arshad and Hadi Larijani
Sensors 2024, 24(22), 7320; https://doi.org/10.3390/s24227320 (registering DOI) - 16 Nov 2024
Viewed by 215
Abstract
IoT devices with limited resources, and in the absence of gateways, become vulnerable to various attacks, such as denial of service (DoS) and man-in-the-middle (MITM) attacks. Intrusion detection systems (IDS) are designed to detect and respond to these threats in IoT environments. While machine learning-based IDS have typically been deployed at the edge (gateways) or in the cloud, in the absence of gateways, the IDS must be embedded within the sensor nodes themselves. Available datasets mainly contain features extracted from network traffic at the edge (e.g., Raspberry Pi/computer) or cloud servers. We developed a unique dataset, the Intrusion Detection in the Smart Homes (IDSH) dataset, which is based on features retrievable from microcontroller-based IoT devices. In this work, a Tree-based IDS is embedded into a smart thermostat for real-time intrusion detection. The results demonstrated that the IDS achieved an accuracy of 98.71% for binary classification with an inference time of 276 microseconds, and an accuracy of 97.51% for multi-classification with an inference time of 273 microseconds. Real-time testing showed that the smart thermostat is capable of detecting DoS and MITM attacks without relying on a gateway or cloud.
(This article belongs to the Special Issue Sensor Data Privacy and Intrusion Detection for IoT Networks)
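
A rough sketch of the general recipe (a tree-based classifier on device-side features, with per-sample inference latency measured), using synthetic data in place of the IDSH dataset; the actual on-microcontroller deployment, quantization, and feature set are not reproduced here.

```python
import time
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Synthetic stand-in for microcontroller-side features (e.g. packet rate,
# connection count, heap usage); the real IDSH dataset is not reproduced here.
X = rng.normal(size=(5000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)   # toy "attack" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier(n_estimators=50, max_depth=3).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

t0 = time.perf_counter()
for row in X_te[:1000]:
    clf.predict(row.reshape(1, -1))               # one sample at a time, as on-device
print("mean inference time: %.1f µs" % ((time.perf_counter() - t0) / 1000 * 1e6))
```
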
Figures:
Figure 1. Proposed architecture of embedded IDS for smart thermostats.
Figure 2. Dataset collection on smart thermostats.
Figure 3. Comparison of IDS implemented with quantization and without quantization.
Figure 4. Comparison of IDS implemented with CatBoost and XGBoost on the smart thermostat.
20 pages, 9833 KiB  
Article
Reconstruction of Hourly Gap-Free Sea Surface Skin Temperature from Multi-Sensors
by Qianguang Tu, Zengzhou Hao, Dong Liu, Bangyi Tao, Liangliang Shi and Yunwei Yan
Remote Sens. 2024, 16(22), 4268; https://doi.org/10.3390/rs16224268 (registering DOI) - 15 Nov 2024
Viewed by 265
Abstract
The sea surface skin temperature (SSTskin) is of critical importance with regard to air–sea interactions and marine carbon circulation. At present, no single remote sensor is capable of providing a gap-free SSTskin. The use of data fusion techniques is therefore essential for the purpose of filling these gaps. The extant fusion methodologies frequently fail to account for the influence of depth disparities and the diurnal variability of sea surface temperatures (SSTs) retrieved from multi-sensors. We have developed a novel approach that integrates depth and diurnal corrections and employs advanced data fusion techniques to generate hourly gap-free SST datasets. The General Ocean Turbulence Model (GOTM) is employed to model the diurnal variability of the SST profile, incorporating depth and diurnal corrections. Subsequently, the corrected SSTs at the same observed time and depth are blended using the Markov method and the remaining data gaps are filled with optimal interpolation. The overall precision of the hourly gap-free SSTskin generated demonstrates a mean bias of −0.14 °C and a root mean square error of 0.57 °C, which is comparable to the precision of satellite observations. The hourly gap-free SSTskin is vital for improving our comprehension of air–sea interactions and monitoring critical oceanographic processes with high-frequency variability.
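
The sketch below is not the paper's Markov/optimal-interpolation implementation; it only illustrates the core blending idea with an inverse-variance (minimum-variance) combination of several sensors' depth- and diurnally-corrected SST fields, with gaps handled by masking. The error standard deviations and toy grid are assumptions.

```python
import numpy as np

def blend(sst_stack, err_std):
    """Inverse-variance (minimum-variance) blend of several sensors' SST
    fields after they have been corrected to a common depth and time.
    sst_stack: (n_sensors, ny, nx) with NaN where a sensor has no data.
    err_std:   per-sensor observation error standard deviations."""
    w = 1.0 / np.asarray(err_std)[:, None, None] ** 2
    w = np.where(np.isnan(sst_stack), 0.0, w)          # drop gaps from the sum
    num = np.nansum(w * np.nan_to_num(sst_stack), axis=0)
    den = w.sum(axis=0)
    out = np.full(num.shape, np.nan)
    np.divide(num, den, out=out, where=den > 0)        # cells seen by no sensor stay NaN
    return out

# Toy example: three sensors on a 1x2 grid, one sensor missing at each pixel.
stack = np.array([[[20.1, np.nan]], [[20.4, 20.5]], [[np.nan, 20.3]]])
print(blend(stack, err_std=[0.3, 0.5, 0.4]))
```
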
Figures:
Graphical abstract.
Figure 1. The overall flowchart of multi-sensors fusion for SSTskin.
Figure 2. The DV of SSTskin modeled by GOTM on 8 May 2007.
Figure 3. Histogram of the difference between MTSAT-observed DV and GOTM DV on 8 May 2007.
Figure 4. GOTM of the SST at 2 p.m. on 8 May 2007. (a) The SST profile at 122°E and 35.25°N; (b) the difference in the spatial distributions between SSTskin and SSTsubskin.
Figure 5. (a) The original hourly MTSAT SST on 8 May 2007. (b) The diurnal variation-corrected (normalized) hourly MTSAT SST on 8 May 2007.
Figure 6. (a) Number of sensors available on 8 May 2007; (b) the fusion SST at 10:30 a.m. using Markov estimation.
Figure 7. Covariance structure function of the East China Sea estimated from MTSAT in 2007. The spatial covariance functions at (a) zonal and (b) meridional directions for the SST variations. Temporal correlation with time lags computed using hourly SST (c). Red line is the fitting function. Vertical bars represent ±1 standard deviation.
Figure 8. The hourly gap-free SSTskin on 8 May 2007.
Figure 9. The diurnal variation of SSTskin at 124°E and 28°N on 8 May 2007.
Figure 10. (a) Scatter plot between in situ SSTskin and fusion SSTskin. (b) The hourly mean bias and standard deviation during 2007.
21 pages, 11350 KiB  
Article
A Fast Obstacle Detection Algorithm Based on 3D LiDAR and Multiple Depth Cameras for Unmanned Ground Vehicles
by Fenglin Pang, Yutian Chen, Yan Luo, Zigui Lv, Xuefei Sun, Xiaobin Xu and Minzhou Luo
Drones 2024, 8(11), 676; https://doi.org/10.3390/drones8110676 (registering DOI) - 15 Nov 2024
Viewed by 250
Abstract
With the advancement of technology, unmanned ground vehicles (UGVs) have shown increasing application value in various tasks, such as food delivery and cleaning. A key capability of UGVs is obstacle detection, which is essential for avoiding collisions during movement. Current mainstream methods use point cloud information from onboard sensors, such as light detection and ranging (LiDAR) and depth cameras, for obstacle perception. However, the substantial volume of point clouds generated by these sensors, coupled with the presence of noise, poses significant challenges for efficient obstacle detection. Therefore, this paper presents a fast obstacle detection algorithm designed to ensure the safe operation of UGVs. Building on multi-sensor point cloud fusion, an efficient ground segmentation algorithm based on multi-plane fitting and plane combination is proposed in order to prevent ground points from being considered as obstacles. Additionally, instead of point cloud clustering, a vertical projection method is used to count the distribution of the potential obstacle points by converting the point cloud to a 2D polar coordinate system. Points in fan-shaped areas with a density lower than a certain threshold are treated as noise. To verify the effectiveness of the proposed algorithm, a cleaning UGV equipped with one LiDAR sensor and four depth cameras is used to test the performance of obstacle detection in various environments. Several experiments have demonstrated the effectiveness and real-time capability of the proposed algorithm. The experimental results show that the proposed algorithm achieves an over 90% detection rate within a 20 m sensing area and has an average processing time of just 14.1 ms per frame.
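
A minimal numpy sketch of the vertical-projection step described above: non-ground points are projected onto a 2D polar (range, azimuth) grid, and bins whose point count falls below a threshold are discarded as noise. The bin resolution and count threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def polar_density_filter(points, r_max=20.0, r_bins=40, a_bins=72, min_count=5):
    """Project non-ground points onto the ground plane, bin them in a polar
    (range, azimuth) grid, and keep only points from bins dense enough to be
    real obstacles; sparse bins are treated as noise."""
    r = np.hypot(points[:, 0], points[:, 1])
    a = np.arctan2(points[:, 1], points[:, 0])          # -pi .. pi
    keep_range = r < r_max
    ri = np.clip((r / r_max * r_bins).astype(int), 0, r_bins - 1)
    ai = ((a + np.pi) / (2 * np.pi) * a_bins).astype(int) % a_bins
    counts = np.zeros((r_bins, a_bins), dtype=int)
    np.add.at(counts, (ri[keep_range], ai[keep_range]), 1)
    dense = counts[ri, ai] >= min_count
    return points[keep_range & dense]

pts = np.random.default_rng(2).uniform(-20, 20, size=(10000, 3))  # toy cloud (x, y, z)
obstacles = polar_density_filter(pts)
print(obstacles.shape)
```
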
Figures:
Figure 1. Overall process of the proposed algorithm.
Figure 2. The schematic diagram of coordinate transformation. T_CiL represents the relative pose relationship from the i-th depth camera to the LiDAR sensor.
Figure 3. The schematic diagram of ground point cloud segmentation.
Figure 4. Schematic diagram of fan-shaped area retrieval.
Figure 5. The loaded sensors. (a) Leishen C32W LiDAR; (b) Orbbec DaBai DCW2; (c) Orbbec Dabai MAX.
Figure 6. The cleaning UGV equipped with these sensors.
Figure 7. The vertical view of the fused point cloud in the main coordinate system (warehouse).
Figure 8. The vertical view of the fused point cloud in the main coordinate system (parking).
Figure 9. The performance of the ground segmentation effect by Patchwork++ in the warehouse environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 10. The performance of the ground segmentation effect by DipG-Seg in the warehouse environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 11. The performance of the ground segmentation effect by the proposed method in the warehouse environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 12. The performance of the ground segmentation effect by Patchwork++ in the parking environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 13. The performance of the ground segmentation effect by DipG-Seg in the parking environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 14. The performance of the ground segmentation effect by the proposed method in the parking environment. Red represents the ground point cloud and green represents the non-ground point cloud.
Figure 15. Detailed image of the ground segmentation effect of the proposed algorithm. (a) Warehouse; (b) parking.
Figure 16. The vertical view of the obstacle detection effect using Euclidean clustering (warehouse).
Figure 17. The vertical view of the obstacle detection effect using CenterPoint (warehouse).
Figure 18. The vertical view of the obstacle detection effect using the proposed algorithm in smaller hyperparameter settings (warehouse).
Figure 19. The vertical view of the obstacle detection effect using the proposed algorithm in larger hyperparameter settings (warehouse).
Figure 20. The vertical view of the obstacle detection effect using Euclidean clustering (parking).
Figure 21. The vertical view of the obstacle detection effect using CenterPoint (parking).
Figure 22. The vertical view of the obstacle detection effect using the proposed algorithm in smaller hyperparameter settings (parking).
Figure 23. The vertical view of the obstacle detection effect using the proposed algorithm in larger hyperparameter settings (parking).
23 pages, 4387 KiB  
Article
Multisensor Feature Selection for Maritime Target Estimation
by Sun Choi and Jhonghyun An
Electronics 2024, 13(22), 4497; https://doi.org/10.3390/electronics13224497 - 15 Nov 2024
Viewed by 240
Abstract
This paper introduces a preprocessing and feature selection technique for maritime target estimation. Given the distinct challenges of the maritime environment and the use of multiple sensors, we propose a target estimation model designed to achieve high accuracy while minimizing computational costs through suitable data preprocessing and feature selection. The experimental results demonstrate excellent performance, with the mean square error (MSE) reduced by about 99%. This approach is expected to enhance vessel tracking in situations where vessel estimation sensors, such as the automatic identification system (AIS), are disabled. By enabling reliable vessel tracking, this technique can aid in the detection of illegal vessels.
(This article belongs to the Section Electrical and Autonomous Vehicles)
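
As an illustration of the kind of comparison reported (scaling, feature selection, then regression MSE), the sketch below uses synthetic data and RFE-based selection as a stand-in for the paper's hierarchical feature selection; the scaler choice echoes the Normalizer mentioned in the figure captions, and all dimensions are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.preprocessing import Normalizer
from sklearn.feature_selection import RFE
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Synthetic multi-sensor features standing in for the maritime dataset.
X, y = make_regression(n_samples=600, n_features=20, n_informative=6, noise=5.0,
                       random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

scaler = Normalizer().fit(X_tr)                    # one of the scalers compared in the paper
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

base = RandomForestRegressor(n_estimators=100, random_state=0)
mse_all = mean_squared_error(y_te, base.fit(X_tr_s, y_tr).predict(X_te_s))

selector = RFE(RandomForestRegressor(n_estimators=100, random_state=0),
               n_features_to_select=6).fit(X_tr_s, y_tr)   # stand-in for hierarchical selection
mse_sel = mean_squared_error(
    y_te, base.fit(selector.transform(X_tr_s), y_tr).predict(selector.transform(X_te_s)))
print(f"MSE all features: {mse_all:.2f}, selected features: {mse_sel:.2f}")
```
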
Figures:
Figure 1. Overall stream from raw data input to target estimation output.
Figure 2. Data preprocessing.
Figure 3. Comparison of data before and after synchronization. The different colors in the graph represent each of the 20 experiments. (a) is the raw data, and (b) is the synchronized data, aligned to the point where the target is the closest.
Figure 4. Comparison of data before and after outlier handling. The different colors in the graph represent each of the 20 experiments. (a) shows the data with outliers and (b) shows the data after outlier removal, which allows for clearer visualization.
Figure 5. Results for various scaling methods. The different colors in the graph represent each of the 20 experiments. (a) shows the visualization of the original data without any applied scaling. (b–f) display the visualizations for each of the different scaling methods.
Figure 6. Feature selection.
Figure 7. STL decomposition example for magnetic field sensor data. The plots show the original, trend, seasonal, and residual components from the top, respectively. For residual-restricting thresholds in the last plot, the green dashed line is the quantile-based threshold which only restricts the top 5% and bottom 5% of residuals. The blue dashed line is the SR-based threshold.
Figure 8. Regression results for all preprocessing combinations before feature selection. Different marker shapes represent the 9 regressors, while color variations indicate the 5 scaling methods. Filled markers denote the SR denoising threshold, markers with black outlines represent QR, and empty markers indicate TS. Shapes indicate the models, with pentagons for XGB, circles for RandomForest, squares for GradientBoosting, triangles for DecisionTree, right-pointing triangles for Ridge, left-pointing triangles for Lasso, hexagons for SVR, diamonds for Linear, and inverted triangles for KNeighbors. Colors represent the scalers, where gray is MinMax, orange is Standard, green is Robust, purple is MaxAbs, and blue is Normalizer.
Figure 9. Results of sensor stability. The blue bar chart represents the mean correlation, and the red line chart represents the standard deviation.
Figure 10. Final regression results using the selected features, with Normalizer as the scaling method and TS as the denoising threshold.
Figure 11. Target estimation.
Figure 12. Comparison of average MSE between different feature selection methods. Hierarchical method shows the lowest MSE among other methods.
Figure 13. Comparison of the average MSE across different scaler methods and denoising thresholds. (a) shows the graph for 5 scaler methods, showing that the Normalizer yields the lowest average MSE. (b) shows the graph for 3 denoising thresholds, with the TS threshold achieving the lowest average MSE.
Figure 14. Visualization of each sensor. The different colors in the graph represent each of the 20 experiments. (a) shows data from the acoustic sensor, which was frequently selected as a key sensor in the hierarchical feature selection process. In contrast, (b) shows data from the specific energy sensor, which was identified as less important. By examining these visualizations of the actual sensor data, we can assess the validity of the feature selection results.
Figure 15. Qualitative results for target estimation using LSTM. The graph represents the distance between sensors and the target. The x-axis represents time, whereas the y-axis represents target distance. The blue line shows the original target distance. Gray, green, and fuchsia dashed lines represent denoised, existing feature selection, and proposed hierarchical feature selection, respectively.
32 pages, 11087 KiB  
Article
Path Planning and Motion Control of Robot Dog Through Rough Terrain Based on Vision Navigation
by Tianxiang Chen, Yipeng Huangfu, Sutthiphong Srigrarom and Boo Cheong Khoo
Sensors 2024, 24(22), 7306; https://doi.org/10.3390/s24227306 - 15 Nov 2024
Viewed by 438
Abstract
This article delineates the enhancement of an autonomous navigation and obstacle avoidance system for a quadruped robot dog. Part one of this paper presents the integration of a sophisticated multi-level dynamic control framework, utilizing Model Predictive Control (MPC) and Whole-Body Control (WBC) from MIT Cheetah. The system employs an Intel RealSense D435i depth camera for depth vision-based navigation, which enables high-fidelity 3D environmental mapping and real-time path planning. A significant innovation is the customization of the EGO-Planner to optimize trajectory planning in dynamically changing terrains, coupled with the implementation of a multi-body dynamics model that significantly improves the robot’s stability and maneuverability across various surfaces. The experimental results show that the RGB-D system exhibits superior velocity stability and trajectory accuracy to the SLAM system, with a 20% reduction in the cumulative velocity error and a 10% improvement in path tracking precision. The experimental results also show that the RGB-D system achieves smoother navigation, requiring 15% fewer iterations for path planning, and a 30% faster success rate recovery in challenging environments. The successful application of these technologies in simulated urban disaster scenarios suggests promising future applications in emergency response and complex urban environments. Part two of this paper presents the development of a robust path planning algorithm for a robot dog on rough terrain based on attached binocular vision navigation. We use a commercial off-the-shelf (COTS) robot dog. An optical CCD binocular vision dynamic tracking system is used to provide environment information. Likewise, the pose and posture of the robot dog are obtained from the robot’s own sensors, and a kinematics model is established. Then, a binocular vision tracking method is developed to determine the optimal path, provide proposals (commands to actuators) for the position and posture of the bionic robot, and achieve stable motion on tough terrains. The terrain is assumed to be gently uneven to begin with and subsequently proceeds to a rougher surface. This work consists of four steps: (1) pose and position data are acquired from the robot dog’s own inertial sensors, (2) terrain and environment information is input from onboard cameras, (3) information is fused (integrated), and (4) path planning and motion control proposals are made. Ultimately, this work provides a robust framework for future developments in the vision-based navigation and control of quadruped robots, offering potential solutions for navigating complex and dynamic terrains.
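
Not the EGO-Planner itself, but a small sketch of the B-spline trajectory representation it relies on: a coarse set of 2D waypoints is fitted with a parametric cubic B-spline (via scipy) and re-sampled densely for tracking. The waypoints and smoothing factor are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Waypoints from a coarse 2D plan (start, around an obstacle, goal).
waypoints = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 1.0], [3.0, 1.2], [4.0, 0.5]])

# Fit a parametric cubic B-spline through the waypoints; s controls smoothing.
tck, u = splprep([waypoints[:, 0], waypoints[:, 1]], k=3, s=0.01)

# Re-sample the spline densely to get a smooth trajectory for the tracker.
u_fine = np.linspace(0.0, 1.0, 100)
x_s, y_s = splev(u_fine, tck)
print(len(x_s), x_s[0], y_s[0])
```
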
Figures:
Figure 1. Simplified box model of the Lite3P quadruped robotic dog.
Figure 2. Internal sensor arrangement of the quadruped robotic dog.
Figure 3. Dynamic control flowchart.
Figure 4. MPC flowchart.
Figure 5. WBC flowchart [30].
Figure 6. Robot coordinates and joint point settings [30].
Figure 7. Intel D435i and velodyne LIDAR.
Figure 8. ICP diagram.
Figure 9. Comparison of before and after modifying the perception region.
Figure 10. Point cloud processing flowchart.
Figure 11. {p, v} generation: (a) the creation of {p, v} pairs for collision points; (b) the process of generating anchor points and repulsive vectors for dynamic obstacle avoidance [41].
Figure 12. Overall framework of 2D EGO-Planner.
Figure 13. Robot initialization and control process in Gazebo simulation: (a) Gazebo environment creation, (b) robot model import, (c) torque balance mode activation, and (d) robot stepping and rotation in simulation.
Figure 14. Joint rotational angles of FL and RL legs.
Figure 15. Joint angular velocities of FL and RL legs.
Figure 16. Torque applied to FL and RL joints during the gait cycle.
Figure 17. The robot navigating in a simple environment using a camera.
Figure 18. The robot navigating in a complex environment using a camera.
Figure 19. A 2D trajectory showing start and goal positions, obstacles, and rough path.
Figure 20. Initial environment setup.
Figure 21. The robot starts navigating in a simple environment with a static obstacle (brown box).
Figure 22. Dynamic Obstacle 1 introduced: the robot detects a new obstacle and recalculates its path.
Figure 23. Dynamic Obstacle 2 introduced: after avoiding the first obstacle, a second obstacle is introduced and detected by the planner.
Figure 24. Approaching the target: the robot adjusts its path to approach the target point as the distance shortens.
Figure 25. Reaching the target: the robot completes its path and reaches the designated target point.
Figure 26. Real-time B-spline trajectory updates in response to dynamic obstacles. Set 1 (orange) shows the initial path avoiding static obstacles. When the first dynamic obstacle is detected, the EGO-Planner updates the path (Set 2, blue) using local optimization. A second obstacle prompts another adjustment (Set 3, green), guiding the robot smoothly towards the target as trajectory updates become more frequent.
Figure 27. The robot navigating a simple environment using SLAM.
Figure 28. The robot navigating a complex environment using SLAM.
Figure 29. A 2D trajectory showing start and goal positions, obstacles, and the planned path in a complex environment using SLAM.
Figure 30. Navigation based on RGB-D camera.
Figure 31. Navigation based on SLAM.
Figure 32. Velocity deviation based on RGB-D camera.
Figure 33. Velocity deviation based on SLAM.
Figure 34. Cumulative average iterations.
Figure 35. Cumulative success rate.
17 pages, 2380 KiB  
Article
Nondestructive Detection of Litchi Stem Borers Using Multi-Sensor Data Fusion
by Zikun Zhao, Sai Xu, Huazhong Lu, Xin Liang, Hongli Feng and Wenjing Li
Agronomy 2024, 14(11), 2691; https://doi.org/10.3390/agronomy14112691 - 15 Nov 2024
Viewed by 241
Abstract
To enhance lychee quality assessment and address inconsistencies in post-harvest pest detection, this study presents a multi-source fusion approach combining hyperspectral imaging, X-ray imaging, and visible/near-infrared (Vis/NIR) spectroscopy. Traditional single-sensor methods are limited in detecting pest damage, particularly in lychees with complex skins, as they often fail to capture both external and internal fruit characteristics. By integrating multiple sensors, our approach overcomes these limitations, offering a more accurate and robust detection system. Significant differences were observed between pest-free and infested lychees. Pest-free lychees exhibited higher hardness, soluble sugars (11% higher in flesh, 7% higher in peel), vitamin C (50% higher in flesh, 2% higher in peel), polyphenols, anthocyanins, and ORAC values (26%, 9%, and 14% higher, respectively). The Vis/NIR data processed with SG+SNV+CARS yielded a partial least squares regression (PLSR) model with an R2 of 0.82, an RMSE of 0.18, and accuracy of 89.22%. The hyperspectral model, using SG+MSC+SPA, achieved an R2 of 0.69, an RMSE of 0.23, and 81.74% accuracy, while the X-ray method with support vector regression (SVR) reached an R2 of 0.69, an RMSE of 0.22, and 76.25% accuracy. Through feature-level fusion, Recursive Feature Elimination with Cross-Validation (RFECV), and dimensionality reduction using PCA, we optimized hyperparameters and developed a Random Forest model. This model achieved 92.39% accuracy in pest detection, outperforming the individual methods by 3.17%, 10.25%, and 16.14%, respectively. The multi-source fusion approach also improved the overall accuracy by 4.79%, highlighting the critical role of sensor fusion in enhancing pest detection and supporting the development of automated non-destructive systems for lychee stem borer detection.
(This article belongs to the Section Precision and Digital Agriculture)
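
A hedged sketch of the reported workflow, feature-level fusion followed by RFECV, PCA, and a Random Forest classifier, using synthetic stand-ins for the Vis/NIR, hyperspectral, and X-ray features; dimensions and hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n = 400
# Synthetic stand-ins for the three sources: Vis/NIR, hyperspectral, and X-ray features.
vis_nir, hyper, xray = rng.normal(size=(n, 30)), rng.normal(size=(n, 50)), rng.normal(size=(n, 10))
y = (vis_nir[:, 0] + hyper[:, 0] + xray[:, 0] > 0).astype(int)   # toy infested / pest-free label

X = np.hstack([vis_nir, hyper, xray])            # feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0)
selector = RFECV(rf, step=5, cv=5).fit(X_tr, y_tr)               # recursive elimination with CV
pca = PCA(n_components=min(10, selector.n_features_)).fit(selector.transform(X_tr))

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(pca.transform(selector.transform(X_tr)), y_tr)
pred = clf.predict(pca.transform(selector.transform(X_te)))
print("accuracy:", accuracy_score(y_te, pred))
```
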
Figures:
Figure 1. Schematic diagram of the visible/near-infrared spectroscopy acquisition device.
Figure 2. Schematic diagram of the hyperspectral imaging acquisition device.
Figure 3. Schematic diagram of the X-ray image acquisition system.
Figure 4. Multi-source information fusion flowchart.
Figure 5. (a) Raw visible/near-infrared spectrum, (b) visible/near-infrared spectrum after SG+SNV preprocessing.
Figure 6. (a) Raw hyperspectral spectrum, (b) hyperspectral spectrum after SG+MSC preprocessing.
Figure 7. PCA classification of grayscale values in X-ray imaging feature regions for stem-borer-infested and non-infested fruit.
Figure 8. (a) Litchi fruit without pests, (b) litchi fruit with pests.
25 pages, 2899 KiB  
Article
Learning Omni-Dimensional Spatio-Temporal Dependencies for Millimeter-Wave Radar Perception
by Hang Yan, Yongji Li, Luping Wang and Shichao Chen
Remote Sens. 2024, 16(22), 4256; https://doi.org/10.3390/rs16224256 - 15 Nov 2024
Viewed by 382
Abstract
Reliable environmental perception capabilities are a prerequisite for achieving autonomous driving. Cameras and LiDAR are sensitive to illumination and weather conditions, while millimeter-wave radar avoids these issues. Existing models rely heavily on image-based approaches, which may not be able to fully characterize radar sensor data or efficiently further utilize them for perception tasks. This paper rethinks the approach to modeling radar signals and proposes a novel U-shaped multilayer perceptron network (U-MLPNet) that aims to enhance the learning of omni-dimensional spatio-temporal dependencies. Our method involves innovative signal processing techniques, including a 3D CNN for spatio-temporal feature extraction and an encoder–decoder framework with cross-shaped receptive fields specifically designed to capture the sparse and non-uniform characteristics of radar signals. We conducted extensive experiments using a diverse dataset of urban driving scenarios to characterize the sensor’s performance in multi-view semantic segmentation and object detection tasks. Experiments showed that U-MLPNet achieves competitive performance against state-of-the-art (SOTA) methods, improving the mAP by 3.0% and mDice by 2.7% in RD segmentation and AR and AP by 1.77% and 2.03%, respectively, in object detection. These improvements signify an advancement in radar-based perception for autonomous vehicles, potentially enhancing their reliability and safety across diverse driving conditions.
Figures:
Graphical abstract.
Figure 1. The complete millimeter-wave radar signal collection and preprocessing pipeline. First, the received and transmitted signals are mixed to generate raw ADC data. These signals are then subjected to various forms of FFT algorithms, resulting in the RA view, RD view, and RAD tensor, which are the RF signals prepared for further processing.
Figure 2. Overall framework of our U-MLPNet. The left part represents the multi-view encoder, the middle part is the latent space, and the right part is the dual-view decoder. The skip connections between the encoder and decoder effectively maintain the disparities between different perspectives and balance model performance. The latent space contains the U-MLP module, which can efficiently fuse multi-scale, multi-view global and local spatio-temporal features.
Figure 3. Radar RF features. The top row illustrates the CARRADA dataset with RGB images and RA, RD, and AD views arranged from left to right. The bottom row shows the echo of the CRUW dataset, with RGB images on the left and RA images on the right.
Figure 4. Overall framework of our U-MLP. The left side is the encoder, while the right side represents the decoder. The encoder employs a lightweight MLP to extract meaningful radar features. The decoder progressively integrates these features and restores resolution in a stepwise manner.
Figure 5. The receptive field of U-MLP. The original receptive field, the receptive field proposed in this paper, and the equivalent guard band are displayed from left to right. Feature points, the guard band, and feature regions are distinguished by orange, a blue diagonal grid, and light blue, respectively.
Figure 6. Visual comparison of RA views for various algorithms on the CARRADA dataset. The pedestrian category is annotated in red, the car category in blue, and the cyclist category in green.
Figure 7. Visual comparison of RD views for various algorithms on the CARRADA dataset. The pedestrian category is highlighted in red, the car category in blue, and the cyclist category in green. (a–h) RGB images, RF images, ground truth (GT), U-MLPNet, TransRadar, PeakConv, TMVA-Net, and MVNet, respectively.
Figure 8. Polar plot of RD views for various algorithms on the CARRADA dataset across different categories. Each line represents the mIoU of a specific algorithm across these categories, with higher values indicating superior performance.
Figure 9. Visual comparison of RA views for various algorithms on the CRUW dataset. The pedestrian category is annotated in red, the car category in blue, and the cyclist category in green.
Figure 10. To evaluate the performance and robustness of U-MLPNet in complex environments, we conduct qualitative testing using a nighttime dataset.
Full article ">
10 pages, 20455 KiB  
Communication
Sub-Micron Two-Dimensional Displacement Sensor Based on a Multi-Core Fiber
by Kexin Zhu, Shijie Ren, Xiangdong Li, Yuanzhen Liu, Jiaxin Li, Liqiang Zhang and Minghong Wang
Photonics 2024, 11(11), 1073; https://doi.org/10.3390/photonics11111073 - 15 Nov 2024
Viewed by 291
Abstract
A sub-micron two-dimensional displacement sensor based on a segment of multi-core fiber is presented in this paper. Light at the wavelengths of 1520 nm, 1530 nm, and 1540 nm was introduced separately into three cores of a seven-core fiber (SCF). They were independently transmitted in their respective cores, and after being emitted from the other end of the SCF, they were irradiated onto the end-face of a single-mode fiber (SMF). The SMF received light at three different wavelengths, the power of which was related to the relative position between the SCF and the SMF. When the SMF moved within a two-dimensional plane, the direction of displacement could be determined based on the changes in power at different wavelengths. As a benefit of the high sensitivity of the spectrometer, the sensor could detect displacements at the sub-micron level. When the SMF was translated in 200 nm steps over a range from 5.2 μm to 6.2 μm, the sensitivities at the wavelengths of 1520 nm, 1530 nm, and 1540 nm were 0.34 dB/μm, 0.40 dB/μm, and 0.36 dB/μm, respectively. The two-dimensional displacement sensor proposed in this paper offers the advantages of high detection precision, simple structure, and ease of implementation.
(This article belongs to the Section Lasers, Light Sources and Sensors)
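
The sensitivities quoted above are slopes of received power versus displacement; a small sketch of how such a figure can be obtained from a linear fit is given below, with illustrative numbers rather than the measured data.

```python
import numpy as np

# Illustrative displacement steps (µm) and received power (dBm) for one wavelength;
# the real measurements (200 nm steps from 5.2 µm to 6.2 µm) are not reproduced.
displacement = np.arange(5.2, 6.21, 0.2)                # µm
power_dbm = np.array([-30.00, -30.07, -30.16, -30.23, -30.31, -30.40])

slope, intercept = np.polyfit(displacement, power_dbm, 1)
print(f"sensitivity ≈ {abs(slope):.2f} dB/µm")          # compare with ~0.40 dB/µm at 1530 nm
```
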
Figures:
Figure 1. (a) Schematic diagram of the two-dimensional displacement sensor; (b) schematic diagram of the seven-core fiber end-face; (c) distribution of light spots on the end-face of the single-mode fiber.
Figure 2. Gaussian beam output from core ①.
Figure 3. Numerical model of the two-dimensional displacement sensor.
Figure 4. (a) Definition of movement angle; (b) the power variation received by the SMF when moving in different directions; (c) the dependence of distance, t, on the angle.
Figure 5. Variation in the optical power for each wavelength received by the SMF when it moves in different directions: (a) 30°; (b) 180°; (c) 210°.
Figure 6. (a) Cross-sectional view of the SCF captured by a 1000× CCD camera; (b) schematic diagram of the displacement sensor; (c) relationship between the received power and the distance between the SCF and SMF.
Figure 7. Power fluctuation of the light source within one hour. (a) Stability of the light source itself; (b) spectra of repeated scans of the experimental setup; (c) the power variation read by the spectrometer.
Figure 8. Changes in power at different wavelengths when the SMF moves in the directions of 30° (a1,a2), 180° (b1,b2), and 210° (c1,c2).
Figure 9. Changes in power at different wavelengths with the SMF moved in increments of 1 μm (a–c) and 200 nm (d–f).
16 pages, 4667 KiB  
Article
State Estimation for Quadruped Robots on Non-Stationary Terrain via Invariant Extended Kalman Filter and Disturbance Observer
by Mingfei Wan, Daoguang Liu, Jun Wu, Li Li, Zhangjun Peng and Zhigui Liu
Sensors 2024, 24(22), 7290; https://doi.org/10.3390/s24227290 - 14 Nov 2024
Viewed by 396
Abstract
Quadruped robots possess significant mobility in complex and uneven terrains due to their outstanding stability and flexibility, making them highly suitable for rescue missions, environmental monitoring, and smart agriculture. With the increasing use of quadruped robots in more demanding scenarios, ensuring accurate and stable state estimation in complex environments has become particularly important. Existing state estimation algorithms relying on multi-sensor fusion, such as those using IMU, LiDAR, and visual data, often face challenges on non-stationary terrains due to issues like foot-end slippage or unstable contact, leading to significant state drift. To tackle this problem, this paper introduces a state estimation algorithm that integrates an invariant extended Kalman filter (InEKF) with a disturbance observer to estimate the motion state of quadruped robots on non-stationary terrains. First, foot-end slippage is modeled as a deviation in body velocity and explicitly included in the state equations, allowing for a more precise representation of how slippage affects the state. Second, the state update process integrates both foot-end velocity and position observations to improve the overall accuracy and comprehensiveness of the estimation. Finally, a foot-end contact probability model, coupled with an adaptive covariance adjustment strategy, is employed to dynamically modulate the influence of the observations. These enhancements significantly improve the filter’s robustness and the accuracy of state estimation in non-stationary terrain scenarios. Experiments conducted with the Jueying Mini quadruped robot on various non-stationary terrains show that the enhanced InEKF method offers notable advantages over traditional filters in compensating for foot-end slippage and adapting to different terrains. Full article
(This article belongs to the Section Sensors and Robotics)
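The adaptive covariance idea in the abstract can be illustrated with a short sketch: a contact probability derived from foot force inflates the measurement covariance of the leg-odometry observation before the update, so low-confidence (slipping or swinging) feet are down-weighted rather than hard-switched off. The sketch below uses a plain Kalman update and hypothetical force-to-probability parameters purely for illustration; the paper's filter is an invariant EKF with a disturbance observer, which is not reproduced here.

```python
import numpy as np

def contact_probability(foot_force, f0=40.0, k=0.15):
    """Logistic mapping from vertical foot force (N) to contact probability.

    f0 and k are illustrative shaping parameters, not values from the paper.
    """
    return 1.0 / (1.0 + np.exp(-k * (foot_force - f0)))

def adaptive_measurement_update(x, P, z, H, R_nominal, p_contact, r_max_scale=100.0):
    """Standard Kalman measurement update with covariance inflated for weak contact.

    When the contact probability is low (likely slip or swing phase), the nominal
    measurement covariance R is inflated so the leg-odometry observation is
    effectively down-weighted instead of being discarded outright.
    """
    scale = 1.0 + (r_max_scale - 1.0) * (1.0 - p_contact)
    R = R_nominal * scale
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Example: fuse a body-velocity observation derived from stance-leg kinematics.
x = np.zeros(3)                           # body velocity estimate [vx, vy, vz]
P = np.eye(3) * 0.05
H = np.eye(3)
R_nominal = np.eye(3) * 0.01
z = np.array([0.32, -0.01, 0.02])         # velocity implied by the stance leg
p = contact_probability(foot_force=12.0)  # light force -> low confidence
x, P = adaptive_measurement_update(x, P, z, H, R_nominal, p)
print("contact probability:", round(p, 3), "updated velocity:", np.round(x, 3))
```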
Show Figures

Figure 1. Test environments.
Figure 2. Foot slipping scenarios of a quadruped robot during ground contact.
Figure 3. Estimation of foot contact probability during unstable contact events, with (a) representing the right front leg and (b) the left rear leg.
Figure 4. Position estimates of the quadruped robot in the X, Y, and Z directions on different terrains, with (a–c) depicting the estimates for rugged slope terrain, (d–f) for shallow grass terrain, and (g–i) for deep grass terrain.
Figure 5. Pitch and roll angle estimation of the quadruped robot on different terrains, with (a,d) depicting the estimates for rugged slope terrain, (b,e) for shallow grass terrain, and (c,f) for deep grass terrain.
14 pages, 6553 KiB  
Article
An Arteriovenous Bioreactor Perfusion System for Physiological In Vitro Culture of Complex Vascularized Tissue Constructs
by Florian Helms, Delia Käding, Thomas Aper, Arjang Ruhparwar and Mathias Wilhelmi
Bioengineering 2024, 11(11), 1147; https://doi.org/10.3390/bioengineering11111147 (registering DOI) - 14 Nov 2024
Viewed by 303
Abstract
Background: The generation and perfusion of complex vascularized tissues in vitro require sophisticated perfusion techniques. For multiscale arteriovenous networks, not only the arterial but also the venous biomechanical and biochemical conditions that physiologically exist in the human body must be accurately emulated. To this end, we present a modular arteriovenous perfusion system for the in vitro culture of a multi-scale bioartificial vascular network. Methods: The custom-built perfusion system consisted of two circuits: in the arterial circuit, physiological arterial biomechanical and biochemical conditions were simulated using a modular set-up with a pulsatile peristaltic pump, compliance chambers, and resistors. In the venous circuit, venous conditions were emulated accordingly. In the center of the system, a bioartificial multi-scale vascularized fibrin-based tissue was perfused by both circuits simultaneously under biomimetic arteriovenous conditions. Culture conditions were monitored continuously using a multi-sensor monitoring system. Results: The physiological arterial and venous pressure and flow curves, as well as the microvascular arteriovenous oxygen partial pressure gradient, were accurately emulated in the perfusion system. The multi-sensor monitoring system facilitated live monitoring of the respective parameters and data logging. In a proof-of-concept experiment, vascularized three-dimensional fibrin tissues showed sustained cell viability and homogeneous microvessel formation after culture in the perfusion system. Conclusions: The arteriovenous perfusion system facilitated the in vitro culture of a multiscale vascularized tissue under physiological pressure, flow, and oxygen-gradient conditions. As such, it represents a promising technique for the in vitro generation and culture of complex large-scale vascularized tissues. Full article
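To illustrate how a pulsatile pump feeding a compliance chamber and a downstream variable resistor can shape an arterial-like pressure waveform, the sketch below integrates a two-element Windkessel model, C dP/dt = Q_in(t) − P/R, with a half-sine ejection profile. All parameter values (resistance, compliance, stroke volume, pump rate) are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# --- Illustrative two-element Windkessel parameters (not values from the paper) ---
R = 1.0          # peripheral resistance, mmHg / (mL/s)  -> set by the variable resistor
C = 1.2          # compliance, mL / mmHg                  -> set by the compliance chamber
HR = 72          # pulsatile pump rate, cycles per minute
T = 60.0 / HR    # cycle period, s
SV = 70.0        # ejected volume per cycle, mL

def pump_flow(t):
    """Half-sine pulsatile inflow during the first 35% of each cycle, zero otherwise."""
    phase = t % T
    t_ej = 0.35 * T
    if phase < t_ej:
        return SV * np.pi / (2 * t_ej) * np.sin(np.pi * phase / t_ej)
    return 0.0

# Forward-Euler integration of C * dP/dt = Q_in(t) - P / R.
dt = 1e-3
t_end = 10 * T
times = np.arange(0.0, t_end, dt)
P = np.empty_like(times)
P[0] = 80.0      # mmHg, initial pressure
for i in range(1, len(times)):
    Q = pump_flow(times[i - 1])
    P[i] = P[i - 1] + dt * (Q - P[i - 1] / R) / C

last_cycle = P[times > t_end - T]
print(f"steady-state systolic/diastolic: {last_cycle.max():.0f}/{last_cycle.min():.0f} mmHg")
```

In this simplified picture, the resistor sets the mean pressure (mean flow times R) while the compliance chamber limits the pulse pressure, which is the same qualitative tuning role the two components play in the described circuit.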
Show Figures

Graphical abstract
Figure 1. Generation of the vascularized fibrin-based matrix. (A) Schematic cross-section of the targeted multi-scale vasculature. Venous and arterial fibrin-based macrovessels (1 + 2) were placed parallel to each other and interconnected via four microchannels (3). Vascular sprouts arising from the microchannels (4) were intended to connect the microchannels to a capillary network built up by the co-culture of human umbilical vein derived endothelial cells (HUVECs) and adipogenous stem cells (5) seeded throughout a low-density fibrin matrix (6). Both macrovessels and microchannels were endothelialized by a HUVEC monolayer (7). Black arrows indicate the media flow direction during perfusion. (B) Perfusion chamber with the integrated fibrin-based tissue construct. Two hose nozzles on each side facilitated connection of the integrated macrovessels to the respective arterial and venous perfusion circuits, and the perforated sheath on the bottom allowed for insertion of needles during the molding process for the generation of the microchannels. (C) Macroscopic morphology of the explanted fibrin-based tissue matrix after 48 h of culture in the arteriovenous perfusion system. Scale bar = 1 cm.
Figure 2. (A) Schematic representation of the arteriovenous perfusion system setup and desired pressure and flow curves. 1: pulsatile peristaltic pump; 2: upstream compliance chamber; 3: pressure sensor; 4: flow sensor; 5: perfusion chamber with the integrated fibrin-based matrix and vessels; 6: variable resistor; 7: reservoir; 8: dissolved oxygen sensor; 9: oxygen inflow cannula; 10: downstream arterial compliance chamber; 11: backflow line. (B) Photographic top view of the assembled system.
Figure 3. Pressure curve analysis. (A) Pressure curve monitored in the arterial circuit; (B) systolic (black) and diastolic (grey) pressures observed in the arterial circuit over 48 h. (C) Pressure curve monitored in the venous circuit; (D) systolic (black) and diastolic (grey) pressures observed in the venous circuit over 48 h.
Figure 4. Flow curve analysis. (A) Flow curve monitored in the arterial circuit. (B) Flow curve monitored in the venous circuit.
Figure 5. Arterial (black) and venous (grey) oxygen partial pressure monitored in the system over 48 h.
Figure 6. (A) Fluorescence microscopic view of the fibrin-based tissue matrix. Capillary tubes were visualized based on red fluorescent protein expression of human umbilical vein endothelial cells. (B) AngioTool analysis of the capillary network depicted in (A). Crossing points are marked by blue dots, capillary tubes are depicted in red, and outlines are marked in yellow. Scale bar = 100 µm.