Search Results (2,913)

Search Parameters:
Keywords = space-based detection

23 pages, 6191 KiB  
Article
An Approach for Detecting Faulty Lines in a Small-Current, Grounded System Using Learning Spiking Neural P Systems with NLMS
by Yangheng Hu, Yijin Wu, Qiang Yang, Yang Liu, Shunli Wang, Jianping Dong, Xiaohua Zeng and Dapeng Zhang
Energies 2024, 17(22), 5742; https://doi.org/10.3390/en17225742 - 16 Nov 2024
Viewed by 294
Abstract
Detecting faulty lines in small-current, grounded systems is a crucial yet challenging task in power system protection. Existing methods often struggle with the accurate identification of faults due to the complex and dynamic nature of current and voltage signals in these systems. This gap in reliable fault detection necessitates more advanced methodologies to improve system stability and safety. Here, a novel approach, using learning spiking neural P systems combined with a normalized least mean squares (NLMS) algorithm to enhance faulty line detection in small-current, grounded systems, is proposed. The proposed method analyzes the features of current and voltage signals, as well as active and reactive power, by separately considering their transient and steady-state components. To improve fault detection accuracy, we quantified the likelihood of a fault occurrence based on feature changes and expanded the feature space to higher dimensions using an ascending dimension structure. An adaptive learning mechanism was introduced to optimize the convergence and precision of the detection model. Simulation scheduling datasets and real-world data were used to validate the effectiveness of the proposed approach, demonstrating significant improvements over traditional methods. These findings provide a robust framework for faulty-line detection in small-current, grounded systems, contributing to enhanced reliability and safety in power system operations. This approach has the potential to be widely applied in power system protection and maintenance, advancing the broader field of intelligent fault diagnosis. Full article
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Smart Grids)
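The NLMS component of this approach is the standard normalized least-mean-squares adaptive filter. A minimal pure-Python sketch of that update rule follows; it is illustrative only (not the paper's learning spiking neural P system), and the tap count and step size are assumptions:

```python
def nlms_errors(x, d, n_taps=4, mu=0.5, eps=1e-6):
    """Normalized LMS: adapt FIR weights w so w . window(x) tracks d.

    Returns the per-sample error, which shrinks as the filter converges.
    """
    w = [0.0] * n_taps
    errors = []
    for i in range(n_taps, len(x)):
        u = x[i - n_taps:i]                       # most recent input window
        y = sum(wi * ui for wi, ui in zip(w, u))  # filter output
        e = d[i] - y                              # instantaneous error
        norm = eps + sum(ui * ui for ui in u)     # input energy (normalization)
        w = [wi + mu * e * ui / norm for wi, ui in zip(w, u)]
        errors.append(e)
    return errors
```

With `mu` in (0, 2), normalizing by the window energy makes convergence largely insensitive to the input signal's scale, which is why NLMS is a common choice for tracking non-stationary signal features.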
24 pages, 4156 KiB  
Article
Emotion Recognition in a Closed-Cabin Environment: An Exploratory Study Using Millimeter-Wave Radar and Respiration Signals
by Hanyu Wang, Dengkai Chen, Sen Gu, Yao Zhou, Jianghao Xiao, Yiwei Sun, Jianhua Sun, Yuexin Huang, Xian Zhang and Hao Fan
Appl. Sci. 2024, 14(22), 10561; https://doi.org/10.3390/app142210561 - 15 Nov 2024
Viewed by 331
Abstract
In the field of psychology and cognition within closed cabins, noncontact vital sign detection holds significant potential as it can enhance the user’s experience by utilizing objective measurements to assess emotions, making the process more sustainable and easier to deploy. To evaluate the capability of noncontact methods for emotion recognition in closed spaces, such as submarines, this study proposes an emotion recognition method that employs a millimeter-wave radar to capture respiration signals and uses a machine-learning framework for emotion classification. Respiration signals were collected while the participants watched videos designed to elicit different emotions. An automatic sparse encoder was used to extract features from respiration signals, and two support vector machines were employed for emotion classification. The proposed method was experimentally validated using the FaceReader software, which is based on audiovisual signals, and achieved an emotion classification accuracy of 68.21%, indicating the feasibility and effectiveness of using respiration signals to recognize and assess the emotional states of individuals in closed cabins. Full article
18 pages, 6433 KiB  
Article
High-Performance Telescope System Design for Space-Based Gravitational Waves Detection
by Huiru Ji, Lujia Zhao, Zichao Fan, Rundong Fan, Jiamin Cao, Yan Mo, Hao Tan, Zhiyu Jiang and Donglin Ma
Sensors 2024, 24(22), 7309; https://doi.org/10.3390/s24227309 - 15 Nov 2024
Viewed by 238
Abstract
Space-based gravitational wave (GW) detection employs the Michelson interferometry principle to construct ultra-long-baseline laser interferometers in space for detecting GW signals in the 10⁻⁴–1 Hz band. The spaceborne telescope, as a core component directly integrated into the laser link, comes in various configurations, with the off-axis four-mirror design being the most prevalent. In this paper, we present a high-performance design based on this configuration, which exhibits a stable structure, ultra-low wavefront aberration, and high-level stray-light suppression, effectively eliminating background noise. In addition, a scientifically justified positioning of the entrance and exit pupils leaves adequate space for the integration of subsequent optical systems. The final design achieves a wavefront error of less than λ/500 in the science field of view, and after tolerance allocation and Monte Carlo analysis, a wavefront error of less than λ/30 is achieved with a probability of 92%. The chief-ray spot diagram dimensions are small, indicating excellent control of pupil aberrations. Additionally, the tilt-to-length (TTL) noise and stray light meet the stringent requirements for space-based gravitational wave detection. The refined design presented in this paper is a strong candidate for GW detection projects, offering accurate and rational guidance. Full article
(This article belongs to the Special Issue Advanced Optics and Sensing Technologies for Telescopes)
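The quoted 92% figure comes from a tolerance Monte Carlo: perturb the design many times and count the fraction of trials whose RMS wavefront error stays under λ/30. A schematic version of that bookkeeping is sketched below; the perturbation model is a made-up stand-in (the Gaussian sigma is arbitrary), not the paper's tolerance allocation:

```python
import random

def mc_yield(wfe_trial, threshold, n_trials=1000, seed=0):
    """Fraction of Monte Carlo trials with RMS WFE below threshold (in waves)."""
    rng = random.Random(seed)
    return sum(wfe_trial(rng) < threshold for _ in range(n_trials)) / n_trials

def toy_wfe(rng):
    """Hypothetical trial: nominal lambda/500 design WFE combined in
    quadrature with a random tolerance-induced term (sigma is illustrative)."""
    nominal = 1 / 500
    tol = abs(rng.gauss(0.0, 0.012))
    return (nominal**2 + tol**2) ** 0.5
```

`mc_yield(toy_wfe, 1/30)` then reports the estimated probability of meeting the λ/30 budget for this toy model.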
Figures:
Figure 1: Raytracing of the initial coaxial PM and SM.
Figure 2: Raytracing of the initial coaxial TM and QM.
Figure 3: Multi-configuration field-of-view settings (units: μrad).
Figure 4: Pupil distortion caused by pupil aberration.
Figure 5: The initial layout of PM and SM: (a) coaxial layout after calculation; (b) off-axis layout.
Figure 6: PM-SM initial structure spot diagram.
Figure 7: Initial off-axis system layout of TM and QM: (a) coaxial layout after calculation; (b) off-axis layout.
Figure 8: Initial off-axis four-mirror system layout.
Figure 9: Chief ray spot diagram of the initial off-axis four-mirror system at the theoretical exit pupil position.
Figure 10: Final system layout of the GW detection telescope.
Figure 11: Spot diagram of the improved GW detection telescope system.
Figure 12: Wavefront error across the science FOV of the improved telescope system.
Figure 13: Chief ray spot diagrams at the exit pupil.
Figure 14: Analysis of TTL coupling noise: (a) the curve of LPS with beam angle; (b) TTL noise caused by wavefront error.
Figure 15: RMS WFE distribution of 1000 Monte Carlo trials.
Figure 16: Displacement nephogram of mirrors: (a) PM; (b) SM; (c) TM; (d) QM.
16 pages, 9416 KiB  
Article
An Image Processing Approach to Quality Control of Drop-on-Demand Electrohydrodynamic (EHD) Printing
by Yahya Tawhari, Charchit Shukla and Juan Ren
Micromachines 2024, 15(11), 1376; https://doi.org/10.3390/mi15111376 - 14 Nov 2024
Viewed by 276
Abstract
Droplet quality in drop-on-demand (DoD) Electrohydrodynamic (EHD) inkjet printing plays a crucial role in influencing the overall performance and manufacturing quality of the operation. The current approach to droplet printing analysis involves manually outlining/labeling the printed dots on the substrate under a microscope and then using microscope software to estimate the dot sizes by assuming the dots have a standard circular shape. Therefore, it is prone to errors. Moreover, the dot spacing information is missing, which is also important for EHD DoD printing processes, such as manufacturing micro-arrays. In order to address these issues, the paper explores the application of feature extraction methods aimed at identifying characteristics of the printed droplets to enhance the detection, evaluation, and delineation of significant structures and edges in printed images. The proposed method involves three main stages: (1) image pre-processing, where edge detection techniques such as Canny filtering are applied for printed dot boundary detection; (2) contour detection, which is used to accurately quantify the dot sizes (such as dot perimeter and area); and (3) centroid detection and distance calculation, where the spacing between neighboring dots is quantified as the Euclidean distance of the dot geometric centers. These stages collectively improve the precision and efficiency of EHD DoD printing analysis in terms of dot size and spacing. Edge and contour detection strategies are implemented to minimize edge discrepancies and accurately delineate droplet perimeters for quality analysis, enhancing measurement precision. The proposed image processing approach was first tested using simulated EHD printed droplet arrays with specified dot sizes and spacing, and the achieved quantification accuracy was over 98% in analyzing dot size and spacing, highlighting the high precision of the proposed approach. 
This approach was further demonstrated through dot analysis of experimentally EHD-printed droplets, showing its superiority over conventional microscope-based measurements. Full article
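Stages (2) and (3) of the pipeline, quantifying each dot and the centroid-to-centroid spacing, can be sketched without OpenCV using plain connected-component labeling on a binary image. This is a simplified stand-in for the paper's Canny-plus-contour pipeline, and the function names are mine:

```python
from collections import deque
from math import dist  # Euclidean distance (Python 3.8+)

def label_dots(grid):
    """4-connected component labeling on a binary image (list of 0/1 rows).

    Returns one record per dot with its pixel area and geometric centroid.
    """
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    dots = []
    for r in range(h):
        for c in range(w):
            if grid[r][c] and not seen[r][c]:
                q, pix = deque([(r, c)]), []
                seen[r][c] = True
                while q:                              # flood fill one dot
                    y, x = q.popleft()
                    pix.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                area = len(pix)
                cy = sum(p[0] for p in pix) / area    # centroid = mean pixel position
                cx = sum(p[1] for p in pix) / area
                dots.append({"area": area, "centroid": (cy, cx)})
    return dots

def dot_spacing(dots):
    """Euclidean distance between each pair of dot centroids."""
    return [dist(a["centroid"], b["centroid"])
            for i, a in enumerate(dots) for b in dots[i + 1:]]
```

In a real setting the binary grid would come from thresholded, edge-cleaned microscope images; here it stands in for the post-Canny mask.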
Figures:
Figure 1: Overview of the proposed framework for DoD EHD printing analysis. (a) The method takes a microscope image of printed droplets as input, outputs the detection results, and calculates dot size (area) and spacing. (b) Visual analysis of printing quality: size distribution of the printed dots, with the x-axis showing three dot-size categories and the y-axis the percentage within each category.
Figure 2: (a) Input image; (b) after Gaussian filtering; (c) after Canny edge detection; (d) after dilation and morphological closing.
Figure 3: Line dot printing pattern: (a) original pattern; (b) processed pattern.
Figure 4: Distance quantification analysis.
Figure 5: Circular dot printing pattern: (a) original pattern; (b) processed pattern.
Figure 6: Experimental EHD DoD print result analysis: (a) original image of the printed droplets; (b) droplets analyzed offline with the manual microscope approach; (c) image analyzed with the proposed approach; (d) detailed comparison of the two analysis methods.
24 pages, 2376 KiB  
Article
An Efficient Weed Detection Method Using Latent Diffusion Transformer for Enhanced Agricultural Image Analysis and Mobile Deployment
by Yuzhuo Cui, Yingqiu Yang, Yuqing Xia, Yan Li, Zhaoxi Feng, Shiya Liu, Guangqi Yuan and Chunli Lv
Plants 2024, 13(22), 3192; https://doi.org/10.3390/plants13223192 - 13 Nov 2024
Viewed by 299
Abstract
This paper presents an efficient weed detection method based on the latent diffusion transformer, aimed at enhancing the accuracy and applicability of agricultural image analysis. The experimental results demonstrate that the proposed model achieves a precision of 0.92, a recall of 0.89, an accuracy of 0.91, a mean average precision (mAP) of 0.91, and an F1 score of 0.90, indicating its outstanding performance in complex scenarios. Additionally, ablation experiments reveal that the latent-space-based diffusion subnetwork outperforms traditional models, such as the residual diffusion network, which has a precision of only 0.75. By combining latent space feature extraction with self-attention mechanisms, the constructed lightweight model can respond quickly on mobile devices, showcasing the significant potential of deep learning technologies in agricultural applications. Future research will focus on data diversity and model interpretability to further enhance the model’s adaptability and user trust. Full article
Figures:
Figure 1: Dataset samples: (a) Amaranthus retroflexus; (b) Xanthium spinosum; (c) Xanthium sibiricum; (d) Amaranthus palmeri; (e) Xanthium italicum; (f) Amaranthus hybridus.
Figure 2: Dataset augmentation: (a) CutOut; (b) CutMix; (c) Mosaic.
Figure 3: Overall structure of the weed detection method based on a latent diffusion transformer, showing the latent-based diffusion subnetwork, the latent-based transformer subnetwork, and the latent loss function, and the complete flow from data input through feature extraction and information processing to output.
Figure 4: Architecture of the latent-based diffusion subnetwork: input layer, multiple diffusion layers, and output layer, highlighting progressive denoising and feature reconstruction to strengthen feature extraction from complex agricultural images.
Figure 5: Structure of the latent-based transformer: self-attention mechanism, multiscale feature extraction, and feed-forward layers, emphasizing how features are processed and fused in the latent space to improve detection accuracy and stability.
15 pages, 3407 KiB  
Article
Minimalist Design for Multi-Dimensional Pressure-Sensing and Feedback Glove with Variable Perception Communication
by Hao Ling, Jie Li, Chuanxin Guo, Yuntian Wang, Tao Chen and Minglu Zhu
Actuators 2024, 13(11), 454; https://doi.org/10.3390/act13110454 - 13 Nov 2024
Viewed by 205
Abstract
Immersive human–machine interaction relies on comprehensive sensing and feedback systems, which enable transmission of multiple pieces of information. However, the integration of increasing numbers of feedback actuators and sensors causes a severe issue in terms of system complexity. In this work, we propose a pressure-sensing and feedback glove that enables multi-dimensional pressure sensing and feedback with a minimalist design of the functional units. The proposed glove consists of modular strain and pressure sensors based on films of liquid metal microchannels and coin vibrators. Strain sensors located at the finger joints can simultaneously project the bending motion of the individual joint into the virtual space or robotic hand. For subsequent tactile interactions, the design of two symmetrically distributed pressure sensors and vibrators at the fingertips possesses capabilities for multi-directional pressure sensing and feedback by evaluating the relationship of the signal variations between two sensors and tuning the feedback intensities of two vibrators. Consequently, both dynamic and static multi-dimensional pressure communication can be realized, and the vibrational actuation can be monitored by a liquid-metal-based sensor via a triboelectric sensing mechanism. A demonstration of object interaction indicates that the proposed glove can effectively detect dynamic force in varied directions at the fingertip while offering the reconstruction of a similar perception via the haptic feedback function. This device introduces an approach that adopts a minimalist design to achieve a multi-functional system, and it can benefit commercial applications in a more cost-effective way. Full article
Figures:
Figure 1: Multi-dimensional pressure-sensing and feedback glove and its intelligent interaction system, with (i) the structure of the pressure sensor, (ii) the components of the vibration haptic feedback module, and (iii) the structure of the bending sensor.
Figure 2: Sensors of the glove: (a) optical image of the pressure sensor; (b) optical image of the bending sensor; (c) optical image of the interactive glove and the corresponding components.
Figure 3: Working mechanisms of the pressure and bending sensors. (a) Pressure sensor: (i) schematic; (ii) dimensional changes of the liquid metal electrodes in the normal and pressurized states; (iii) changes in the A-A' cross-section of the liquid metal electrodes. (b) Bending sensor: (i) schematic; (ii) changes in the liquid metal electrodes in the normal and bending states; (iii) dimensional changes observed from view B.
Figure 4: Characterization of the pressure sensor: (a) characterization method; (b) output signal vs. pressure under loading; (c) output signal vs. pressure under loading and unloading; (d) real-time output over one cycle of pressure increase and decrease; (e) response and recovery times; (f) repeatability over 2000 cycles at 55 kPa; (g) driven voltage of a coin vibrator vs. the collected triboelectric voltage; (h) real-time triboelectric voltage as the driven voltage increases.
Figure 5: Characterization of the bending sensor: (a) characterization method; (b) output signal vs. strain under tension; (c) output signal under loading and unloading; (d) response and recovery times; (e) repeatability over 2000 cycles at 20% strain; (f) strain response with a given initial torsion angle; (g) strain response with a given initial curvature.
Figure 6: Demonstration of the glove: (a) fingertip pressing states, (i) left-side contact, (ii) right-side contact, (iii) intermediate contact, (iv) rolling from left to right; (b) real-time pressure-sensor signals at different pressing angles; (c) vibrator feedback at different pressing angles (single-vibrator operation in grey, both vibrators in pale yellow); (d) bending-sensor signals for stepped finger bending in 10-degree increments up to 90 degrees; (e) bending-sensor response under different bending methods; (f) hand gestures ① to ⑧; (g) output signals for gestures ② to ⑧; (h) grasping a test tube; (i) vibrator feedback during grasping; (j) real-time signal output during the grasp; (k) pressure and bending angles before and after grasping.
17 pages, 1906 KiB  
Article
Advancing Indoor Epidemiological Surveillance: Integrating Real-Time Object Detection and Spatial Analysis for Precise Contact Rate Analysis and Enhanced Public Health Strategies
by Ali Baligh Jahromi, Koorosh Attarian, Ali Asgary and Jianhong Wu
Int. J. Environ. Res. Public Health 2024, 21(11), 1502; https://doi.org/10.3390/ijerph21111502 - 13 Nov 2024
Viewed by 396
Abstract
In response to escalating concerns about the indoor transmission of respiratory diseases, this study introduces a sophisticated software tool engineered to accurately determine contact rates among individuals in enclosed spaces—essential for public health surveillance and disease transmission mitigation. The tool applies YOLOv8, a cutting-edge deep learning model that enables precise individual detection and real-time tracking from video streams. An innovative feature of this system is its dynamic circular buffer zones, coupled with an advanced 2D projective transformation to accurately overlay video data coordinates onto a digital layout of the physical environment. By analyzing the overlap of these buffer zones and incorporating detailed heatmap visualizations, the software provides an in-depth quantification of contact instances and spatial contact patterns, marking an advancement over traditional contact tracing and contact counting methods. These enhancements not only improve the accuracy and speed of data analysis but also furnish public health officials with a comprehensive framework to develop more effective non-pharmaceutical infection control strategies. This research signifies a crucial evolution in epidemiological tools, transitioning from manual, simulation, and survey-based tracking methods to automated, real time, and precision-driven technologies that integrate advanced visual analytics to better understand and manage disease transmission in indoor settings. Full article
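The core contact test behind the dynamic circular buffer zones reduces to a distance check in floor-plan coordinates: two equal-radius zones overlap when their centers are closer than twice the radius. A minimal sketch follows (the track IDs and fixed radius are illustrative; the paper's buffer zones are dynamic):

```python
from math import dist

def contact_pairs(positions, radius=1.0):
    """Pairs of tracked IDs whose circular buffer zones overlap.

    positions maps track ID -> (x, y) after projection onto the floor plan.
    Two zones of equal radius overlap when centers are closer than 2*radius.
    """
    ids = sorted(positions)
    return [(a, b)
            for i, a in enumerate(ids) for b in ids[i + 1:]
            if dist(positions[a], positions[b]) < 2 * radius]
```

Accumulating these pairs frame by frame is what turns per-frame detections into contact counts and contact durations.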
Figures:
Figure 1: Flowchart: initialization, object detection with YOLOv8, real-time human tracking, dynamic buffer zones, spatial analysis, people counting and density analysis, and data handling and visualization.
Figure 2: Detecting and tracking individuals in an indoor environment: five individuals, each with a track line (green) and track ID (yellow).
Figure 3: Transformation of occupants onto the 2D floor plan.
Figure 4: Interaction duration analysis across tracked individuals.
Figure 5: Comparative spatial interaction heatmaps of density and movement patterns at t = 1 s and t = 31 s during the experiment.
18 pages, 3295 KiB  
Systematic Review
The Diagnosis and Management of Infraoccluded Deciduous Molars: A Systematic Review
by Gianna Dipalma, Alessio Danilo Inchingolo, Lucia Memè, Lucia Casamassima, Claudio Carone, Giuseppina Malcangi, Francesco Inchingolo, Andrea Palermo and Angelo Michele Inchingolo
Children 2024, 11(11), 1375; https://doi.org/10.3390/children11111375 - 12 Nov 2024
Viewed by 491
Abstract
The infraocclusion (IO) of primary molars, often seen in retained deciduous teeth, is a common condition that presents significant challenges for pediatric oral health. It occurs when primary molars are positioned below the occlusal plane due to the absence of permanent successors, leading to complications such as misaligned teeth, impaired chewing, and long-term dental health issues. Objectives: This study examines IO prevalence, diagnosis, and treatment approaches. Methods: A systematic review following PRISMA guidelines was conducted, searching PubMed, Web of Science, and Scopus for articles from the last 15 years. Nine articles were included for qualitative analysis. Results: IO was associated with several complications, including root resorption, altered eruption of adjacent teeth, and space loss within the dental arch. Clinical and radiographic evaluations are key to early detection. Severe cases often require invasive treatments, such as tooth extraction and space maintenance, while mild cases could be monitored. Conclusions: IO is prevalent in pediatric dentistry and can lead to significant dental issues if untreated. Early detection and intervention are crucial for preventing complications like tooth misalignment and impacted premolars. Tailored treatment strategies based on severity, along with increased awareness among dental practitioners, are essential to improve long-term outcomes for affected children. Full article
(This article belongs to the Collection Advance in Pediatric Dentistry)
Figures:
Figure 1: Clinical examination showing IODM in a nine-year-old child.
Figure 2: Orthopantomography (OPT) to determine the status of the permanent successors and the position of the main molars.
Figure 3: PRISMA flow diagram of the literature search and database search indicators.
18 pages, 7255 KiB  
Article
DC-Mamba: A Novel Network for Enhanced Remote Sensing Change Detection in Difficult Cases
by Junyi Zhang, Renwen Chen, Fei Liu, Hao Liu, Boyu Zheng and Chenyu Hu
Remote Sens. 2024, 16(22), 4186; https://doi.org/10.3390/rs16224186 - 10 Nov 2024
Viewed by 416
Abstract
Remote sensing change detection (RSCD) aims to utilize paired temporal remote sensing images to detect surface changes in the same area. Traditional CNN-based methods are limited by the size of the receptive field, making it difficult to capture the global features of remote sensing images. In contrast, Transformer-based methods address this issue with their powerful modeling capabilities. However, applying the Transformer architecture to image processing introduces a quadratic complexity problem, significantly increasing computational costs. Recently, the Mamba architecture based on state-space models has gained widespread application in the field of RSCD due to its excellent global feature extraction capabilities and linear complexity characteristics. Nevertheless, existing Mamba-based methods lack optimization for complex change areas, making it easy to lose shallow features or local features, which leads to poor performance on challenging detection cases and high-difficulty datasets. In this paper, we propose a Mamba-based RSCD network for difficult cases (DC-Mamba), which effectively improves the model’s detection capability in complex change areas. Specifically, we introduce the edge-feature enhancement (EFE) block and the dual-flow state-space (DFSS) block, which enhance the details of change edges and local features while maintaining the model’s global feature extraction capability. We propose a dynamic loss function to address the issue of sample imbalance, giving more attention to difficult samples during training. Extensive experiments on three change detection datasets demonstrate that our proposed DC-Mamba outperforms existing state-of-the-art methods overall and exhibits significant performance improvements in detecting difficult cases. Full article
(This article belongs to the Section Remote Sensing Image Processing)
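The paper's dynamic loss (with its ε and σ parameters) is not spelled out in the abstract, but the standard focal loss it is related to, which down-weights easy samples so training attends to the difficult ones, is easy to state. The binary-case sketch below is a generic reference, not the paper's exact formulation; γ is the usual focusing parameter:

```python
from math import log

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss for one prediction.

    p: predicted probability of the positive class; y: label in {0, 1}.
    The (1 - pt)**gamma factor shrinks the loss of well-classified samples,
    so gradients concentrate on hard examples.
    """
    pt = p if y == 1 else 1.0 - p
    return -((1.0 - pt) ** gamma) * log(pt)
```

With γ = 0 this reduces to plain cross-entropy; larger γ pushes more of the training signal toward difficult cases.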
Show Figures

Figure 1

Figure 1
<p>Examples of difficult cases. T1 and T2 represent dual-time remote sensing images, GT is the ground truth of changed areas, and ChangeMamba and CDMamba are two SOTA methods based on Mamba. The key areas are highlighted with red boxes.</p>
Full article ">Figure 2
<p>Overall architecture of our proposed DC-Mamba.</p>
Full article ">Figure 3
<p>The constituent modules of DC-Mamba. (<b>a</b>) Architecture of the EFE block, (<b>b</b>) architecture of the DFSS encoder, and (<b>c</b>) architecture of the DFSS decoder.</p>
Full article ">Figure 4
<p>Overall architecture of our proposed DC-Mamba.</p>
Full article ">Figure 5
<p>Ablation studies on EFE block and DFSS block conducted on the LEVIR-CD+, WHU-CD, and SYSU datasets. Baseline uses the original Mamba architecture. DC-Mamba uses both the EFE block and DFSS block. All the models utilize dynamic loss function, where <math display="inline"><semantics> <mrow> <mo>ε</mo> <mo>=</mo> <mn>1</mn> <mo>,</mo> <mo>σ</mo> <mo>=</mo> <mn>0.75</mn> </mrow> </semantics></math>. The key areas are highlighted with red boxes.</p>
Figure 6
<p>Visualization results of feature heatmaps on different datasets. Red denotes higher attention values and blue denotes lower values. (<b>a</b>–<b>c</b>) Pre-change image, post-change image, and change map, respectively. (<b>d</b>) Overlay of the feature heatmaps output by the EFE block onto the original image. (<b>e</b>) Feature heatmaps output by the DC-Mamba decoder without using the EFE block. (<b>f</b>) Feature heatmaps output by the DC-Mamba decoder using the EFE block.</p>
Figure 7
<p>Qualitative analysis results on the LEVIR-CD+ test set. (<b>a</b>–<b>d</b>) Selected samples of different types. The key areas are highlighted with red boxes.</p>
Figure 8
<p>Qualitative analysis results on the WHU-CD test set. (<b>a</b>–<b>d</b>) Selected samples of different types. The key areas are highlighted with red boxes.</p>
Figure 9
<p>Qualitative analysis results on the SYSU test set. (<b>a</b>–<b>d</b>) Selected samples of different types. The key areas are highlighted with red boxes.</p>
23 pages, 1660 KiB  
Article
A Deep Learning Model for Accurate Maize Disease Detection Based on State-Space Attention and Feature Fusion
by Tong Zhu, Fengyi Yan, Xinyang Lv, Hanyi Zhao, Zihang Wang, Keqin Dong, Zhengjie Fu, Ruihao Jia and Chunli Lv
Plants 2024, 13(22), 3151; https://doi.org/10.3390/plants13223151 - 9 Nov 2024
Viewed by 387
Abstract
In improving agricultural yields and ensuring food security, precise detection of maize leaf diseases is of great importance. Traditional disease detection methods show limited performance in complex environments, making it challenging to meet the demands for precise detection in modern agriculture. This paper [...] Read more.
In improving agricultural yields and ensuring food security, precise detection of maize leaf diseases is of great importance. Traditional disease detection methods show limited performance in complex environments, making it challenging to meet the demands for precise detection in modern agriculture. This paper proposes a maize leaf disease detection model based on a state-space attention mechanism, aiming to effectively utilize the spatiotemporal characteristics of maize leaf diseases to achieve efficient and accurate detection. The model introduces a state-space attention mechanism combined with a multi-scale feature fusion module to capture the spatial distribution and dynamic development of maize diseases. In experimental comparisons, the proposed model demonstrates superior performance in the task of maize disease detection, achieving a precision, recall, accuracy, and F1 score of 0.94. Compared with baseline models such as AlexNet, GoogLeNet, ResNet, EfficientNet, and ViT, the proposed method achieves a precision of 0.95, with the other metrics also reaching 0.94, showing significant improvement. Additionally, ablation experiments verify the impact of different attention mechanisms and loss functions on model performance. The standard self-attention model achieved a precision, recall, accuracy, and F1 score of 0.74, 0.70, 0.72, and 0.72, respectively. The Convolutional Block Attention Module (CBAM) showed a precision of 0.87, recall of 0.83, accuracy of 0.85, and F1 score of 0.85, while the state-space attention module achieved a precision of 0.95, with the other metrics also at 0.94. In terms of loss functions, cross-entropy loss showed a precision, recall, accuracy, and F1 score of 0.69, 0.65, 0.67, and 0.67, respectively. Focal loss showed a precision of 0.83, recall of 0.80, accuracy of 0.81, and F1 score of 0.81. 
State-space loss demonstrated the best performance in these experiments, achieving a precision of 0.95, with recall, accuracy, and F1 score all at 0.94. These results indicate that the model based on the state-space attention mechanism achieves higher detection accuracy and better generalization ability in the task of maize leaf disease detection, effectively improving the accuracy and efficiency of disease recognition and providing strong technical support for the early diagnosis and management of maize diseases. Future work will focus on further optimizing the model’s spatiotemporal feature modeling capabilities and exploring multi-modal data fusion to enhance the model’s application in real agricultural scenarios. Full article
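The state-space attention referenced in this abstract (like the Mamba-based work above) builds on the discrete state-space recurrence; a minimal sketch with hypothetical toy matrices, not the model's learned parameters, is:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Run the discrete state-space recurrence
        h[t] = A @ h[t-1] + B @ x[t],   y[t] = C @ h[t]
    over a 1-D input sequence in linear time (a single pass, with no
    quadratic attention matrix)."""
    h = np.zeros(A.shape[0])
    ys = []
    for xt in x:
        h = A @ h + B @ np.atleast_1d(xt)
        ys.append(C @ h)
    return np.array(ys)
```

A decaying state matrix such as A = 0.5·I gives each output an exponentially fading memory of earlier inputs, which is what lets state-space models aggregate long-range context at linear cost.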
Show Figures

Figure 1
<p>Dataset samples. (<b>a</b>) Maize head smut; (<b>b</b>) maize northern leaf blight; (<b>c</b>) maize smut; (<b>d</b>) maize southern leaf blight; (<b>e</b>) maize round spot; (<b>f</b>) maize brown spot.</p>
Figure 2
<p>Dataset augmentation. (<b>a</b>) CutOut; (<b>b</b>) CutMix; (<b>c</b>) Mosaic.</p>
Figure 3
<p>The overall workflow diagram of the proposed corn leaf disease detection model based on a state-space attention mechanism. Here, <span class="html-italic">H</span> and W represent the height and width of the input image, respectively, while <math display="inline"><semantics> <msub> <mi>C</mi> <mi>i</mi> </msub> </semantics></math> represents the number of channels in the output feature maps at different stages of the model. <math display="inline"><semantics> <msub> <mi>C</mi> <mn>1</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>C</mi> <mn>2</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>C</mi> <mn>3</mn> </msub> </semantics></math>, and <math display="inline"><semantics> <msub> <mi>C</mi> <mn>4</mn> </msub> </semantics></math> correspond to the channel dimensions at stages 1, 2, 3, and 4, respectively. Each ’state block’ processes the feature maps, and ’downsampling’ reduces dimensions and increases the channel depth at subsequent stages.</p>
Figure 4
<p>Illustration of the state-space attention mechanism.</p>
Figure 5
<p>Illustration of the multi-scale fusion module. The module employs “Temporal Convolution” (TC) to process time-series data merged through multiple stages, extracting features along the temporal dimension. The final classification step includes two distinct processes: “TC —&gt; Linear” where features processed by temporal convolution undergo a linear transformation, and “Linear —&gt; Linear” which represents a sequential linear transformation to enhance feature integration before classification. Further details on TC and the rationale behind the dual linear transformations are provided in the figure annotations.</p>
Figure 6
<p>Confusion matrix.</p>
19 pages, 2630 KiB  
Article
Enhancing Long-Term Robustness of Inter-Space Laser Links in Space Gravitational Wave Detection: An Adaptive Weight Optimization Method for Multi-Attitude Sensors Data Fusion
by Zhao Cui, Xue Wang, Jinke Yang, Haoqi Shi, Bo Liang, Xingguang Qian, Zongjin Ye, Jianjun Jia, Yikun Wang and Jianyu Wang
Remote Sens. 2024, 16(22), 4179; https://doi.org/10.3390/rs16224179 - 8 Nov 2024
Viewed by 248
Abstract
The stable and high-precision acquisition of attitude data is crucial for sustaining the long-term robustness of laser links to detect gravitational waves in space. We introduce an effective method that utilizes an adaptive weight optimization approach for the fusion of attitude data obtained [...] Read more.
The stable and high-precision acquisition of attitude data is crucial for sustaining the long-term robustness of the laser links used to detect gravitational waves in space. We introduce an effective method that uses an adaptive weight optimization approach to fuse attitude data obtained from charge-coupled device (CCD) spot-positioning-based attitude measurements, differential power sensing (DPS), and differential wavefront sensing (DWS), with the aim of obtaining more robust attitude data with a lower noise level. A system based on the Michelson interferometer is designed for link simulations, and validation experiments are conducted. The experimental results demonstrate that the fused data exhibit higher robustness: even if a single sensor fails, valid attitude data can still be obtained. Additionally, the fused data have lower noise levels, with root mean square errors of 9.5%, 37.4%, and 93.4% relative to the single CCD, DPS, and DWS noise errors, respectively. Full article
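The adaptive weight optimization itself is not detailed in this abstract; the classical inverse-variance weighting below illustrates why fusing CCD, DPS, and DWS readings can beat any single sensor, and how dropping a failed sensor still yields a valid estimate (the function name and call pattern are illustrative, not the paper's API).

```python
import numpy as np

def fuse_attitude(readings, variances):
    """Inverse-variance weighted fusion of redundant attitude readings.

    Each sensor is weighted by 1/variance (normalised to sum to 1); the
    fused variance 1 / sum(1/var_i) is never larger than the variance of
    the best single sensor.
    """
    inv = 1.0 / np.asarray(variances, dtype=float)
    weights = inv / inv.sum()
    fused = float(np.dot(weights, readings))
    fused_var = 1.0 / inv.sum()
    return fused, fused_var
```

If one sensor fails, its reading is simply excluded (or its variance set very large) and the remaining sensors still produce a valid, if noisier, fused estimate.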
Show Figures

Figure 1
<p>Control system scheme for inter-space laser link establishment stage, scientific measurement stage, and reacquisition stage. The blue section represents the laser link establishment stage, the green section represents the scientific measurement stage, and the yellow section represents the reacquisition stage. The black solid line arrows indicate essential execution parts, including the establishment of the inter-space laser link and gravitational wave detection in scientific mode. The black dashed line arrows indicate parts executed only in the case of inter-space laser link interruption for link reacquisition.</p>
Figure 2
<p>Improved control system scheme.</p>
Figure 3
<p>Improved closed-loop control system scheme for inter-space laser links.</p>
Figure 4
<p>Basic framework for weight computation in the attitude data fusion processor.</p>
Figure 5
<p>The adaptive weight optimization method for multi-attitude sensors.</p>
Figure 6
<p>Data validity judgment method.</p>
Figure 7
<p>Comparison of Fusion data with A, B, and C data, where the variance of A remains constant at one: (<b>a</b>) variance of Fusion as a function of the variances of B and C; (<b>b</b>) minimum variance among A, B, and C as a function of the variances of B and C; (<b>c</b>) difference between the minimum variance among A, B, and C and the variance of Fusion as a function of the variances of B and C; (<b>d</b>) difference between the minimum RMS among A, B, and C and the RMS of Fusion as a function of the variances of B and C.</p>
Figure 8
<p>Variation of A, B, C, and Fusion during exhaustive computation: (<b>a</b>) variance variation; (<b>b</b>) RMS variation.</p>
Figure 9
<p>Schematic diagram of link simulation system structure.</p>
Figure 10
<p>Physical diagram of the link simulation experimental system (CCD: charge coupled device, FC: fiber collimator, FSM: fast steering mirror, BS: beam splitter, ID: iris diaphragm, BE: beam expander, QPD: quadrant photodiode, AOM: acousto-optical modulator).</p>
Figure 11
<p>CCD, DPS, and DWS data fusion results (CCD: charge coupled device measurement, DPS: differential power sensing, DWS: differential wavefront sensing, Fusion: Attitude data fusion results).</p>
Figure 12
<p>CCD and DPS data fusion results.</p>
Figure 13
<p>CCD and DWS data fusion results.</p>
Figure 14
<p>DPS and DWS data fusion results.</p>
18 pages, 1079 KiB  
Article
A Threefold Approach for Enhancing Fuzzy Interpolative Reasoning: Case Study on Phishing Attack Detection Using Sparse Rule Bases
by Mohammad Almseidin, Maen Alzubi, Jamil Al-Sawwa, Mouhammd Alkasassbeh and Mohammad Alfraheed
Computers 2024, 13(11), 291; https://doi.org/10.3390/computers13110291 - 8 Nov 2024
Viewed by 364
Abstract
Fuzzy systems are powerful modeling systems for uncertainty applications. In contrast to traditional crisp systems, fuzzy systems offer the opportunity to extend the binary decision to continuous space, which could offer benefits for various application areas such as intrusion detection systems (IDSs), because [...] Read more.
Fuzzy systems are powerful tools for modeling uncertainty. In contrast to traditional crisp systems, they extend binary decisions to a continuous space, which benefits application areas such as intrusion detection systems (IDSs) through the ability to measure the degree of an attack instead of making a binary decision. However, fuzzy systems face a critical challenge: sparse fuzzy rule bases. Typical fuzzy systems demand a complete rule base in order to produce the required results, yet generating complete fuzzy rules can be difficult for many reasons, such as a lack of knowledge or limited data availability, as in IDS applications. Fuzzy rule interpolation (FRI) was introduced to overcome this limitation by generating the required interpolated results in cases with sparse fuzzy rules. This work introduces a threefold approach that addresses cases of missing fuzzy rules using only a few neighboring rules. This is achieved by finding the interpolation condition of the neighboring fuzzy rules, based on the concept of factors, which determine the degree to which each neighboring rule contributes to the interpolated result. The threefold approach was evaluated in two steps: first, using the FRI benchmark numerical metrics, the results demonstrated its ability to generate the required results for the various benchmark scenarios; second, using a real-life phishing attack dataset, the results demonstrated its effectiveness in handling missing fuzzy rules in the area of phishing attacks. 
Consequently, the suggested threefold approach offers an opportunity to reduce the number of fuzzy rules effectively and generate the required results using only a few fuzzy rules. Full article
(This article belongs to the Special Issue Multimedia Data and Network Security)
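The factor-based interpolation conditions of the threefold approach are not reproduced on this page; the classic KH linear interpolation below shows the underlying idea that two neighboring rules contribute to the conclusion in proportion to the observation's position between their antecedents. This is a simplified single-antecedent sketch with triangular sets given as (left, peak, right) tuples, not the paper's method.

```python
def kh_interpolate(A1, B1, A2, B2, obs):
    """KH-style linear interpolation between neighbouring sparse rules
    'if x is A1 then y is B1' and 'if x is A2 then y is B2'.

    lam is the relative position of the crisp observation between the
    two antecedent peaks; the interpolated conclusion is the point-wise
    blend of the two consequent triangles with the same ratio.
    """
    lam = (obs - A1[1]) / (A2[1] - A1[1])
    return tuple((1.0 - lam) * p1 + lam * p2 for p1, p2 in zip(B1, B2))
```

An observation exactly halfway between the two rule antecedents yields a conclusion halfway between the two consequents, which is the intuition behind letting neighboring rules "fill in" for a missing rule.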
Show Figures

Figure 1
<p>The triangular membership function.</p>
Figure 2
<p>The interpolation conditions extraction procedure based on the factor parameters.</p>
Figure 3
<p>The general architecture of the proposed threefold approach.</p>
Figure 4
<p>The results of the threefold approach evaluation compared to other FRI methods [<a href="#B22-computers-13-00291" class="html-bibr">22</a>,<a href="#B25-computers-13-00291" class="html-bibr">25</a>], based on benchmark metric (1).</p>
Figure 5
<p>The results of the threefold approach evaluation compared to other FRI methods [<a href="#B22-computers-13-00291" class="html-bibr">22</a>,<a href="#B25-computers-13-00291" class="html-bibr">25</a>], based on benchmark metric (2).</p>
Figure 6
<p>The results of the threefold approach evaluation compared to other FRI methods [<a href="#B22-computers-13-00291" class="html-bibr">22</a>,<a href="#B25-computers-13-00291" class="html-bibr">25</a>], based on benchmark metric (3).</p>
Figure 7
<p>The results of the threefold approach evaluation compared to other FRI methods [<a href="#B22-computers-13-00291" class="html-bibr">22</a>,<a href="#B25-computers-13-00291" class="html-bibr">25</a>], based on benchmark metric (4).</p>
Figure 8
<p>The results of the threefold approach in the case of missing fuzzy rules (part 1).</p>
Figure 9
<p>The results of the threefold approach in the case of missing fuzzy rules (part 2).</p>
Figure 10
<p>The performance metrics of suggested threefold approach for the phishing attack dataset.</p>
15 pages, 3866 KiB  
Article
Distributed Passive Positioning and Sorting Method for Multi-Network Frequency-Hopping Time Division Multiple Access Signals
by Jiaqi Mao, Feng Luo and Xiaoquan Hu
Sensors 2024, 24(22), 7168; https://doi.org/10.3390/s24227168 - 8 Nov 2024
Viewed by 323
Abstract
When there are time division multiple access (TDMA) signals with large bandwidth, waveform aliasing, and fast frequency-hopping in space, current methods have difficulty achieving the accurate localization of radiation sources and signal-sorting from multiple network stations. To solve the above problems, a distributed [...] Read more.
When there are time division multiple access (TDMA) signals with large bandwidth, waveform aliasing, and fast frequency-hopping in space, current methods have difficulty accurately localizing radiation sources and sorting signals from multiple network stations. To solve these problems, a distributed passive positioning and network-station sorting method for broadband frequency-hopping signals based on two-level parameter estimation and joint clustering is proposed in this paper. Firstly, a two-stage filtering structure is designed to achieve controlled filtering for each frequency point. After narrowing down the parameter estimation range through adaptive threshold detection, the time difference of arrival (TDOA) and the velocity difference of arrival (VDOA) can be obtained via coherent accumulation based on the cross ambiguity function (CAF). Then, a multi-station positioning method based on the TDOA/VDOA is used to estimate the position of the target. Finally, the distributed joint eigenvectors of the multiple stations are constructed, and the signals belonging to different network stations are effectively classified using an improved K-means method. Numerical simulations indicate that the proposed method achieves better positioning and sorting than current methods under low signal-to-noise ratio (SNR) and low snapshot conditions. Full article
(This article belongs to the Section Communications)
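The CAF searches jointly over delay and Doppler; as a reduced sketch of the delay dimension only, the TDOA between two station recordings can be read off the peak of their cross-correlation. The sampling rate and signals below are illustrative, not from the paper.

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the time difference of arrival (in seconds) of sig_a
    relative to sig_b from the peak of their full cross-correlation.
    A positive result means sig_a arrived later than sig_b."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / fs
```

The real CAF additionally scans a Doppler/velocity axis, which is what yields the VDOA measurement alongside the TDOA.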
Show Figures

Figure 1
<p>Location scenario of a distributed reconnaissance system.</p>
Figure 2
<p>The framework of the proposed method.</p>
Figure 3
<p>Broadband receiving structure based on RF direct acquisition.</p>
Figure 4
<p>Preprocessing structure based on narrowband mixing.</p>
Figure 5
<p>Experimental scenario.</p>
Figure 6
<p>Coherent accumulation peak of CAF.</p>
Figure 7
<p>Target positioning results.</p>
Figure 8
<p>Signal sorting results.</p>
Figure 9
<p>Comparison of the parameter estimation accuracy among various algorithms: (<b>a</b>) the TDOA error; (<b>b</b>) the VDOA error.</p>
Figure 10
<p>Comparison of the positioning accuracy of different algorithms.</p>
Figure 11
<p>Comparison of the sorting accuracy of the K-means algorithm, the improved K-means algorithms in [<a href="#B17-sensors-24-07168" class="html-bibr">17</a>,<a href="#B18-sensors-24-07168" class="html-bibr">18</a>,<a href="#B19-sensors-24-07168" class="html-bibr">19</a>], and the improved K-means method proposed in this paper.</p>
23 pages, 9966 KiB  
Article
SFFNet: Shallow Feature Fusion Network Based on Detection Framework for Infrared Small Target Detection
by Zhihui Yu, Nian Pan and Jin Zhou
Remote Sens. 2024, 16(22), 4160; https://doi.org/10.3390/rs16224160 - 8 Nov 2024
Viewed by 362
Abstract
Infrared small target detection (IRSTD) is the process of recognizing and distinguishing small targets from infrared images that are obstructed by crowded backgrounds. This technique is used in various areas, including ground monitoring, flight navigation, and so on. However, due to complex backgrounds [...] Read more.
Infrared small target detection (IRSTD) is the task of recognizing and distinguishing small targets in infrared images that are obstructed by crowded backgrounds. The technique is used in various areas, including ground monitoring and flight navigation. However, due to complex backgrounds and the loss of information in deep networks, infrared small target detection remains difficult. To solve the above problems, we present a shallow feature fusion network (SFFNet) based on a detection framework. Specifically, we design the shallow-layer-guided feature enhancement (SLGFE) module, which guides multi-scale feature fusion with shallow-layer information, effectively mitigating the loss of information in deep networks. We then design the visual-Mamba-based global information extension (VMamba-GIE) module, which leverages a multi-branch structure combining the capability of convolutional layers to extract features in local space with the advantages of state space models in exploring long-distance information. This design significantly extends the network’s capacity to acquire global contextual information, enhancing its ability to handle complex backgrounds. Through the effective fusion of the SLGFE and VMamba-GIE modules, the additional computation introduced by the SLGFE module is also substantially reduced. Experimental results on two publicly available infrared small target datasets demonstrate that the SFFNet surpasses other state-of-the-art algorithms. Full article
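The SLGFE design itself is not given in this abstract; purely as a toy sketch of the "shallow layer guiding deep features" idea, the snippet below upsamples a coarse deep feature map and gates it with normalized shallow detail. All names and the gating form are assumptions, not the paper's module.

```python
import numpy as np

def shallow_guided_fusion(shallow, deep):
    """Toy shallow-layer-guided fusion: nearest-neighbour upsample the
    coarse deep map to the shallow resolution, then modulate it with a
    gate derived from the shallow map so fine detail steers the result."""
    scale = shallow.shape[0] // deep.shape[0]
    up = np.kron(deep, np.ones((scale, scale)))       # upsample deep features
    gate = shallow / (np.abs(shallow).max() + 1e-8)   # shallow detail in [-1, 1]
    return up * (1.0 + gate)
```

Where the shallow map carries no detail the deep features pass through unchanged; where it does, the fused response is amplified or suppressed, mimicking how shallow cues can counteract the loss of small targets in deep layers.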
Show Figures

Figure 1
<p>Illustration of 2D-Selective-Scan (SS2D).</p>
Figure 2
<p>(<b>a</b>) Overall architecture of the SFFNet. (<b>b</b>) Architecture of SLGFE module. (<b>c</b>) Architecture of VMamba-GIE module.</p>
Figure 3
<p>Illustration of the backbone network architecture.</p>
Figure 4
<p>Illustration of the detection head architecture.</p>
Figure 5
<p>Comparison of feature maps at different scales extracted by the backbone network. The target position is highlighted by the red dotted box. (<b>a</b>) original image. (<b>b</b>) <math display="inline"><semantics> <msub> <mi>F</mi> <mn>1</mn> </msub> </semantics></math>. (<b>c</b>) <math display="inline"><semantics> <msub> <mi>F</mi> <mn>2</mn> </msub> </semantics></math>. (<b>d</b>) <math display="inline"><semantics> <msub> <mi>F</mi> <mn>3</mn> </msub> </semantics></math>. (<b>e</b>) <math display="inline"><semantics> <msub> <mi>F</mi> <mn>4</mn> </msub> </semantics></math>.</p>
Figure 6
<p>Partial visualization results obtained by different infrared small target detection methods on the NUAA-SIRST dataset.</p>
Figure 7
<p>Partial visualization results obtained by different object detection networks on the NUAA-SIRST dataset.</p>
Figure 8
<p>Partial visualization results obtained by different infrared small target detection methods on the IRSTD-1K dataset.</p>
Figure 9
<p>Partial visualization results obtained by different object detection networks on the IRSTD-1K dataset.</p>
Figure 10
<p>Partial feature maps at different stages. The positions of targets are highlighted with red dashed circle.</p>
15 pages, 11951 KiB  
Technical Note
Axis Estimation of Spaceborne Targets via Inverse Synthetic Aperture Radar Image Sequence Based on Regression Network
by Wenjing Guo, Qi Yang, Hongqiang Wang and Chenggao Luo
Remote Sens. 2024, 16(22), 4148; https://doi.org/10.3390/rs16224148 - 7 Nov 2024
Viewed by 312
Abstract
Axial estimation is an important task for detecting non-cooperative space targets in orbit, with inverse synthetic aperture radar (ISAR) imaging serving as a fundamental approach to facilitate this process. However, most of the existing axial estimation methods usually rely on manually extracting and [...] Read more.
Axial estimation is an important task in detecting non-cooperative space targets in orbit, with inverse synthetic aperture radar (ISAR) imaging serving as a fundamental means of facilitating this process. However, most existing axial estimation methods rely on manually extracting and matching features of key corner points or linear structures in the images, which can degrade estimation accuracy. To address these issues, this paper proposes an axial estimation method for spaceborne targets via ISAR image sequences based on a regression network. Firstly, taking the ALOS satellite as an example, its Computer-Aided Design (CAD) model is constructed through a prior analysis of its structural features. Subsequently, target echoes are generated using electromagnetic simulation software, followed by imaging processing, analysis of imaging characteristics, and the determination of axial labels. Finally, in contrast to traditional classification approaches, this study introduces a straightforward yet effective regression network specifically designed for ISAR image sequences. This network transforms the classification loss into a loss function constrained by the minimum mean square error, which can be used to adaptively extract features and estimate the axial parameters. The effectiveness of the proposed method is validated through both electromagnetic simulations and experimental data. Full article
(This article belongs to the Special Issue Recent Advances in Nonlinear Processing Technique for Radar Sensing)
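The key move in this abstract is replacing a classification loss with a minimum mean-square-error constraint on the axial angles. A hedged sketch of such a loss follows; the wrap-around handling of the yaw angle is our assumption and is not stated in the source.

```python
import numpy as np

def mse_angle_loss(pred, target, period=360.0):
    """Mean-square error between predicted and true angles (degrees),
    taking the smallest signed difference so that, e.g., 359 deg and
    1 deg are treated as 2 deg apart rather than 358 deg."""
    diff = (np.asarray(pred, dtype=float) - np.asarray(target, dtype=float)
            + period / 2.0) % period - period / 2.0
    return float(np.mean(diff ** 2))
```

Unlike a cross-entropy loss over discretized angle bins, this regression loss penalizes predictions continuously in proportion to their angular error, which is what allows fine-grained yaw and pitch estimates.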
Show Figures

Figure 1
<p>The overall framework of axial estimation.</p>
Figure 2
<p>Definition of the orbit and body coordinate system.</p>
Figure 3
<p>Definition of the yaw and pitch angle.</p>
Figure 4
<p>ALOS satellite modeling and typical electromagnetic simulation imaging. (<b>a</b>) In-orbit schematic of the ALOS satellite; (<b>b</b>) CAD model; (<b>c</b>) Schematic of imaging result.</p>
Figure 5
<p>Imaging results of the ALOS satellite at typical attitude angles.</p>
Figure 6
<p>Architecture of the regression network.</p>
Figure 7
<p>The sequence of ISAR images whose pitch angles are all 70° and yaw angles are 155°, 160°, and 165° from left to right.</p>
Figure 8
<p>Typical ISAR imaging results with corresponding CAD modules. (<b>a</b>) Typical ISAR imaging results; (<b>b</b>) CAD modules.</p>
Figure 9
<p>The imaging results under varying SNR levels. (<b>a</b>) SNR: 0; (<b>b</b>) SNR: 5; (<b>c</b>) SNR: 10.</p>
Figure 10
<p>Yaw and pitch angle estimation errors for different loss functions.</p>
Figure 11
<p>Mean estimation errors for various yaw angle intervals in different pitch angle. (<b>a</b>) 15°; (<b>b</b>) 30°; (<b>c</b>) 45°; (<b>d</b>) 60°; (<b>e</b>) 75°.</p>
Figure 12
<p>Feature visualization with different convolutional layers. (<b>a</b>) First convolutional layer; (<b>b</b>) Second convolutional layer; (<b>c</b>) Third convolutional layer; (<b>d</b>) Fourth convolutional layer; (<b>e</b>) Convolutional layers for yaw estimation; (<b>f</b>) Convolutional layers for pitch estimation.</p>
Figure 13
<p>Real images of the satellite.</p>
Figure 14
<p>The imaging results at different azimuth angles. (<b>a</b>) yaw: 0°; (<b>b</b>) yaw: 45°; (<b>c</b>) yaw: 75°; (<b>d</b>) yaw: 90°; (<b>e</b>) yaw: 100°; (<b>f</b>) yaw: 115°.</p>
Back to TopTop