Object Detection Technology

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 March 2025 | Viewed by 8530

Special Issue Editors

Guest Editor
Department of Navigation and Observation, Naval Submarine Academy, Qingdao 266000, China
Interests: object detection; target recognition; synthetic aperture radar interpretation

Guest Editor
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Interests: computer vision; neural networks; object detection/classification/segmentation; remote sensing processing; synthetic aperture radar; millimeter wave radar technology

Guest Editor
School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Interests: interferometry synthetic aperture radar (InSAR); InSAR remote sensing; remote sensing processing; machine learning and deep learning; detection and classification using SAR images

Guest Editor
Department of Computer Science, AGH University of Science and Technology, 30-059 Kraków, Poland
Interests: artificial intelligence; multiobjective optimization; evolutionary algorithms and evolutionary multi-agent systems; mobile platforms

Special Issue Information

Dear Colleagues,

Object detection refers to the identification and tracking of important targets in many types of data and electromagnetic signals, such as visible- and infrared-spectrum imagery; radar, sonar, and synthetic aperture radar signals; acoustic and magnetic signals; and optical, spectral, and medical data.

Today, it is a crucial enabling technology with many applications in industry and daily life. Within the past two decades, and particularly since 2012, following tremendous progress in sensor development and computing techniques such as deep learning, object detection has entered a period of rapid development, and remarkable theoretical achievements and practical applications have emerged.

While working on modern object detection techniques, researchers face a wide variety of sensors, data, requirements, and applications. This Special Issue serves as a forum for presenting the progress and state of the art of target detection technologies and their applications. We therefore welcome research on new algorithms for object and target detection and tracking in different types of signals and data.

Dr. Jianwei Li
Dr. Tianwen Zhang
Prof. Dr. Xiaoling Zhang
Dr. Leszek Siwik
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • object detection
  • optical remote sensing
  • synthetic aperture radar object detection
  • magnetic target detection and localization
  • sonar object detection

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (6 papers)


Research

17 pages, 18733 KiB  
Article
MDD-YOLOv8: A Multi-Scale Object Detection Model Based on YOLOv8 for Synthetic Aperture Radar Images
by Jie Liu, Xue Liu, Huaixin Chen and Sijie Luo
Appl. Sci. 2025, 15(4), 2239; https://doi.org/10.3390/app15042239 - 19 Feb 2025
Viewed by 353
Abstract
The targets in Synthetic Aperture Radar (SAR) images are often tiny, irregular, and difficult to detect against complex backgrounds, leading to a high probability of missed or incorrect detections by object detection algorithms. To address this issue and improve the recall rate, we introduce an improved version of YOLOv8 (You Only Look Once), named MDD-YOLOv8. This model is not only fast but also highly accurate, with fewer instances of missed or incorrect detections. Our proposed model outperforms the baseline YOLOv8 in SAR image detection by replacing static convolution with dynamic convolution (DynamicConv) and incorporating a deformable large kernel attention mechanism (DLKA). Additionally, we modify the structure of the FPN-PAN and introduce an extra detection head to better detect tiny objects. Experiments on the MSAR-1.0 dataset demonstrate that MDD-YOLOv8 achieves 87.7% precision, 76.1% recall, 78.9% mAP@50, and a 0.81 F1 score. These metrics show improvements of 8.1%, 6.0%, 6.9%, and 0.07, respectively, compared to the original YOLOv8, although MDD-YOLOv8 has about 20% more parameters and 53% more GFLOPs than YOLOv8n. To further validate the model's effectiveness, we conducted generalization experiments on four additional SAR image datasets, showing that MDD-YOLOv8's performance is robust and generalizes well. In summary, MDD-YOLOv8 is a robust, generalized model with strong potential for industrial applications.
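
As a quick arithmetic check on the reported numbers, the F1 score is the harmonic mean of precision and recall. The short Python sketch below recomputes it from the figures quoted in the abstract; the helper function and the implied baseline values are our own illustration, not part of the paper.

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Metrics reported for MDD-YOLOv8 on MSAR-1.0 (taken from the abstract).
p, r = 0.877, 0.761
print(f"F1 = {f1_score(p, r):.2f}")             # -> 0.81, matching the reported value

# Baseline YOLOv8 metrics implied by the reported +8.1% / +6.0% improvements.
p0, r0 = p - 0.081, r - 0.060
print(f"baseline F1 = {f1_score(p0, r0):.2f}")  # -> 0.75; the gap is roughly 0.07, as reported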
(This article belongs to the Special Issue Object Detection Technology)
Figures:
  • Figure 1: The structure of the YOLOv8 network.
  • Figure 2: The network structure of MDD-YOLOv8.
  • Figure 3: The structure of M-FPN-PAN.
  • Figure 4: The structure of the DynamicConv module.
  • Figure 5: The structure of the C2f_DynamicConv module.
  • Figure 6: The mechanism of deformable large kernel attention.
  • Figure 7: The structure of the C2f_DLKA module.
  • Figure 8: The distribution of targets in the training set: (a) the instances of each category; (b) the spatial distribution of targets; (c) the sizes of the bounding boxes.
  • Figure 9: The results of model training.
  • Figure 10: The confusion matrices of YOLOv8 and MDD-YOLOv8: (a) YOLOv8; (b) MDD-YOLOv8.
20 pages, 1850 KiB  
Article
Generative AI-Enabled Energy-Efficient Mobile Augmented Reality in Multi-Access Edge Computing
by Minsu Na and Joohyung Lee
Appl. Sci. 2024, 14(18), 8419; https://doi.org/10.3390/app14188419 - 19 Sep 2024
Viewed by 1237
Abstract
This paper proposes a novel offloading and super-resolution (SR) control scheme for energy-efficient mobile augmented reality (MAR) in multi-access edge computing (MEC), using SR as a promising generative artificial intelligence (GAI) technology. Specifically, SR can enhance low-resolution images into high-resolution versions using GAI technologies. This capability is particularly advantageous in MAR because it lowers the bitrate required for network transmission. However, the SR process requires considerable computational resources and can introduce latency, potentially overloading the MEC server when there are numerous offload requests for MAR services. In this context, we conduct an empirical study to verify that the computational latency of SR increases with the upscaling level. We therefore demonstrate a trade-off between computational latency and improved service satisfaction when upscaling images for object detection, since upscaling enhances detection accuracy. From this perspective, deciding whether to apply SR for MAR while jointly controlling offloading decisions is challenging. Consequently, to design energy-efficient MAR, we rigorously formulate analytical models for the energy consumption of a MAR device, the overall latency, and the MAR satisfaction with service quality derived from the enforced service accuracy, taking into account the SR process at the MEC server. We then develop a theoretical framework that optimizes computation offloading and SR control for MAR clients by jointly optimizing the offloading and SR decisions, considering their trade-off in MAR with MEC. Finally, the performance evaluation indicates that our proposed framework effectively supports MAR services by efficiently managing offloading and SR decisions, balancing the trade-offs between energy consumption, latency, and service satisfaction compared to benchmarks.
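
To picture the shape of such a joint decision problem, the toy Python sketch below enumerates the four (offload, SR) combinations and picks the one minimizing a weighted cost of energy and latency minus an accuracy reward. All numbers, weights, and names here are illustrative assumptions, not the paper's actual cost model or measurements.

```python
from itertools import product

# Toy per-request costs for each (offload, apply_SR) decision.
# Values are illustrative placeholders, not measurements from the paper.
ENERGY   = {(0, 0): 3.0,  (0, 1): 4.5,  (1, 0): 1.2,  (1, 1): 1.4}    # device energy (J)
LATENCY  = {(0, 0): 0.20, (0, 1): 0.35, (1, 0): 0.25, (1, 1): 0.40}   # seconds
ACCURACY = {(0, 0): 0.60, (0, 1): 0.75, (1, 0): 0.70, (1, 1): 0.85}   # detection accuracy

def cost(decision, w_e=1.0, w_l=1.0, w_a=2.0):
    """Weighted sum: energy + latency minus a reward for service accuracy."""
    return w_e * ENERGY[decision] + w_l * LATENCY[decision] - w_a * ACCURACY[decision]

# Brute-force search over the small discrete decision space.
best = min(product((0, 1), repeat=2), key=cost)
print("best (offload, SR) decision:", best, "cost:", round(cost(best), 3))
```

In the paper the decision is made jointly across many MAR clients and coupled through MEC server load; the sketch only shows the single-client trade-off structure.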
(This article belongs to the Special Issue Object Detection Technology)
Figures:
  • Figure 1: Proposed system architecture.
  • Figure 2: Task flow in the proposed system.
  • Figure 3: The impact of input image resolution on the SR computational latency.
  • Figure 4: The result of object detection on the implemented framework.
  • Figure 5: The impact of latency and energy consumption on the change of weight parameters.
  • Figure 6: Total performance of the proposed cost model per number of MAR users (MMN clients) with the EDSR SR model.
  • Figure 7: Latency of the proposed cost model per number of MAR users (MMN clients) with the EDSR SR model.
  • Figure 8: Energy consumption of the proposed cost model per number of MAR users (MMN clients) with the EDSR SR model.
  • Figure 9: Total performance of the proposed cost model per number of MAR users (MMN clients) with the SRGAN SR model.
  • Figure 10: Latency of the proposed cost model per number of MAR users (MMN clients) with the SRGAN SR model.
  • Figure 11: Energy consumption of the proposed cost model per number of MAR users (MMN clients) with the SRGAN SR model.
16 pages, 3755 KiB  
Article
Infrared Dim and Small Target Detection Based on Local–Global Feature Fusion
by Xiao Ling, Chuan Zhang, Zhijun Yan, Bo Wang, Qinghong Sheng and Jun Li
Appl. Sci. 2024, 14(17), 7878; https://doi.org/10.3390/app14177878 - 4 Sep 2024
Cited by 1 | Viewed by 1225
Abstract
Infrared detection, known for its robust anti-interference capabilities, performs well in all weather conditions and various environments. Its applications include precision guidance, surveillance, and early warning systems. However, detecting infrared dim and small targets presents challenges, such as weak target features, blurred targets with small area percentages, missed detections, and false alarms. To address the issue of insufficient target feature information, this paper proposes a high-precision method for detecting dim and small infrared targets based on the YOLOv7 network model, which integrates both local and non-local bidirectional features. A local feature extraction branch is introduced to enhance target information by applying local magnification at the feature extraction layer, allowing more detailed features to be captured. To address the challenge of targets blending into the background, we propose a multi-scale fusion strategy that combines the local branch with global feature extraction. In addition, the use of a 1 × 1 convolution structure and a concat operation reduces model computation. Compared to the baseline, our method shows a 2.9% improvement in mAP50 on a real infrared dataset, with the detection rate reaching 93.84%. These experimental results underscore the effectiveness of our method in extracting relevant features while suppressing background interference in infrared dim and small target detection (IDSTD), making it more robust.
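
The fusion pattern mentioned above (channel concatenation followed by a cheap 1 × 1 convolution) can be sketched in a few lines of PyTorch. This is a generic illustration of that pattern under assumed channel sizes, not the authors' exact module.

```python
import torch
import torch.nn as nn

class ConcatFuse(nn.Module):
    """Fuse two same-resolution feature maps with concat + 1x1 convolution."""
    def __init__(self, c_local: int, c_global: int, c_out: int):
        super().__init__()
        # A 1x1 convolution mixes channels with no spatial cost.
        self.proj = nn.Conv2d(c_local + c_global, c_out, kernel_size=1)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, f_local: torch.Tensor, f_global: torch.Tensor) -> torch.Tensor:
        x = torch.cat([f_local, f_global], dim=1)   # concatenate along the channel axis
        return self.act(self.bn(self.proj(x)))

# Example: fuse a 128-channel local branch with a 256-channel global branch.
fuse = ConcatFuse(128, 256, 256)
out = fuse(torch.randn(1, 128, 40, 40), torch.randn(1, 256, 40, 40))
print(out.shape)   # torch.Size([1, 256, 40, 40])
```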
(This article belongs to the Special Issue Object Detection Technology)
Figures:
  • Figure 1: Improved YOLOv7 network structure diagram. The numbers below represent the size of each feature map. The CBS module is the basic component of the model, consisting of the conv module and the BN module.
  • Figure 2: Vision focus simulation with local input. Dim and small targets are marked with a small red rectangle.
  • Figure 3: Dual-path feature fusion network.
  • Figure 4: The structure of feature fusion.
  • Figure 5: A frame of different data sequence images. Dim and small targets are marked with a red rectangle, and all images are the same size.
  • Figure 6: Object detection results: (a) original infrared image; (b) partial results of our method; (c) partial results of YOLOv7.
  • Figure 7: The detection results of the 11 methods on typical images from data1 to data7: (a)–(g) show the detection results for data1 through data7, respectively; the red rectangles indicate the targets in the images.
  • Figure 8: ROC curve of data7.
  • Figure 9: The changes in the YOLOv7 model and our model with the training epochs: (a) box loss; (b) mAP50; (c) mAP50:95.
17 pages, 2136 KiB  
Article
Adaptive Marginal Multi-Target Bayes Filter without Need for Clutter Density for Object Detection and Tracking
by Zongxiang Liu, Chunmei Zhou and Junwen Luo
Appl. Sci. 2023, 13(19), 11053; https://doi.org/10.3390/app131911053 - 7 Oct 2023
Cited by 3 | Viewed by 1135
Abstract
The random finite set (RFS) approach to multi-target tracking is widely researched because it has a rigorous theoretical basis. However, standard RFS-based filters require many prior parameters, such as the clutter density, the survival and detection probabilities of the target, pruning and merging thresholds, and the initial state of each birth object together with its error covariance matrix. In real application scenarios, these prior parameters are difficult to obtain. To address this problem, an adaptive marginal multi-target Bayes filter that does not require the clutter density is proposed. This filter obviates the need for a prior clutter density and survival probability. Instead of using prior initial states of newborn targets and their error covariance matrices, it uses two scans of observations to generate the initial states of potential birth targets and their error covariance matrices via the least squares technique. Simulation results reveal that the proposed adaptive filter has smaller OSPA and OSPA(2) errors as well as a smaller cardinality error than the adaptive RFS-based filters; its OSPA and OSPA(2) errors are reduced by more than 20% compared to those of the adaptive RFS-based filters.
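
The two-scan initialization of birth targets can be pictured with the standard two-point differencing construction for a constant-velocity state. The NumPy sketch below is a generic textbook version of that idea (the function name, state ordering, and noise model are our assumptions), not the paper's exact least-squares estimator.

```python
import numpy as np

def init_birth_state(z1, z2, T, sigma_z):
    """
    Two-point initialization of a constant-velocity state [x, y, vx, vy]
    from position measurements z1, z2 taken one scan interval T apart.
    Generic textbook construction, assumed here for illustration.
    """
    z1, z2 = np.asarray(z1, float), np.asarray(z2, float)
    pos = z2                      # latest position measurement
    vel = (z2 - z1) / T           # finite-difference velocity estimate
    x0 = np.hstack([pos, vel])

    # Per-axis covariance of [position, velocity] for measurement variance sigma_z^2.
    r = sigma_z ** 2
    P_axis = np.array([[r,        r / T],
                       [r / T, 2 * r / T**2]])
    # Assemble the block-diagonal covariance ordered as [x, y, vx, vy].
    P = np.zeros((4, 4))
    for i in range(2):                 # axis 0 -> x, axis 1 -> y
        idx = [i, i + 2]               # (position, velocity) indices for this axis
        P[np.ix_(idx, idx)] = P_axis
    return x0, P

x0, P0 = init_birth_state([100.0, 50.0], [103.0, 52.0], T=1.0, sigma_z=2.0)
print(x0)           # position from the latest scan, velocity from the difference
print(np.diag(P0))  # [4. 4. 8. 8.]
```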
(This article belongs to the Special Issue Object Detection Technology)
Figures:
  • Figure 1: True trajectories of eleven objects in Example 1.
  • Figure 2: OSPA errors of the filters in Example 1.
  • Figure 3: OSPA(2) errors of the AMTB and AGLMB filters in Example 1.
  • Figure 4: Cardinality estimates of the filters in Example 1.
  • Figure 5: True trajectories of maneuvering objects in Example 2.
  • Figure 6: OSPA errors of the filters in Example 2.
  • Figure 7: OSPA(2) errors of the AMTB and AGLMB filters in Example 2.
  • Figure 8: Cardinality estimates of the filters in Example 2.
19 pages, 17944 KiB  
Article
Underwater Target Recognition via Cayley-Klein Measure and Shape Prior Information in Hyperspectral Imaging
by Bin Zhang, Fan Zhang, Yansen Sun, Xiaojie Li, Pei Liu, Liang Liu and Zelang Miao
Appl. Sci. 2023, 13(13), 7854; https://doi.org/10.3390/app13137854 - 4 Jul 2023
Viewed by 1183
Abstract
Underwater target detection plays a vital role in various application scenarios, ranging from scientific research to military and industrial operations. In this paper, a detection method based on the Cayley–Klein measure and prior shape information is proposed for hyperspectral underwater target identification. First, by analyzing the data features of underwater targets and backgrounds, a background suppression algorithm based on the Cayley–Klein measure is developed to enhance the separation between underwater targets and backgrounds. Then, a local peak-based algorithm is designed to discriminate potential underwater target points based on the local peak features of underwater targets. Finally, pseudo-target points are eliminated based on the prior shape information of underwater targets. Experiments show that the proposed algorithm is efficient and can effectively detect underwater targets in hyperspectral images.
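
The local-peak screening step can be illustrated with a standard maximum-filter comparison over a background-suppressed score map. The sketch below is only a generic illustration of that idea (the window size, threshold, and toy score map are arbitrary assumptions), not the paper's Cayley–Klein pipeline.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_peaks(score_map: np.ndarray, window: int = 5, threshold: float = 0.5):
    """Return (row, col) coordinates where the score map attains a local maximum
    within a window x window neighborhood and exceeds the threshold."""
    local_max = maximum_filter(score_map, size=window)
    peaks = (score_map == local_max) & (score_map > threshold)
    return np.argwhere(peaks)

# Toy background-suppressed score map with two bright spots.
score = np.zeros((64, 64))
score[20, 30] = 1.0
score[45, 10] = 0.8
print(local_peaks(score))   # two candidate points: (20, 30) and (45, 10)
```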
(This article belongs to the Special Issue Object Detection Technology)
Figures:
  • Figure 1: (a) HY-9010-U UAV-mounted hyperspectral imaging instrument; (b) UUV; (c) the visible-light image corresponding to the hyperspectral image, where the red area indicates the location of the UUV.
  • Figure 2: (a) The pseudo-color image corresponding to the hyperspectral image, where the red box indicates the underwater target region; (b)–(d) the results of d_E^global, d_E^local, and d_F, respectively; (e)–(g) the 3D views of d_E^global, d_E^local, and d_F, respectively.
  • Figure 3: The first column (a, d, g) shows the false-color images corresponding to the hyperspectral images, where (a) is the target area (the red box marks the underwater target region) and (d) is the background area; the second column (b, e, h) shows the d_F results; the third column (c, f, i) shows the 3D views of d_F.
  • Figure 4: (a) The result of d_F; (b) the result of local gravity clustering corresponding to d_F; (c) the homogeneous region corresponding to the center pixel.
  • Figure 5: Diagram of the target and background window size setting involved in the SCR and ROC calculations.
  • Figures 6–9: Results of experiments 1–4: (a) a red box indicating the location of the underwater target on the visible-light image; (b)–(i) results of the proposed algorithm, Kernel IF, VABS, SFBA, SSFAD, Global RX, SRX_LOCAL, and LRASR, respectively; (b')–(i') partial 3D views of the region surrounding the detected underwater target for each method.
18 pages, 1726 KiB  
Article
Radar Target Localization with Multipath Exploitation in Dense Clutter Environments
by Rui Ding, Zhuang Wang, Libing Jiang and Shuyu Zheng
Appl. Sci. 2023, 13(4), 2032; https://doi.org/10.3390/app13042032 - 4 Feb 2023
Cited by 6 | Viewed by 2201
Abstract
The performance of classic radar geometry, which relies on the line-of-sight (LOS) signal transmitted from the radar to the target in free space, is degraded by multipath echoes in urban areas, where non-line-of-sight (NLOS) signals reflected by obstacles are also received by the radar. Based on prior information about the urban environment, this article proposes a novel two-stage localization algorithm that exploits multipath in dense clutter environments. In the offline stage, multipath propagation parameters of uniformly distributed samples in the radar field of view are predicted by the ray-tracing technique. In the online stage, a rough location of the target is estimated from the maximum similarity between the measurements and the predicted parameters of reference samples at different locations. The similarity is described by the likelihood between the measurements and the predicted multipath parameters with respect to all possible association hypotheses, and a gating threshold is derived to exclude unlikely hypotheses and reduce the computational burden. The accurate target location is then acquired by non-linear least squares (NLS) optimization over the associated multipath components. Simulation results under various noise conditions show that the proposed method provides robust and accurate target localization in dense clutter, and the offline pre-calculation of ray-tracing ensures the real-time performance of the localization algorithm. The root mean square error (RMSE) of the simulation results shows the advantage of the proposed method over the existing method. These results suggest that the proposed method can be applied to NLOS target localization in complex environments.
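
The two-stage structure (coarse matching against precomputed reference samples, then NLS refinement) can be sketched generically with range-only residuals. The Python snippet below only illustrates that structure; the anchor positions, grid, and noise level are made-up assumptions, and it does not reproduce the paper's ray-tracing multipath model.

```python
import numpy as np
from scipy.optimize import least_squares

# Virtual radar (anchor) positions implied by multipath reflections (made-up values).
ANCHORS = np.array([[0.0, 0.0], [120.0, 0.0], [60.0, 90.0]])

def predicted_ranges(p):
    """Predicted range to each anchor for a candidate target position p = (x, y)."""
    return np.linalg.norm(ANCHORS - p, axis=1)

def localize(measured, grid):
    # Coarse stage: pick the precomputed grid sample whose predicted ranges best match.
    coarse = min(grid, key=lambda p: np.sum((predicted_ranges(p) - measured) ** 2))
    # Fine stage: NLS refinement of the range residuals around the coarse fix.
    sol = least_squares(lambda p: predicted_ranges(p) - measured, x0=coarse)
    return coarse, sol.x

true_pos = np.array([40.0, 35.0])
rng = np.random.default_rng(0)
measured = predicted_ranges(true_pos) + rng.normal(0.0, 0.5, size=3)   # noisy ranges

grid = [np.array([x, y]) for x in range(0, 121, 10) for y in range(0, 91, 10)]
coarse, fine = localize(measured, grid)
print("coarse:", coarse, "refined:", np.round(fine, 2))  # refined estimate lands near (40, 35)
```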
(This article belongs to the Special Issue Object Detection Technology)
Figures:
  • Figure 1: Ray-tracing simulation examples: (a) 3D ray-tracing; (b) 2D ray-tracing.
  • Figure 2: Framework of the proposed target localization algorithm.
  • Figure 3: A single-bounce path from radar R to the target T reflected at position O, and the equivalent direct path from the virtual radar R1 to the target.
  • Figure 4: Simulation scenario setup.
  • Figure 5: Number of round-trip multipaths in the study area: (a) reflections limited to single-bounce; (b) reflections limited to double-bounce.
  • Figure 6: Ground truth and measurements with clutter: (a) target trajectory in the measurement space; (b) measurements versus target index.
  • Figure 7: Localization error of the proposed algorithm at different locations.
  • Figure 8: RMSE of localization versus STD of the measurement: (a) grid-based results; (b) NLS-based results.
  • Figure 9: RMSE of localization versus STD of the measurement: (a) RMSE versus distance measurement; (b) RMSE versus angle measurement.
  • Figure 10: Comparison with the referenced algorithm in [45].