Radar Sensors for Target Tracking and Localization

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Radar Sensors".

Deadline for manuscript submissions: closed (5 March 2024) | Viewed by 28898

Special Issue Editors

Dr. Le Zheng
Guest Editor
School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
Interests: automotive radar; statistical signal processing; data fusion; integrated circuit design; wireless communication

Dr. Junhui Qian
Guest Editor
College of Communication Engineering, Chongqing University, 174 Sha Pingba, Chongqing 400044, China
Interests: MIMO radar; MIMO communication

Dr. Peng Chen
Guest Editor
State Key Laboratory of Millimeter Waves, Southeast University, Nanjing 210096, China
Interests: radar signal processing; millimeter wave radar system; target localization; super-resolution methods

Special Issue Information

Dear Colleagues,

Tracking and localization play an important role in aerospace, autonomous driving, robotics and environment perception applications. Compared to a single antenna, radar sensor arrays provide additional degrees of freedom in the spatial domain which, with the aid of advanced signal processing algorithms, can be exploited as a valuable resource for tasks such as target tracking and localization.

This Special Issue focuses on all types of radar sensors for target tracking and localization. We seek original, completed and unpublished work not currently under review by any other journal/magazine/conference. The topics of interest include, but are not limited to:

  • Multi-modal target tracking and localization
  • Target detection and classification
  • Multi-sensor remote sensing
  • Space-time adaptive methods
  • Channel characterization, modelling, estimation and equalization
  • Source localization
  • Target tracking algorithms
  • Multi-modal signal processing
  • Sensor array and multichannel processing
  • Optimization methods for signal processing

Dr. Le Zheng
Dr. Junhui Qian
Dr. Peng Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research

25 pages, 483 KiB  
Article
A Robust Interacting Multi-Model Multi-Bernoulli Mixture Filter for Maneuvering Multitarget Tracking under Glint Noise
by Benru Yu, Hong Gu and Weimin Su
Sensors 2024, 24(9), 2720; https://doi.org/10.3390/s24092720 - 24 Apr 2024
Viewed by 996
Abstract
In practical radar systems, changes in the target aspect toward the radar will result in glint noise disturbances in the measurement data. The glint noise has heavy-tailed characteristics and cannot be perfectly modeled by the Gaussian distribution widely used in conventional tracking algorithms. In this article, we investigate the challenging problem of tracking a time-varying number of maneuvering targets in the context of glint noise with unknown statistics. By assuming a set of models for the possible motion modes of each single-target hypothesis and leveraging the multivariate Laplace distribution to model measurement noise, we propose a robust interacting multi-model multi-Bernoulli mixture filter based on the variational Bayesian method. Within this filter, the unknown noise statistics are adaptively learned while filtering, and the predictive likelihood is approximately calculated by means of the variational lower bound. The effectiveness and superiority of our proposed filter are verified via computer simulations.
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
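The heavy-tailed character of glint noise that motivates this filter can be illustrated with a small simulation. The sketch below is not the authors' code: it uses the standard Gaussian-scale-mixture construction of the symmetric multivariate Laplace distribution (an exponentially distributed mixing variable scaling a Gaussian draw), with illustrative noise covariances, and contrasts its outlier rate with a Gaussian of the same covariance.

```python
import numpy as np

rng = np.random.default_rng(0)

def multivariate_laplace(mean, cov, size, rng):
    """Sample a symmetric multivariate Laplace distribution as a
    Gaussian scale mixture: x = mean + sqrt(w) * N(0, cov), w ~ Exp(1)."""
    w = rng.exponential(1.0, size=size)              # exponential mixing variable
    g = rng.multivariate_normal(np.zeros(len(mean)), cov, size=size)
    return mean + np.sqrt(w)[:, None] * g

mean = np.zeros(2)                                   # illustrative range/azimuth noise
cov = np.diag([25.0, 1e-4])

laplace_noise = multivariate_laplace(mean, cov, 100000, rng)
gauss_noise = rng.multivariate_normal(mean, cov, 100000)

# Heavy tails: far more 5-sigma range outliers under the glint-like (Laplace) noise.
sigma_r = 5.0
print("P(|e_r| > 5 sigma), Laplace :", np.mean(np.abs(laplace_noise[:, 0]) > 5 * sigma_r))
print("P(|e_r| > 5 sigma), Gaussian:", np.mean(np.abs(gauss_noise[:, 0]) > 5 * sigma_r))
```
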
Figures

Figure 1: True trajectories of the targets.
Figure 2: GOSPAE, LE, MTE, and FTE of the ML-IMM-MBM filter for different N with β = 8.
Figure 3: GOSPAE, LE, MTE, and FTE of the ML-IMM-MBM filter for different β with N = 6.
Figure 4: Cardinality estimates for filters under study.
Figure 5: GOSPAE, LE, MTE, and FTE for filters under study.
Figure 6: GOSPAE, LE, MTE, and FTE of different filters for varying scale factor.
Figure 7: GOSPAE, LE, MTE, and FTE of different filters for varying glint probability.

22 pages, 6904 KiB  
Article
Harmonic FMCW Radar System: Passive Tag Detection and Precise Ranging Estimation
by Ahmed El-Awamry, Feng Zheng, Thomas Kaiser and Maher Khaliel
Sensors 2024, 24(8), 2541; https://doi.org/10.3390/s24082541 - 15 Apr 2024
Cited by 3 | Viewed by 2560
Abstract
This paper details the design and implementation of a harmonic frequency-modulated continuous-wave (FMCW) radar system, specialized in detecting harmonic tags and achieving precise range estimation. Operating within the 2.4–2.5 GHz frequency range for the forward channel and 4.8–5.0 GHz for the backward channel, this study delves into the various challenges faced during the system’s realization. These challenges include selecting appropriate components, calibrating the system, processing signals, and integrating the system components. In addition, we introduce a single-layer passive harmonic tag, developed specifically for assessing the system, and provide an in-depth theoretical analysis and simulation results. Notably, the system is characterized by its low power consumption, making it particularly suitable for short-range applications. The system’s efficacy is further validated through experimental evaluations in a real-world indoor environment across multiple tag positions. Our measurements underscore the system’s robust ranging accuracy and its ability to mitigate self-interference, showcasing its significant potential for applications in harmonic tag detection and ranging.
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
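For readers unfamiliar with FMCW ranging, the standard relation between beat frequency and target distance can be sketched as follows. The chirp bandwidth and sweep time below are illustrative assumptions (they are not taken from the paper); the harmonic architecture effectively doubles the sweep slope of the backscattered chirp, which is reflected in the doubled backward-channel bandwidth.

```python
C = 3e8  # speed of light, m/s

def fmcw_range(beat_freq_hz, bandwidth_hz, sweep_time_s, harmonic=False):
    """Convert a measured beat frequency to range for a sawtooth FMCW chirp.

    slope = B / T; for a harmonic (frequency-doubling) tag the backscattered
    chirp spans twice the transmitted bandwidth, so the effective slope doubles.
    """
    slope = bandwidth_hz / sweep_time_s
    if harmonic:
        slope *= 2.0
    return C * beat_freq_hz / (2.0 * slope)

# Illustrative numbers only: 100 MHz transmitted sweep over 1 ms, 10 kHz beat.
print(fmcw_range(10e3, 100e6, 1e-3))                 # linear radar: 15.0 m
print(fmcw_range(10e3, 100e6, 1e-3, harmonic=True))  # harmonic tag: 7.5 m
```
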
Figures

Figure 1: Principal block diagram. The forward path is marked in black, and the backward path is marked in blue. The backscattered signal from the passive harmonic tag has twice the bandwidth of the transmitted signal.
Figure 2: Proposed system functionality illustration. The dashed black and red curves represent the reflected signals in the fundamental frequency domain that will be received by a linear radar. The blue and dashed blue curves represent the backscattered chirps in the harmonic domain that will be received by the harmonic radar.
Figure 3: Harmonic transponder physical layout on the left, and the corresponding radiation pattern at 2.45 GHz on the right.
Figure 4: The simplified single-ended transponder circuit model.
Figure 5: Circuit simulation vs. EM simulation.
Figure 6: System model illustrating a graphical representation of the mathematical model.
Figure 7: Detailed block diagram of the realized harmonic FMCW radar. The signal generated by the frequency synthesizer is an FMCW signal of frequency 4.8–5.0 GHz, which then passes through a frequency divider to generate the 2.4–2.5 GHz FMCW signal. This keeps the reference signal as clean as possible.
Figure 8: Loop filter open-loop gain. The blue curve is the gain, and the red curve is the phase. The loop bandwidth occurs at the crossover point of the gain curve, at 1.5 MHz. The phase margin is the phase value at the crossover point relative to −180°, which is around 50° and indicates that the PLL is stable.
Figure 9: Loop filter closed-loop gain. The blue curve is the gain (in dB), and the red curve is the phase (in degrees). The gain starts to decay at 1.5 MHz, and the corresponding phase margin is about 50°.
Figure 10: Total phase noise of the designed PLL. The intersection points of these noise contributions define the PLL’s phase noise floor and help in identifying the key areas for noise optimization to enhance the PLL’s performance in high-precision applications.
Figure 11: Manufactured harmonic FMCW radar, a 6-layer PCB architecture. The digital ground and the RF analog ground are separated and connected at a single point to minimize the noise in both grounds.
Figure 12: Power supply network illustration. In this chart, the main actual components are included to estimate the overall maximum power consumption.
Figure 13: Beat frequency for a target at a distance of 0.37 m. The peak at 9.961 kHz indicates that there is one predominant harmonic tag at a distance of 0.37 m.
Figure 14: Beat frequency for a target at a distance of 1.87 m. The position of the peak increases with the harmonic tag distance.
Figure 15: Probability of range error estimation for harmonic and linear FMCW systems in a multipath environment at a target distance of 1.87 m. The clutter-free environment and the wider bandwidth make the harmonic FMCW radar system outperform the traditional linear FMCW system.
Figure 16: Slew rate of the VCO used to generate the sweeping signal. The slew rate is 329.93 MHz/V.
Figure 17: Measured output signal. The signal is measured by directly connecting an RF cable from the transmitter connector to the spectrum analyzer in order to investigate the power levels of the signal of interest and the harmonics.
Figure 18: Diode conversion loss simulation and measurement results. The conversion loss is measured by designing a PCB that includes the diode. The diode’s input is matched to the fundamental frequencies, and the diode output is matched to the harmonic frequencies. A function generator is used to feed the diode input and vary the power, and a spectrum analyzer is used to capture the diode’s output power at the harmonic frequency.
Figure 19: Measurement setup. The designed system is connected to a laptop to share and plot the data. The real-time data on the screen represent the time-domain signal and the corresponding frequency-domain beat signal for the measured distance.
Figure 20: Frequency domain of the measured beat frequency. The peak of the black curve is at 21.343 kHz, which corresponds to a distance of 1.599 m. The blue curve peaks at 22.251 kHz, indicating a distance of 1.669 m. The red curve peaks at 27.473 kHz, indicating a distance of 2.059 m.
Figure 21: Cumulative distribution function of the ranging error at actual distances of 1.600 m, 1.700 m, and 2.100 m. The 500 measurements used to plot this graph also validate the system’s robustness.

17 pages, 3492 KiB  
Article
Time Convolutional Network-Based Maneuvering Target Tracking with Azimuth–Doppler Measurement
by Jianjun Huang, Haoqiang Hu and Li Kang
Sensors 2024, 24(1), 263; https://doi.org/10.3390/s24010263 - 2 Jan 2024
Cited by 2 | Viewed by 1526
Abstract
In the field of maneuvering target tracking, the combined observations of azimuth and Doppler may cause weak observation or non-observation in the application of traditional target-tracking algorithms. Additionally, traditional target tracking algorithms require multiple pre-defined mathematical models to accurately capture the complex motion states of targets, while model mismatch and unavoidable measurement noise lead to significant errors in target state prediction. To address these challenges, target tracking algorithms based on neural networks, such as recurrent neural networks (RNNs), long short-term memory (LSTM) networks, and transformer architectures, have been widely used in recent years for their unique advantages in achieving accurate predictions. To better model the nonlinear relationship between the observation time series and the target state time series, as well as the contextual relationship among time series points, we present a deep learning algorithm called the recursive downsample–convolve–interact neural network (RDCINN), based on a convolutional neural network (CNN), which downsamples the time series into subsequences and extracts multi-resolution features to model the complex relationships between time series. This overcomes the shortcoming of traditional target tracking algorithms, which use observation information inefficiently under weak observation or non-observation. The experimental results show that our algorithm outperforms other existing algorithms in the scenario of strong maneuvering target tracking with the combined observations of azimuth and Doppler.
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
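The azimuth–Doppler measurement pair that makes this tracking problem weakly observable can be written down directly. The sketch below is a generic measurement model, assuming a stationary observer at the origin and a 2D target state; it is not the authors' code, but it shows why very different states can map to nearly identical observations.

```python
import numpy as np

def azimuth_doppler(state):
    """Map a target state [x, y, vx, vy] (observer at the origin) to
    the azimuth angle and the Doppler (range-rate) measurement."""
    x, y, vx, vy = state
    rng = np.hypot(x, y)
    azimuth = np.arctan2(y, x)               # rad
    range_rate = (x * vx + y * vy) / rng     # m/s, radial velocity component
    return np.array([azimuth, range_rate])

# Two very different states produce almost identical measurements,
# which is the weak-observability issue the paper addresses.
print(azimuth_doppler([1000.0, 1000.0, 10.0, -10.0]))   # tangential motion, zero Doppler
print(azimuth_doppler([5000.0, 5000.0, 30.0, -30.0]))   # same azimuth, zero Doppler
```
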
Figures

Figure 1: The relative positions of the observation point and the maneuvering target in the two-dimensional X-Y plane. The green trajectory illustrates a maneuvering target moving at a uniform or uniformly variable speed along the line between the target and the observation point. The blue trajectory shows a target moving circularly at a uniform velocity around the observation point. The red trajectory shows the movement of a maneuvering target relative to the observation point.
Figure 2: Flowchart of the algorithm training process. Target state and observation pairs are first generated for network training with the motion and observation models, the network is then trained with batch-normalized observations as inputs, and the parameters are updated iteratively by backpropagating the loss between the predicted state and the true state.
Figure 3: Min–max normalization of trajectory segments. The left side of the figure shows the original trajectories generated by 9 different motion models. The black trajectory has only one maneuver, the purple trajectory has two maneuvers, and the green trajectory has four maneuvers. On the right side, the distributions of the normalized trajectory data can be observed. The X and Y ranges of the trajectories are restricted to the interval [0, 1], which ensures that the LASTD does not have excessively large ranges that would pose difficulties during network training.
Figure 4: Model architecture of RDCINN. On the left is a diagram of a sub-block of the binary tree network architecture, which performs splitting, convolution, and interactive learning operations on the input sequence X sequentially and finally outputs the odd and even subsequences X_odd^sec and X_even^sec. The input to the binary tree network is the 2-dimensional sequence of normalized observations z_1:k. It is first mapped through a fully connected layer into E dimensions, and the final output of the network is a 4-dimensional sequence of predicted target states s_1:k.
Figure 5: Trajectory segmentation and reconstruction consist of the following steps: divide the observations z_1:K corresponding to the target states using a sliding-window approach; normalize each segment of the observation time sequence zs(n) as zs*(n); apply denormalization to the predicted target state outputs ss(n)* to obtain the corresponding predicted state sequence ss(n); finally, reconstruct each segment of the predicted target state sequences after denormalization.
Figure 6: Tracking results for trajectory A using different algorithms on the X-Y plane.
Figure 7: MAE results of position tracking by different algorithms for trajectory A.
Figure 8: MAE results of velocity tracking by different algorithms for trajectory A.
Figure 9: Tracking results for trajectory B using different algorithms on the X-Y plane.
Figure 10: MAE results of position tracking by different algorithms for trajectory B.
Figure 11: MAE results of velocity tracking by different algorithms for trajectory B.
Figure 12: Tracking results for trajectory C using different algorithms on the X-Y plane.
Figure 13: MAE results of position tracking by different algorithms for trajectory C.
Figure 14: MAE results of velocity tracking by different algorithms for trajectory C.

20 pages, 14352 KiB  
Article
Time-Lapse GPR Measurements to Monitor Resin Injection in Fractures of Marble Blocks
by Luigi Zanzi, Marjan Izadi-Yazdanabadi, Saeed Karimi-Nasab, Diego Arosio and Azadeh Hojat
Sensors 2023, 23(20), 8490; https://doi.org/10.3390/s23208490 - 16 Oct 2023
Cited by 3 | Viewed by 1458
Abstract
The objective of this study is to test the feasibility of time-lapse GPR measurements for the quality control of repairing operations (i.e., injections) on marble blocks. For the experimental activities, we used one of the preferred repairing fillers (epoxy resin) and some blocks from one of the world’s most famous marble production areas (the Carrara quarries in Italy). The selected blocks were paired in a laboratory by overlapping one over the other after inserting very thin spacers in order to simulate air-filled fractures. Fractures were investigated with a 3 GHz ground-penetrating radar (GPR) before and after the resin injections to measure the amplitude reduction expected when the resin substitutes the air. The results were compared with theoretical predictions based on the reflection coefficient predicted according to the thin-bed theory. A field test was also performed on a naturally fractured marble block selected along the Carrara shore. Both laboratory and field tests validate GPR as an effective tool for the quality control of resin injections, provided that measurements include proper calibration tests to control the amplitude instabilities and drift effects of the GPR equipment. The method is accurate enough to distinguish unfilled fractures from partially filled and totally filled fractures. An automatic algorithm was developed and successfully tested for the rapid quantitative analysis of the time-lapse GPR profiles collected before and after the injections. The whole procedure is mature enough to be proposed to the marble industry to improve the effectiveness of repair interventions and to reduce the waste of natural stone reserves.
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
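The expected amplitude drop when resin replaces air can be anticipated from the interface reflection coefficients alone. The sketch below uses the relative permittivities quoted for Figure 11 (marble 7.7, resin 3.5) and is only a first-order check: the paper's predictions rely on the full thin-bed theory, which also accounts for the fracture thickness and the interference between the two interfaces.

```python
import numpy as np

def reflection_coefficient(eps1, eps2):
    """Normal-incidence reflection coefficient between two non-magnetic,
    low-loss dielectrics with relative permittivities eps1 and eps2."""
    n1, n2 = np.sqrt(eps1), np.sqrt(eps2)
    return (n1 - n2) / (n1 + n2)

EPS_MARBLE, EPS_AIR, EPS_RESIN = 7.7, 1.0, 3.5

r_air = reflection_coefficient(EPS_MARBLE, EPS_AIR)      # ~0.47
r_resin = reflection_coefficient(EPS_MARBLE, EPS_RESIN)  # ~0.19

print(f"marble/air   reflection: {r_air:.3f}")
print(f"marble/resin reflection: {r_resin:.3f}")
print(f"first-order amplitude reduction: {100 * (1 - r_resin / r_air):.1f} %")
```
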
Figures

Figure 1: Top and side views of three fractured samples created for laboratory tests using six Carrara marble blocks. (a) Two small blocks (left) with the fracture modelled between them at the depth of about 5.6 cm with the average fracture opening of 3 mm; two medium blocks (right) with the fracture modelled between them at the depth of about 6.5 cm with the average fracture opening of 3 mm. (b) Two large blocks with the fracture modelled between them at the depth of about 10.5 cm with the average fracture opening of 8 mm.
Figure 2: Carrara marble block for the field test. (a) A pile of abandoned blocks on the Marina di Carrara coast. (b) The selected block with a vertical, visible natural fracture. The vertical yellow arrow shows the side selected for the GPR measurements. (c) Top view of fracture thickness. (d) Front view of fracture thickness. (e) Detail of a metal target embedded near the fracture in the lower part of the block.
Figure 3: The process of simulating artificial fractures and resin injection in marble specimens. (a) Simulating a fracture by overlapping two blocks. (b) Sealing the fracture perimeter with silicone. (c) Resin injections inside the small blocks (left), medium blocks (middle) and large blocks (right). (d) Removing the dried silicone from around the blocks. (e) Checking that resin arrived at all the peripheral parts of the fracture.
Figure 4: (a) The positions of the GPR antenna for different measurements performed in the laboratory. Records 1 and 16 were surveyed on marble blocks of Botticino for amplitude calibration at the beginning and end of each experiment. Records 2 and 3 were surveyed from the central position of the small blocks of Carrara (see Figure 1a, left). The antenna was rotated 90 degrees from record 2 to record 3. Similarly, records 4 and 5 were surveyed from the central position of the medium blocks of Carrara (see Figure 1a, right). Records 6 to 12 were surveyed on the large blocks of Carrara (see Figure 1b) by deploying the antenna at the fixed positions from 6 to 12. Profiles 13, 14 and 15 were recorded as continuous profiles parallel to the longer side of the blocks by moving the antenna from the positions 13, 14, and 15. (b) Measurements on the Carrara marble block with natural fracture used for the field test (see Figure 2b). Profile 1 was recorded as a continuous profile from top to down. Records 2 to 6 were surveyed by placing the antenna at the positions 2 to 6, and yellow numbers are the distances of the antenna’s positions from the top of the block.
Figure 5: An example of a complete session of the laboratory time-triggered data after data processing and stacking. Traces 1 and 16 are the calibration records measured on the Botticino blocks. Traces 2 and 3 belong to the small blocks of Carrara. Traces 4 and 5 belong to the medium blocks. Traces 6–12 belong to the large blocks. The reflections of the artificial fractures are highlighted by three rectangles.
Figure 6: Amplitude percentage variations of the artificial fracture reflections measured on the large block at the selected positions after each injection phase. Positive numbers indicate amplitude decrease. Yellow bars indicate the cumulative amount of the resin injected after each phase.
Figure 7: Comparison of profile 14 measured on the large specimen before and after injections on the third day (left) and on the last day (right), respectively. Data are plotted after time and amplitude calibration, dewow filter, and envelope extraction.
Figure 8: Data of Figure 7 overlapped after decimation by a factor of eight along the horizontal axis. In black, the data before injections, and in red, the data after injections.
Figure 9: The radar profile measured on the fractured block on the Marina di Carrara before and after the two injection phases. Data are plotted after time and amplitude calibration, dewow filter, and envelope extraction. The arrows indicate the approximate positions of the point measurements reported in Figure 10.
Figure 10: Amplitude percentage reductions of the natural fracture reflection measured at the selected positions after the first and the second injection phases. Position 6 is not reported because the fracture reflection was interfering with a strong diffraction generated by the metal element embedded in the block.
Figure 11: Comparison of the predicted amplitudes for the air-filled fractures (purple circles) versus the resin-filled fractures (green squares) for a 3 GHz antenna. Marble relative permittivity: 7.7. Resin relative permittivity: 3.5. The right plot shows the percentage amplitude reduction expected when the resin substitutes the air within the fracture.
Figure 12: Percentage reduction in the reflected amplitude automatically picked along profile 14 measured on the large specimen before and after resin injections (third day and last day, respectively).

19 pages, 1070 KiB  
Article
A Gated-Recurrent-Unit-Based Interacting Multiple Model Method for Small Bird Tracking on Lidar System
by Bing Han, Hongchang Wang, Zhigang Su, Jingtang Hao, Xinyi Zhao and Peng Ge
Sensors 2023, 23(18), 7933; https://doi.org/10.3390/s23187933 - 16 Sep 2023
Cited by 1 | Viewed by 1221
Abstract
Lidar presents a promising solution for bird surveillance in airport environments. However, the low observation refresh rate of Lidar poses challenges for tracking bird targets. To address this problem, we propose a gated recurrent unit (GRU)-based interacting multiple model (IMM) approach for tracking bird targets at low sampling frequencies. The proposed method constructs various GRU-based motion models to extract different motion patterns and to give different predictions of the target trajectory in place of traditional target motion models, and uses an interacting multiple model mechanism to dynamically select the most suitable GRU-based motion model for trajectory prediction and tracking. In order to fuse the GRU-based motion models and the IMM, an approximate state transfer matrix method is proposed to transform the prediction of the GRU-based network into an explicit state transfer model, which enables the calculation of the model probabilities. Simulations carried out on an open bird trajectory dataset show that our method outperforms classical tracking methods at low refresh rates, with at least a 26% improvement in tracking error. The results show that the proposed method is effective for tracking small bird targets based on Lidar systems, as well as for other low-refresh-rate tracking systems.
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
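The IMM mechanism that selects among the GRU-based motion models relies on the standard mixing and model-probability update. A minimal sketch of that bookkeeping step is given below with illustrative numbers; the GRU motion models themselves and the approximate state-transfer-matrix construction from the paper are not reproduced.

```python
import numpy as np

def imm_model_probabilities(mu_prev, transition, likelihoods):
    """One IMM cycle of model-probability bookkeeping.

    mu_prev     : prior model probabilities, shape (M,)
    transition  : Markov model-switching matrix, transition[i, j] = P(j | i)
    likelihoods : measurement likelihood of each model's prediction, shape (M,)
    Returns (mixing weights, updated model probabilities).
    """
    c_j = transition.T @ mu_prev                      # predicted model probabilities
    mixing = transition * mu_prev[:, None] / c_j      # mixing[i, j] = P(model i | model j)
    mu_new = likelihoods * c_j
    mu_new /= mu_new.sum()                            # normalized posterior probabilities
    return mixing, mu_new

# Illustrative: three motion models, the second one explains the data best.
mu = np.array([0.6, 0.3, 0.1])
P = np.array([[0.90, 0.05, 0.05],
              [0.05, 0.90, 0.05],
              [0.05, 0.05, 0.90]])
Lhood = np.array([0.02, 0.30, 0.05])
_, mu = imm_model_probabilities(mu, P, Lhood)
print(mu)   # probability mass shifts toward the second model
```
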
Figures

Figure 1: Trajectory at different sampling time intervals.
Figure 2: Position of the proposed method in the state of the art.
Figure 3: The principle of IMM-GRU.
Figure 4: The structure of the GRU-based motion model.
Figure 5: The principle of back-tracking correction.
Figure 6: Comparison of (a) prediction performance and (b) estimation performance at varying sampling time intervals.
Figure 7: Tracking a target on a circular trajectory. (a) Tracking trajectory at Δt = 1 s. (b) RMSE performance at Δt = 1 s. (c) The statistical proportion of motion model weights in tracking at Δt = 1 s.
Figure 8: Tracking on the same trajectory with different sampling intervals. (a) Tracking trajectory at Δt = 1 s. (b) RMSE performance at Δt = 1 s. (c) The statistical proportion of motion model weights in tracking at Δt = 1 s. (d) Tracking trajectory at Δt = 3 s. (e) RMSE performance at Δt = 3 s. (f) The statistical proportion of motion model weights in tracking at Δt = 3 s. (g) Tracking trajectory at Δt = 5 s. (h) RMSE performance at Δt = 5 s. (i) The statistical proportion of motion model weights in tracking at Δt = 5 s.
Figure 9: Comparison of prediction and estimation performance at varying numbers of motion models. (a) Prediction performance. (b) Estimation performance.

31 pages, 32886 KiB  
Article
Remarks on Geomatics Measurement Methods Focused on Forestry Inventory
by Karel Pavelka, Eva Matoušková and Karel Pavelka, Jr.
Sensors 2023, 23(17), 7376; https://doi.org/10.3390/s23177376 - 24 Aug 2023
Cited by 4 | Viewed by 1963
Abstract
This contribution focuses on a comparison of modern geomatics technologies for the derivation of growth parameters in forest management. The present text summarizes the results of our measurements over the last five years. As a case project, a mountain spruce forest with planned forest logging was selected. In this locality, terrestrial laser scanning (TLS) and terrestrial and drone close-range photogrammetry were experimentally used, as were PLS (personal laser scanning) mobile technology and ALS (aerial laser scanning). Results concerning data joining, usability, and the economics of all technologies for forest management and ecology are discussed. ALS is expensive for small areas, and the results were not suitable for detailed parameter derivation. The RPAS (remotely piloted aircraft systems, known as “drones”) method of data acquisition combines the benefits of close-range and aerial photogrammetry. If the approximate height and number of the trees are known, one can approximately calculate the extracted cubage of wood mass before forest logging. The use of conventional terrestrial close-range photogrammetry and TLS proved to be inappropriate and practically unusable in our case, and also in standard forestry practice after consultation with forestry workers. On the other hand, the use of PLS is very simple and allows one to quickly derive the required parameters and further calculate, for example, the cubic volume of wood stockpiles. The results from our research into forestry show that drones can be used to estimate quantities (wood cubature) and inspect the health status of spruce forests. However, PLS seems, nowadays, to be the best solution in forest management for deriving forest parameters. Our results are mainly oriented to practice and in no way diminish the general research in this area.
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
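The tree-top detection step referred to in the drone workflow (a canopy height model followed by a focal-statistics local-maximum search) can be reproduced in a few lines. The sketch below is a generic illustration on a synthetic raster, assuming the CHM has already been computed from the DSM and the terrain model; it is not the authors' ArcGIS workflow.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_tree_tops(chm, window=5, min_height=5.0):
    """Return row/col indices of local maxima of a canopy height model.

    A cell is a tree-top candidate if it equals the maximum of its
    window x window neighbourhood and exceeds min_height (meters),
    which suppresses the spurious maxima found on meadows.
    """
    local_max = maximum_filter(chm, size=window) == chm
    return np.argwhere(local_max & (chm > min_height))

# Synthetic CHM: two "trees" on flat ground.
chm = np.zeros((50, 50))
for (r, c, h) in [(10, 12, 18.0), (30, 35, 22.0)]:
    rr, cc = np.mgrid[0:50, 0:50]
    chm = np.maximum(chm, h * np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / 20.0))

print(detect_tree_tops(chm))   # two detected tree tops: (10, 12) and (30, 35)
```
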
Figures

Figure 1: (a–c) Examples of photographs inside the study area, (d) detail of the study area (eBee drone photo), and (e) localization of the study area (Google Maps).
Figure 2: Tie points (sparse point cloud; 665,000 points).
Figure 3: A fragment of the dense point cloud.
Figure 4: An example from a photo set with control points (left); processed model as textured mesh (right).
Figure 5: TLS in the tested forest area: (a) Surphaser 25X and (b) BLK 360.
Figure 6: Colored joined point cloud from the BLK 360 laser scanner (520 million points, 9 scans).
Figure 7: Panoramic model viewer, showing the possibility of a stem diameter (DBH) measurement (BLK 360, Recap software, version 6.2.0.66).
Figure 8: Problems of scan joining—some stems are not joined precisely (errors of decimeters).
Figure 9: New measurement with BLK 360, April 2020.
Figure 10: (a) eBee, Sensefly, (b) DJI Phantom 4, and (c) DJI Mavic Pro.
Figure 11: (a) A fragment from an IR photo (6 May 2018) and (b) an RGB photo (12 May 2018); both show that the quality of the images is not excellent (especially in the IR (infra-red) photo, taken in the late afternoon, where the GCPs (ground control points) are poorly visible); eBee cameras Canon S110 IR and ELPH (RGB).
Figure 12: (a) DSM from the eBee image set, (b) 3D textured model (some images were not processed due to strong winds and bad texture), and (c) orthophoto.
Figure 13: The 3D textured model (derived from a photo set from DJI Phantom 3 on 27 April 2018); the visibly poor quality of the model is due to strong winds.
Figure 14: Photo from DJI Mavic Pro, 6 May 2018 (flight level 25 m, GSD = 1 cm—this seems to be too detailed for the creation of a DSM based on image correlation).
Figure 15: Photos from DJI Phantom 4, 12 May 2018 (three flight levels—(a) 30 m, (b) 55 m, and (c) 80 m).
Figure 16: (a) Scheme of processing, and (b) a fragment of a photo from DJI Phantom 4 with a GCP.
Figure 17: Orthophotos, DJI Phantom 4, from flight altitudes of (a) 30 m, (b) 55 m, and (c) 80 m.
Figure 18: (a) Three flight levels with the DJI Phantom, and (b) the 3D model created in Agisoft Photoscan (based on DJI Phantom 4 data).
Figure 19: Processing in ArcGIS: (a) selected study area, (b) CHM (DSM − DRM = CHM).
Figure 20: (a) Using the "Focal Statistics" function for locating local maxima; in combination with orthophotos, this shows the localization of trees. (b) Outside of the forested area, errors occurred in local maximum identification (small changes in the DSM on the meadow).
Figure 21: (a) ZEB-REVO laser scanner, and (b) forest point cloud.
Figure 22: Top view of the point cloud from the testing area. All stems are easy to find.
Figure 23: Side view of the point cloud. It is possible to make a cut at the height of the DBH.
Figure 24: Colored point cloud (a) and DSM from LiBackpack DGC50 (b).
Figure 25: Automatically determined tree stems (Limapper).
Figure 26: (a) A fragment from the ALS data set (oblique view of the test site), and (b) ALS data (side view of the test site; the terrain, low undergrowth, and partly individual trees can be seen).
Figure 27: (a) iPad PRO, and (b) detail of the iPad PRO lidar sensor and camera lenses.
Figure 28: Lidar spots from iPad Pro (infrared image, distance 1.5 m, point spacing approximately 10 cm).
Figure 29: Scanning with iPad PRO in the forest test area: (a) on the left, relatively well-scanned stems; (b) after several minutes of scanning, when returning to the starting point, big errors appear (errors reach meters).
Figure 30: Results from iPhone PRO scanning; a general view of the test site.
Figure 31: Left: smartphone with RTK GNSS using videogrammetry (3D Survey); right: viDoc (Pix4D) with iPad PRO and iPhone PRO.
Figure 32: Searching for dead trees (after the attack of bark beetles).

17 pages, 624 KiB  
Article
Passive Radar Tracking in Clutter Using Range and Range-Rate Measurements
by Asma Asif, Sithamparanathan Kandeepan and Robin J. Evans
Sensors 2023, 23(12), 5451; https://doi.org/10.3390/s23125451 - 8 Jun 2023
Cited by 2 | Viewed by 2679
Abstract
Passive bistatic radar research is essential for accurate 3D target tracking, especially in the presence of missing or low-quality bearing information. Traditional extended Kalman filter (EKF) methods often introduce bias in such scenarios. To overcome this limitation, we propose employing the unscented Kalman filter (UKF) for handling the nonlinearities in 3D tracking, utilizing range and range-rate measurements. Additionally, we incorporate the probabilistic data association (PDA) algorithm with the UKF to handle cluttered environments. Through extensive simulations, we demonstrate a successful implementation of the UKF-PDA framework, showing that the proposed method effectively reduces bias and significantly advances tracking capabilities in passive bistatic radars.
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
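The nonlinearity that the UKF has to handle comes from the bistatic range and range-rate measurement model. A generic version of that model is sketched below, assuming a static transmitter and receiver and a 3D Cartesian target state; the geometry is illustrative and this is not the authors' code.

```python
import numpy as np

def bistatic_range_rangerate(state, tx, rx):
    """Bistatic range and range-rate for state = [x, y, z, vx, vy, vz].

    Range is the sum of transmitter-target and target-receiver distances;
    range-rate is its time derivative (radial velocity toward both sites).
    """
    p, v = np.asarray(state[:3]), np.asarray(state[3:])
    d_tx, d_rx = p - np.asarray(tx), p - np.asarray(rx)
    r_tx, r_rx = np.linalg.norm(d_tx), np.linalg.norm(d_rx)
    bistatic_range = r_tx + r_rx
    bistatic_rate = v @ (d_tx / r_tx) + v @ (d_rx / r_rx)
    return np.array([bistatic_range, bistatic_rate])

tx = [0.0, 0.0, 0.0]          # illuminator of opportunity
rx = [10e3, 0.0, 0.0]         # passive receiver 10 km away
state = [20e3, 15e3, 9e3, -150.0, 40.0, 0.0]
print(bistatic_range_rangerate(state, tx, rx))
```
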
Figures

Figure 1: System model.
Figure 2: Illustration of EKF and UKF.
Figure 3: Measurement validation illustration.
Figure 4: Tracking using EKF.
Figure 5: Tracking using EKF and UKF.
Figure 6: Tracking using UKF.
Figure 7: Tracking using UKF and PDA in the range and range-rate domain.
Figure 8: Tracking using UKF and PDA in the 3D Cartesian domain.
Figure 9: Histogram of the tracker at scan 35.
Figure 10: Tracking of the actual aircraft using real-time measurements.

17 pages, 5243 KiB  
Article
Multi-Target Tracking Algorithm Combined with High-Precision Map
by Qingru An, Yawen Cai, Juan Zhu, Sijia Wang and Fengxia Han
Sensors 2022, 22(23), 9371; https://doi.org/10.3390/s22239371 - 1 Dec 2022
Cited by 1 | Viewed by 2131
Abstract
On high-speed roads, there are certain blind areas within the radar coverage, especially when the blind zone lies in curved road sections; because the radar has no measurement point information over multiple frames there, it is easy for a large deviation to arise between the real trajectory and the filtered trajectory. In this paper, we propose a track prediction method combined with a high-precision map to solve the problem of scattered tracks when targets are in the blind area. First, the lane centerline is fitted to the upstream and downstream lane edges obtained from the high-precision map in fixed steps, and the off-north angle at each fitted point is obtained. Secondly, the normal trajectory is predicted according to the conventional method; for the extrapolated trajectory, the off-north angle of the lane centerline at the current position of the trajectory is obtained, the current coordinate system is converted from the east-north-up (ENU) coordinate system to the vehicle coordinate system, and the lateral velocity of the target is set to zero in the vehicle coordinate system to reduce the error caused by the lateral velocity drag of the target. Finally, the normal trajectory is updated and corrected, and the normal and extrapolated trajectories are managed and reported. The experimental results show that the accuracy and convergence of the proposed method are much better than those of the traditional methods.
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
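The core correction for blind-zone extrapolation is a frame rotation: the track velocity is rotated from the ENU frame into a lane-aligned (vehicle) frame using the off-north angle of the lane centerline, the lateral component is zeroed, and the result is rotated back. A minimal 2D sketch of that step is given below with an illustrative off-north angle; it is not the authors' implementation.

```python
import numpy as np

def suppress_lateral_velocity(v_enu, off_north_angle_rad):
    """Project an ENU velocity [v_east, v_north] onto the lane direction.

    The lane heading is given by its off-north angle (clockwise from north).
    The cross-lane (lateral) component is set to zero before reporting,
    so an extrapolated track cannot drift sideways out of the lane.
    """
    a = off_north_angle_rad
    lane_dir = np.array([np.sin(a), np.cos(a)])   # unit vector along the lane in ENU
    v_along = (v_enu @ lane_dir) * lane_dir       # keep the longitudinal part only
    return v_along

v_enu = np.array([3.0, 20.0])                     # m/s, mostly northbound
print(suppress_lateral_velocity(v_enu, np.deg2rad(10.0)))
```
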
Figures

Figure 1: Schematic diagram of the LLA coordinate system.
Figure 2: Schematic diagram of the ECEF coordinate system.
Figure 3: Schematic diagram of the conversion from the ENU coordinate system to the radar coordinate system.
Figure 4: Radar blind zone on a curved road.
Figure 5: Tracking algorithm combined with the high-precision map.
Figure 6: Coverage of the traffic radar.
Figure 7: Radar installation in the actual scene.
Figure 8: Radar blind zone on a straight road.
Figure 9: Radar track filtering results on the straight road. (a) Traditional method filtering results; (b) track filtering results combined with the high-precision map.
Figure 10: Radar blind zone on a curved road.
Figure 11: Radar track filtering results on the curved road. (a) Traditional method filtering results; (b) track filtering results combined with the high-precision map.

20 pages, 2577 KiB  
Article
Meter Wave Polarization-Sensitive Array Radar for Height Measurement Based on MUSIC Algorithm
by Guoxuan Wang, Guimei Zheng, Hongzhen Wang and Chen Chen
Sensors 2022, 22(19), 7298; https://doi.org/10.3390/s22197298 - 26 Sep 2022
Cited by 2 | Viewed by 1491
Abstract
Obtaining good measurement performance with meter wave radar has always been a difficult problem. Especially in low-elevation areas, the multipath effect seriously affects the measurement accuracy of meter wave radar. The generalized multiple signal classification (MUSIC) algorithm is a well-known measurement method that does not require decorrelation processing. The polarization-sensitive array (PSA) has the advantage of polarization diversity, and the polarization smoothing generalized MUSIC algorithm demonstrates good angle estimation performance in low-elevation areas when based on a PSA. Nevertheless, its computational complexity is still high, and its estimation accuracy and discrimination success probability need to be further improved. In addition, it cannot estimate the polarization parameters. To solve these problems, a polarization synthesis steering vector MUSIC algorithm is proposed in this paper. First, the MUSIC algorithm is used to obtain the spatial spectrum of the meter wave PSA. Second, the received data are properly deformed and classified. The Rayleigh–Ritz method is used to decouple the polarization parameters from the direction-of-arrival angle. Third, the geometric relationship and prior information of the direct wave and the reflected wave are used to perform dimension-reduction processing, reducing the computational complexity of the algorithm. Finally, the geometric relationship is used to obtain the target height measurement results. Extensive simulation results illustrate the accuracy and superiority of the proposed algorithm.
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
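As a point of reference for the spectra in Figure 2, the textbook (non-polarimetric) MUSIC pseudo-spectrum for a uniform linear array can be computed as below. This is the standard algorithm only, shown with a synthetic single-source example; the paper's polarization-smoothing and polarization-synthesis steering-vector variants, the Rayleigh–Ritz decoupling, and the multipath handling are not reproduced here.

```python
import numpy as np

def music_spectrum(R, n_sources, n_elements, d_over_lambda, angles_deg):
    """Classical MUSIC pseudo-spectrum for an N-element uniform linear array.

    R       : sample covariance matrix of the array snapshots, shape (N, N)
    returns : P(theta) = 1 / ||E_n^H a(theta)||^2 over the requested angles
    """
    eigval, eigvec = np.linalg.eigh(R)              # eigenvalues in ascending order
    En = eigvec[:, : n_elements - n_sources]        # noise subspace
    k = 2.0 * np.pi * d_over_lambda
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(1j * k * np.arange(n_elements) * np.sin(theta))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(spectrum)

# Illustrative use: one source at 3 degrees, 8 elements, half-wavelength spacing.
rng = np.random.default_rng(1)
N, T, theta0 = 8, 200, np.deg2rad(3.0)
a0 = np.exp(1j * np.pi * np.arange(N) * np.sin(theta0))
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
noise = 0.1 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
X = np.outer(a0, s) + noise
R = X @ X.conj().T / T
angles = np.arange(-10, 10.1, 0.1)
print(angles[np.argmax(music_spectrum(R, 1, N, 0.5, angles))])   # close to 3.0 degrees
```
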
Figures

Figure 1: Multipath signal model of the meter wave PSA.
Figure 2: Spectrum estimation of the three methods when SNR = 20 dB and T = 100: (a) θ_d = 1°; (b) θ_d = 2°.
Figure 3: Computational complexity with respect to the number of antenna elements.
Figure 4: RMSE of elevation with respect to the SNR when T = 100: (a) θ_d = 1°; (b) θ_d = 2°.
Figure 5: RMSE with respect to the SNR for the two polarization parameters and their CRBs using the P-SSV-MUSIC algorithm.
Figure 6: RMSE of elevation with respect to the snapshot number when SNR = 20 dB: (a) θ_d = 1°; (b) θ_d = 2°.
Figure 7: RMSE of elevation with respect to the phase aberration when SNR = 20 dB and T = 100: (a) θ_d = 1°; (b) θ_d = 2°.
Figure 8: Discrimination success probability with respect to the search angle when T = 100: (a) SNR = 15 dB; (b) SNR = 20 dB.
Figure 9: Elevation measurement results of the target.
Figure 10: Height measurement results of the target.
Figure 11: Error results of target elevation measurement.
Figure 12: Error results of target height measurement.

16 pages, 3422 KiB  
Communication
Calibration of Radar RCS Measurement Errors by Observing the Luneburg Lens Onboard the LEO Satellite
by Jie Yang, Ning Li, Pengbin Ma and Bin Liu
Sensors 2022, 22(14), 5421; https://doi.org/10.3390/s22145421 - 20 Jul 2022
Cited by 1 | Viewed by 2272
Abstract
Accurate radar RCS measurements are critical to the feature recognition of spatial targets. A calibration method for radar RCS measurement errors is proposed for the first time in the context of special target tracking, by observing the Luneburg Lens onboard a LEO satellite. The Luneburg Lens has favorable RCS scattering properties for the radar microwave. Thus, laboratory RCS measurements of the Luneburg Lens, at a fixed incident frequency and with different incident orientations of the radar microwave, are implemented in order to build a database. The incident orientation of the radar microwave in the satellite body frame is calculated by taking advantage of the precise orbit parameters, whose errors are only at the level of several centimeters, and the actual satellite attitude parameters. According to the incident orientation, the referenced RCS measurements can be effectively obtained by bilinear interpolation in the database. The errors of the actual RCS measurements can thus be calibrated by comparing the referenced and the actual RCS measurements. In the RCS measurement experiment, which lasts less than 400 s, the actual RCS measurement errors of the Luneburg Lens are nearly all less than 0 dBsm, which indicates that the RCS measurement errors of spatial targets can be effectively calculated by the proposed calibration method. After the elaborated calibration, the RCS measurements of spatial targets can be accurately obtained by radar tracking.
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
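The referenced-RCS lookup described above is a bilinear interpolation of the laboratory database over the incident azimuth and elevation. A generic sketch of that step is shown below, using a synthetic grid as a stand-in for the measured Luneburg Lens database; the grid values, spacing, and the example orientation are assumptions for illustration only.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic stand-in for the laboratory RCS database (dBsm) on a regular
# azimuth/elevation grid; the real database is measured in the laboratory.
azimuth_deg = np.arange(0.0, 360.0, 5.0)
elevation_deg = np.arange(0.0, 61.0, 15.0)
az_grid, el_grid = np.meshgrid(azimuth_deg, elevation_deg, indexing="ij")
rcs_db = 10.0 + 0.5 * np.cos(np.deg2rad(az_grid)) - 0.02 * el_grid

lookup = RegularGridInterpolator((azimuth_deg, elevation_deg), rcs_db)

def referenced_rcs(az, el):
    """Bilinear interpolation of the database at the incident orientation
    (azimuth/elevation of the radar microwave in the satellite body frame)."""
    return float(lookup([[az, el]])[0])

measured_rcs = 9.8                                   # dBsm, from radar tracking (illustrative)
reference = referenced_rcs(132.4, 23.7)
print(f"referenced RCS: {reference:.2f} dBsm, error: {measured_rcs - reference:+.2f} dB")
```
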
Figures

Figure 1: The block diagram of the RCS measurement system.
Figure 2: The graphical description of different frames.
Figure 3: The radar microwave incident angles.
Figure 4: A large-scale target-characteristic laboratory.
Figure 5: The database of laboratory RCS measurements.
Figure 6: The RCS measurement results at the elevation of 0 deg.
Figure 7: The RCS measurement results at the elevation of 15 deg.
Figure 8: The RCS measurement results at the elevation of 30 deg.
Figure 9: The RCS measurement results at the elevation of 45 deg.
Figure 10: The RCS measurement results at the elevation of 60 deg.
Figure 11: The incident orientation of the radar microwave in tracking the spaceborne Luneburg Lens by a ground radar.
Figure 12: Comparison of the actual RCS measurement and the referenced RCS.
Figure 13: The radar RCS measurement errors (denoted by *) during the tracking procedure.
Figure 14: The radar RCS measurement errors at different incident angles.

23 pages, 7247 KiB  
Article
Positioning and Tracking of Multiple Humans Moving in Small Rooms Based on a One-Transmitter–Two-Receiver UWB Radar Configuration
by Jana Fortes, Michal Švingál, Tamás Porteleky, Patrik Jurík and Miloš Drutarovský
Sensors 2022, 22(14), 5228; https://doi.org/10.3390/s22145228 - 13 Jul 2022
Cited by 16 | Viewed by 3018
Abstract
The paper proposes a sequence of steps that allows multi-person tracking with a single UWB radar equipped with the minimal antenna array needed for trilateration. The localization accuracy of such a system is admittedly limited, but thoughtfully chosen antenna placement can increase the detectability of several humans moving in its immediate vicinity and, additionally, decrease the computational complexity of the signal processing methods. It is shown that a UWB radar measuring with a high rate and fine range resolution, in conjunction with properly tuned processing parameters, can continually track people even when their radar echoes cross or merge. Emphasis is placed on a simplified method for time-of-arrival (TOA) estimation and association and on a novel method for antenna height compensation. The performance of the proposed human tracking framework is evaluated in an experimental scenario with three people moving closely together in a small room. A quantitative analysis of the estimated target tracks confirms the benefits of the suggested high antenna placement and of the new signal processing methods, which decrease the mean localization error and increase the frequency of correct target position estimates. Full article
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
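To make the one-transmitter–two-receiver geometry and the height-compensation idea concrete, here is a minimal Python sketch: each TOA fixes a bistatic range around a Tx–Rx pair, and the target's (x, y) is sought on a horizontal plane at an assumed target height. The antenna coordinates, the target height, and the use of a generic least-squares solver are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical antenna layout (metres): one transmitter between two receivers,
# all mounted high on a wall, as in a minimal trilateration array.
TX  = np.array([0.0, 0.0, 2.5])
RX1 = np.array([-0.3, 0.0, 2.5])
RX2 = np.array([ 0.3, 0.0, 2.5])
TARGET_HEIGHT = 1.0   # assumed approximate height of the reflecting body centre
C = 299792458.0       # speed of light, m/s

def bistatic_range(p3d, rx):
    """Tx-target-Rx path length for a 3-D target position."""
    return np.linalg.norm(p3d - TX) + np.linalg.norm(p3d - rx)

def locate_xy(toa1, toa2):
    """Estimate the target (x, y) from two TOAs, fixing z at TARGET_HEIGHT
    to compensate for the high antenna placement."""
    d1, d2 = toa1 * C, toa2 * C
    def residuals(xy):
        p = np.array([xy[0], xy[1], TARGET_HEIGHT])
        return [bistatic_range(p, RX1) - d1, bistatic_range(p, RX2) - d2]
    return least_squares(residuals, x0=[0.0, 2.0]).x

# Synthetic check: generate TOAs for a known target and recover its position.
truth = np.array([1.2, 3.0, TARGET_HEIGHT])
toa1 = bistatic_range(truth, RX1) / C
toa2 = bistatic_range(truth, RX2) / C
print(locate_xy(toa1, toa2))   # approximately [1.2, 3.0]
```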
Figures
Figure 1. M-sequence UWB system: (a) the original m:explore; (b) a customized version of m:explore with antennas.
Figure 2. The experimental scenario: (a) scheme; (b) photos.
Figure 3. Comparison of outputs measured simultaneously at two antenna heights: (a) detector output at antenna height 1.3 m; (b) detector output at antenna height 2.5 m; (c) tracking output at antenna height 1.3 m; (d) tracking output at antenna height 2.5 m.
Figure 4. Proposed human tracking framework.
Figure 5. Comparison of outputs measured from antenna height 2.5 m processed by the original and the proposed TOA estimation methods: (a) trace connection output (original method); (b) TOA matching output (proposed method); (c) final tracks estimated from the trace connection output; (d) final tracks estimated from the TOA matching output.
Figure 6. Illustration of the different planes used for TOA estimation and for estimation of the final target coordinates.
Figure 7. The cross-section of the spheroid S, formed from the measured TOA around the Tx–Rx pair, and the plane β, situated at the approximated target height, displayed in the x–z plane.
Figure 8. Change in the localization accuracy of the x-coordinate, the y-coordinate and the target position P = [x, y] as a function of the input parameter pTargetHeight: (a) mean localization error [m]; (b) relative frequency of correct estimations [%].
Figure 9. 2D target localization based on the one-transmitter–two-receiver UWB radar configuration.
Figure 10. The radargrams from Rx1 and Rx2 with the raw radar data.
Figure 11. The radargrams from Rx1 and Rx2 with the subtracted background.
Figure 12. The radargrams from Rx1 and Rx2 with the detection output.
Figure 13. The radargrams from Rx1 and Rx2 with the estimated TOA.
Figure 14. Target localization in the x–y plane without height compensation (cyan circles) vs. with height compensation (magenta circles).
Figure 15. Target tracking in the x–y plane: TOA matching, no height compensation, antenna height 1.3 m (yellow tracks) vs. trace connection, height compensation, antenna height 2.5 m (black tracks) vs. TOA matching, no height compensation, antenna height 2.5 m (cyan tracks) vs. TOA matching, height compensation, antenna height 2.5 m (magenta tracks).
21 pages, 13115 KiB  
Article
Simulation and Analysis of an FMCW Radar against the UWB EMP Coupling Responses on the Wires
by Kaibai Chen, Shaohua Liu, Min Gao and Xiaodong Zhou
Sensors 2022, 22(12), 4641; https://doi.org/10.3390/s22124641 - 20 Jun 2022
Cited by 4 | Viewed by 3008
Abstract
An ultra-wideband electromagnetic pulse (UWB EMP) can couple into an FMCW radar system through metal wires, causing disturbance to or damage of electronic equipment. In this paper, a hybrid model is proposed to analyze the interference that UWB EMP coupling responses on the wires cause to the FMCW radar. First, a field simulation model of the radar is constructed and the wire coupling responses are calculated. Then, the responses are injected into an FMCW circuit model via data format modification. Finally, an FFT is used to identify the spectral peak of the intermediate frequency (IF) output signal, which corresponds to the radar's detection range. The simulation results show that the type of metal wire has the greatest influence on the amplitude of the coupling responses, and that the spectral peak of the IF output shifts to an incorrect frequency as the injection power increases. Applying the interference at the end of the circuit disturbs the radar's detection more effectively. The investigation provides a theoretical basis for the electromagnetic protection design of the radar. Full article
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
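The range readout of an FMCW radar follows from the beat (IF) frequency, R = c·f_IF·T/(2B); the sketch below reproduces that FFT-based peak search on a synthetic IF signal. The sweep bandwidth, sweep duration, and sampling rate are assumed values for illustration, not the parameters of the radar simulated in the paper.

```python
import numpy as np

# Hypothetical FMCW parameters.
c     = 299792458.0   # speed of light, m/s
B     = 150e6         # sweep bandwidth, Hz (assumed)
T     = 1e-3          # sweep duration, s (assumed)
fs    = 2e6           # IF sampling rate, Hz (assumed)
slope = B / T         # chirp slope, Hz/s

# Synthetic IF signal for a target at 30 m: beat frequency f_b = 2*R*slope/c.
R_true = 30.0
f_beat = 2 * R_true * slope / c
t = np.arange(int(fs * T)) / fs
if_signal = (np.cos(2 * np.pi * f_beat * t)
             + 0.1 * np.random.default_rng(1).standard_normal(t.size))

# FFT of the IF output; the spectral peak maps back to the detection range.
spectrum = np.abs(np.fft.rfft(if_signal * np.hanning(t.size)))
f_peak = np.fft.rfftfreq(t.size, d=1 / fs)[np.argmax(spectrum)]
R_est = c * f_peak / (2 * slope)
print(f"estimated range: {R_est:.1f} m")   # close to 30 m, within one FFT bin
```

An injected interference that dominates the IF spectrum would move the argmax to a different bin, which is the "wrong frequency" effect the abstract describes.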
Figures
Figure 1. The brief ranging principle of the FMCW radar.
Figure 2. The flowchart of the proposed hybrid model.
Figure 3. The external structure of the radar.
Figure 4. The internal structure of the radar.
Figure 5. The external structure of the field model.
Figure 6. The internal structure of the field model.
Figure 7. The incident direction of the UWB EMP.
Figure 8. The composition of the circuit model of the FMCW radar.
Figure 9. IF output spectrum.
Figure 10. Time-domain and frequency-domain waveforms of the UWB EMP: (a) time-domain waveform; (b) frequency-domain waveform.
Figure 11. The actual geometry of the wires: (a) a single wire; (b) a twisted wire; (c) a coaxial wire.
Figure 12. The structure of the electromagnetic field.
Figure 13. The responses of different categories of metal wire.
Figure 14. The responses of different lengths of a single wire.
Figure 15. Relationship of peak time and peak voltage with the wire length.
Figure 16. The responses of different radii of a single wire.
Figure 17. The geometry of the wires.
Figure 18. The responses of different curvatures of a single wire.
Figure 19. The geometry of the wires.
Figure 20. The responses of different numbers of wires.
Figure 21. The geometry of the wires.
Figure 22. The responses of different distances among the wires.
Figure 23. The schematic of the circuit analysis.
Figure 24. The injection interference law at node 5.
Figure 25. The injection interference law at node 6.
Figure 26. The injection interference law at nodes 5 and 6.
14 pages, 3648 KiB  
Article
Deep Learning-Based Device-Free Localization Scheme for Simultaneous Estimation of Indoor Location and Posture Using FMCW Radars
by Jeongpyo Lee, Kyungeun Park and Youngok Kim
Sensors 2022, 22(12), 4447; https://doi.org/10.3390/s22124447 - 12 Jun 2022
Cited by 9 | Viewed by 2596
Abstract
Indoor device-free localization (DFL) systems are used in various Internet-of-Things applications based on human behavior recognition. However, the use of intuitive camera-based DFL approaches is limited in dark environments and disaster situations, and camera-based schemes also raise privacy concerns. Therefore, DFL schemes with radars are increasingly being investigated owing to their robustness in dark environments and their ability to avoid privacy issues. This study proposes a deep learning-based DFL scheme for the simultaneous estimation of indoor location and posture using 24-GHz frequency-modulated continuous-wave (FMCW) radars. The proposed scheme uses a parallel 1D convolutional neural network structure with a regression model for localization and a classification model for posture estimation. The two-dimensional location of the target is estimated, and four different postures, namely standing, sitting, lying, and absence, are classified simultaneously. We experimentally evaluated the proposed scheme and compared its performance with that of conventional schemes under identical conditions. The results indicate that the average localization error of the proposed scheme is 0.23 m, whereas that of the conventional scheme is approximately 0.65 m. The average posture estimation error of the proposed scheme is approximately 1.7%, whereas those of the conventional correlation, CSP, and SVM schemes are 54.8%, 42%, and 10%, respectively. Full article
(This article belongs to the Special Issue Radar Sensors for Target Tracking and Localization)
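A parallel 1D-CNN with a regression head for location and a classification head for posture can be sketched as follows in Python (PyTorch); the layer widths, kernel sizes, and input dimensions are assumptions for illustration and do not reproduce the network described in the paper.

```python
import torch
import torch.nn as nn

class ParallelDFLNet(nn.Module):
    """Minimal sketch: two parallel 1-D CNN branches over stacked range profiles
    from two FMCW radars -- one regresses the 2-D location, the other classifies
    the posture (standing / sitting / lying / absence). Layer sizes are assumed."""
    def __init__(self, n_bins=256, n_radars=2, n_postures=4):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv1d(n_radars, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.loc_branch = branch()
        self.pose_branch = branch()
        self.loc_head = nn.Linear(32, 2)            # (x, y) regression
        self.pose_head = nn.Linear(32, n_postures)  # posture logits

    def forward(self, x):                # x: (batch, n_radars, n_bins)
        return self.loc_head(self.loc_branch(x)), self.pose_head(self.pose_branch(x))

# Smoke test with a random batch of range profiles.
model = ParallelDFLNet()
xy, pose_logits = model(torch.randn(8, 2, 256))
print(xy.shape, pose_logits.shape)       # torch.Size([8, 2]) torch.Size([8, 4])
```

In training, the two heads would typically be optimized with a mean-squared-error loss for the location output and a cross-entropy loss for the posture output.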
Figures
Figure 1. Time-dependent frequency shape of the FMCW radar.
Figure 2. Proposed simultaneous location and posture estimation system.
Figure 3. Experimental schematic of the two-dimensional (2D) location and posture estimation.
Figure 4. Convolutional neural network (CNN) for distance estimation.
Figure 5. Conventional 2D localization scheme based on the bilateration method.
Figure 6. Proposed parallel structure with one-dimensional (1D) CNNs.
Figure 7. Schematic of the experimental configuration using two frequency-modulated continuous-wave (FMCW) radars.
Figure 8. Three different postures of a target: (a) standing; (b) sitting; (c) lying; (d) the FMCW radar configuration.
Figure 9. Comparison of localization results obtained from the (a) conventional and (b) proposed schemes.