
Communication

Louvain-Based Traffic Object Detection for Roadside 4D Millimeter-Wave Radar

1 Department of Traffic Information and Control Engineering, Jilin University, No. 5988, Renmin Street, Changchun 130022, China
2 Jilin Engineering Research Center for Intelligent Transportation System, Changchun 130022, China
3 Department of Civil, Environmental and Construction Engineering, Texas Tech University, Lubbock, TX 79409, USA
* Author to whom correspondence should be addressed.
Remote Sens. 2024, 16(2), 366; https://doi.org/10.3390/rs16020366
Submission received: 16 November 2023 / Revised: 5 January 2024 / Accepted: 14 January 2024 / Published: 16 January 2024
(This article belongs to the Section Urban Remote Sensing)

Abstract

Object detection is the fundamental task of vision-based sensors in environmental perception and sensing. To leverage the full potential of roadside 4D MMW radars, an innovative traffic detection method is proposed based on their distinctive data characteristics. First, velocity-based filtering and region of interest (ROI) extraction were employed to filter the point data, and point cloud frames were merged to enhance the relationships between points. Then, the point cloud data were converted into a graph structure, the differences between points were amplified with a Gaussian kernel function, and the Louvain algorithm was used to partition the graph by maximizing modularity. Finally, a detection augmentation method is introduced to address the problems of over-clustering and under-clustering based on the object ID characteristics of 4D MMW radar data. The experimental results showed that the proposed method obtained the highest average precision and F1 score: 98.15% and 98.58%, respectively. In addition, the proposed method showcased the lowest over-clustering and under-clustering errors in various traffic scenarios compared with the other detection methods.

Graphical Abstract

1. Introduction

The resolution and integrity of collected traffic data directly impact the effectiveness of the insights derived from data analysis and computation. Typically, the richer and finer-grained the perceived traffic data, the more valuable the resulting insights become. Therefore, the demand for high-resolution, highly precise, trajectory-level traffic data has grown significantly in intelligent transportation system (ITS) applications, such as traffic accident prediction and prevention, vehicle-to-infrastructure cooperation, and mobility planning and decision making.
However, conventional traffic detectors have inherent limitations in terms of providing high-resolution and trajectory-level traffic data. Loop detectors are widely used in ITS, but cannot capture the traffic object’s trajectory and are prone to damage under heavy traffic conditions. Video cameras can offer trajectory data, but the accuracy is impeded by the challenges of varying illumination conditions and instances of occlusion [1]. Light detection and ranging (LiDAR) sensors have demonstrated the ability to detect and track road users at a high resolution and with lane-level accuracy through point cloud data processing [2]. Nonetheless, they also encounter obstacles such as occlusion and the requirement for continuous tracking within confined detection ranges. Furthermore, the robustness of LiDAR and cameras in adverse weather conditions, such as heavy rainfall, fog, and dust storms, remains a concern.
A promising solution emerges in the form of the 4D millimeter-wave (MMW) radar sensor. It offers a wealth of high-resolution 4D point cloud data, encompassing distance, azimuth, elevation, and Doppler velocity. In addition, it provides a more extended detection range and exhibits remarkable resilience against variations in light and adverse weather conditions [3]. The unique capabilities, coupled with its low cost, position the 4D MMW radar sensor as a promising candidate for widespread deployment in roadside units to provide high-resolution, trajectory-level traffic data for ITS applications.
Object detection is the fundamental task of vision-based sensors in environment perception and sensing. Existing research focuses on using deep learning frameworks to process onboard MMW radar data for autonomous vehicles, where the radar often works with LiDAR, camera, or GPS data to perceive the surrounding environment and identify potential danger [4,5,6]. However, the data processing procedure for roadside MMW radars differs from that for onboard MMW radars. A roadside MMW radar works independently to perceive all road users, such as vehicles, pedestrians, and bicyclists, within its detection range cost-effectively. Methods designed for onboard MMW radars cannot be employed directly on roadside MMW radars and cannot obtain the desired results. In addition, to the best of our knowledge, existing studies on object detection and tracking for roadside 4D MMW radar are limited. Therefore, to harness the full potential of roadside 4D MMW radar data, developing algorithms tailored to its distinctive characteristics is necessary.
Motivated by the objective of leveraging the 4D MMW radar point cloud data to extract high-resolution and lane-level traffic information, this paper proposes a Louvain-based traffic object detection method for roadside 4D MMW radar based on its data characteristics. First, velocity-based filtering and region of interest (ROI) extraction were employed to filter and associate point cloud data by merging the data frames to enhance the point relationship. Second, the point cloud data of the 4D MMW radar were converted into a graph structure using a similarity matrix by mapping the point distance with the Gaussian kernel function. The Doppler velocity of the point was used as the weight of the graph link. Then, the Louvain algorithm [7] was introduced to divide the graph as a function of point cloud clustering. Finally, a detection augmentation method was proposed to address the over-clustering and under-clustering problems raised by the Louvain-based clustering algorithm. As a unique characteristic of 4D MMW radar point cloud data, the associated object ID was also used to verify the clustering results.
The major contributions of this article can be considered as follows:
  • A Louvain-based traffic object detection method for roadside 4D MMW radar was proposed. To leverage the Louvain algorithm, the point cloud data were transformed into a graph structure; to the best of our knowledge, this is the first work to apply the Louvain algorithm to point cloud data processing.
  • A detection augmentation method was proposed to address the over-clustering and under-clustering problem, leveraging the unique attribute of 4D MMW radar point cloud data.
  • Louvain-based traffic object detection for 4D MMW radar point cloud data performed better than the state-of-the-art in precision and F1 score, with the lowest errors in over-clustering and under-clustering.
The remainder of this paper is organized as follows: Section 2 reviews related object detection methods using different sensors. Section 3 introduces the 4D MMW radar and its data. Section 4 presents the proposed method for traffic object detection for the roadside 4D MMW radar. In Section 5, experiments are conducted to illustrate the performance of the proposed method. Section 6 concludes the paper with a summary and prospects for possible future works.

2. Related Work

In the realm of environmental sensing, modern visual sensors such as cameras, LiDAR, and MMW sensors have gained prominence. These sensors are widely installed on vehicles and in infrastructure units to perceive the surrounding environment and extract traffic flow information. To draw a clear distinction between this study and previous research, the literature on camera-based, LiDAR-based, and MMW-based object detection is reviewed below. Multi-sensor fusion for object detection is also introduced in this section.

2.1. Camera-Based Object Detection

Researchers have extensively investigated object detection using camera-based sensors, a field with a wealth of mature technologies and extensive research outcomes. Deep learning has become the mainstream camera-based object detection technology, exemplified by the R-CNN series [8,9] and YOLO series [10] methods, which are recognized for their capacity to attain high accuracy and real-time performance in object detection tasks. However, adverse weather conditions and illumination environments affect the robustness and reliability of camera-based detection methods. In addition, camera-based detection using deep learning requires additional resources for data annotation and training [11].

2.2. LiDAR-Based Object Detection

Compared to cameras, LiDAR offers resistance to lighting fluctuations, high resolution, and rich information [12]. The utilization of LiDAR sensors spans diverse applications, encompassing object detection and classification [13,14], lane marking detection [15,16], and stained and worn pavement marking identification [17,18]. Deep learning frameworks are also widely used to process point cloud data directly. For instance, Qi [19] designed a novel neural network architecture, PointNet, to process point cloud data directly. Zhou [20] introduced VoxelNet, which eliminates the need for manual feature extraction by segmenting point clouds into evenly spaced 3D voxels. Machine learning methodologies also hold promise for LiDAR-based object detection. Lin [21] employed the L-shape fitting method to derive more accurate bounding boxes, facilitating object feature extraction. Despite its resilience to varying lighting conditions, LiDAR's performance can be impacted by extreme weather conditions and occlusion in dense traffic flow. Additionally, the expense associated with LiDAR systems remains a deterrent, although efforts have been made to extend the detection range of cost-effective, low-channel LiDAR [22].

2.3. MMW Radar-Based Object Detection

The conventional MMW radar has been widely used for object detection in various applications, motivating extensive research on enhancing its detection accuracy. Clustering, often employed as an unsupervised learning technique, is a cornerstone of point cloud data processing in object detection. Research has focused on improving the performance of the density-based spatial clustering of applications with noise (DBSCAN) algorithm and combining it with other data aggregation algorithms to enhance object detection accuracy and speed [23,24]. Typically, Wang [25] applied DBSCAN to merged data and introduced frame order features to mitigate multipath noise and distinguish target points from noise. Xie [26] utilized a multi-frame merging strategy to bolster single-frame clustering accuracy while leveraging frame sequence attributes to address multi-target noise. However, traditional MMW radar has several limitations, including its incapacity to capture height information [24], its inability to detect stationary objects, and its limited resolution. These constraints render it ill-suited for autonomous driving and traffic perception.
The emerging 4D MMW radar not only preserves the benefits of conventional MMW radar, but also effectively addresses several of its limitations, showcasing promising potential for robust and high-resolution object detection. Existing investigations into object detection with 4D MMW radar predominantly focus on autonomous driving. For instance, Lutz [27] employed supervised machine learning techniques to identify noisy points within 4D radar point cloud data. Jin [28] utilized a Gaussian mixture model (GMM) to detect and classify pedestrians and vehicles using 4D MMW radar data. Tan [29] proposed a deep learning framework for object detection that leverages 4D MMW radar multi-frame point cloud data and applies velocity compensation in point cloud data processing; the method serves as a baseline for object detection with onboard 4D MMW radar. Liu [30] introduced an object detection method named SMURF to address the problem of sparse and noisy radar point cloud data by utilizing various representations, such as pillarization and density features obtained through kernel density estimation.
It is imperative to acknowledge that the characteristics of onboard 4D MMW radar data diverge from those of roadside 4D MMW radar. A frame of onboard radar data can often yield vehicle contours but contains only a few vehicles; in contrast, a frame of roadside radar data can contain a large number of vehicles but cannot yield the vehicles' contours. The previously mentioned algorithms may therefore not yield the anticipated outcomes when applied to roadside 4D MMW radar point cloud data, and developing dedicated object detection algorithms tailored for roadside 4D MMW radar is essential for future ITS applications.

2.4. Fusion-Based Object Detection

In light of the inherent limitations of individual sensors, sensor fusion has emerged as a pivotal avenue of exploration within the domain of traffic perception. Researchers have worked to combine diverse sensor types to amplify the overall performance of object detection.
Current research on 4D MMW radar and other sensor fusions for object detection focuses on fusion with camera and LiDAR. For 4D MMW radar and camera fusion, Cui [5] proposed a convolutional neural network (CNN) and cross-fusion strategy which leverages dual 4D MMW radars with a monovision camera. Zheng [31] proposed an RCFusion model integrating camera and 4D radar data to enhance object detection accuracy and robustness for autonomous vehicles. Xiong [32] proposed a “LiDAR Excluded Lean (LXL)” model that utilizes maps of predicted depth distribution in images and 3D occupancy grids based on radar data by leveraging the “sampling” view transformation strategy to enhance detection accuracy.
For 4D MMW radar and LiDAR fusion, Wang [33] introduced an interaction-based fusion framework leveraging the self-attention mechanism to aggregate features from radar and LiDAR. However, the different characteristics and noise distributions of radar and LiDAR point cloud data affect the detection performance when directly integrating radar and LiDAR data. Therefore, Wang [34] employed a Gaussian distribution model in the M2-Fusion framework to mitigate the variations in point cloud distribution. Chen [35] proposed an end-to-end multi-sensor fusion framework called FUTR3D, which can adapt to most existing sensors in object tracking.
However, the large-scale deployment of roadside sensors for ITS and smart city applications remains challenging due to the significantly increased investment cost of installing multiple sensors at a single site. In addition, integrating a 4D MMW radar with other sensors may inadvertently curtail the effective detection range, as the detection range of a 4D MMW radar far exceeds that of other sensors. Therefore, this paper is dedicated to improving the detection accuracy of an individual roadside 4D MMW radar by utilizing its inherent point cloud data.

3. 4D MMW Radar Sensor and Its Data

The 4D MMW radar point cloud data were extracted from the Continental ARS 548 RDI long-range radar sensor. The sensor’s inherent parameters are shown in Table 1.
The 4D MMW radar offers two distinct output modes: detection and object. In object mode, the sensor outputs point objects akin to a traditional MMW radar. The detection mode yields point cloud data similar to a LiDAR sensor's. However, the reference objects are limited in number and can be inaccurate, as shown in Figure 1.
The point cloud data encompass five primary attributes: R (range), θ (azimuth), φ (elevation), V (Doppler velocity), and RCS (radar cross section).
To facilitate the algorithm’s application, the polar coordinates of the point cloud data were converted into Cartesian coordinates:
x = R\cos\varphi\sin\theta, \quad y = R\cos\varphi\cos\theta, \quad z = R\sin\varphi
The converted Cartesian coordinates of the radar are shown in Figure 2.
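For illustration, a minimal Python sketch of this conversion is given below; the function name is ours, and the angles are assumed to be supplied in radians:

```python
import numpy as np

def polar_to_cartesian(r, azimuth, elevation):
    # x = R cos(phi) sin(theta), y = R cos(phi) cos(theta), z = R sin(phi)
    x = r * np.cos(elevation) * np.sin(azimuth)
    y = r * np.cos(elevation) * np.cos(azimuth)
    z = r * np.sin(elevation)
    return np.stack([x, y, z], axis=-1)

# Example: a point 100 m away at 10 deg azimuth and 2 deg elevation.
xyz = polar_to_cartesian(100.0, np.radians(10.0), np.radians(2.0))
```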
The point cloud data were also associated with additional auxiliary attributes, as listed in Table 2.

4. Method

The proposed method can be divided into two modules: point cloud data preprocessing and Louvain-based point cloud clustering, as shown in Figure 3. The point cloud data preprocessing module includes multi-frame aggregation and background filtering. The Louvain-based point cloud clustering module first converted the point cloud into a graph structure. Then, the Louvain algorithm was employed to partition the graph into communities by maximizing modularity, thereby clustering the point cloud data. Finally, a detection augmentation method was proposed to address the over-clustering and under-clustering problems of the Louvain algorithm and to enhance clustering accuracy. Each output cluster was enclosed in a box that roughly identifies the vehicle's 2D outline.

4.1. Point Cloud Data Preprocessing

A raw single frame of point cloud data extracted from the 4D MMW radar includes a limited number of points. To analyze the distribution characteristics and extract more information from 4D MMW radar data, merging multi-frame point cloud data has demonstrated its effectiveness [22,23]. Therefore, every five raw point cloud frames were merged into a single aggregated frame for the subsequent data processing tasks. Experimental observation showed that multi-frame consolidation makes noisy points easy to distinguish and enhances traffic object features.
The 4D MMW radar point cloud data also encompass background points, such as points on trees and buildings. To improve traffic object identification accuracy and reduce the computational load, background point filtering is the primary task in 4D MMW radar point cloud data preprocessing. Therefore, a combined region of interest (ROI) extraction and velocity-based filtering method was employed to exclude the background points as follows:
Step 1: A Cartesian coordinate system was established with the radar deployment location as the origin point.
Step 2: The ROI was defined by constraining the 3D coordinates. Any points falling outside the selected range were identified as background points and excluded.
Step 3: If the velocity of a radar point was lower than a set threshold, the point was identified as noise and filtered out.
After the above processing, the points attributed to actual objects evolve into concise trajectories, as shown in Figure 4.
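A minimal sketch of this preprocessing pipeline is given below, assuming each frame is an (N, 5) array with columns [x, y, z, v, rcs]; the column layout, ROI bounds, and velocity threshold value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def preprocess(frames, roi, v_min=0.5):
    """Merge consecutive frames, then apply ROI and velocity filtering.

    frames: list of (N_i, 5) arrays with columns [x, y, z, v, rcs] (assumed layout)
    roi:    ((x_min, x_max), (y_min, y_max), (z_min, z_max)) in meters
    v_min:  Doppler-velocity threshold; slower points are treated as noise
    """
    pts = np.vstack(frames)  # multi-frame aggregation (e.g., five frames)
    (x0, x1), (y0, y1), (z0, z1) = roi
    in_roi = ((pts[:, 0] >= x0) & (pts[:, 0] <= x1) &
              (pts[:, 1] >= y0) & (pts[:, 1] <= y1) &
              (pts[:, 2] >= z0) & (pts[:, 2] <= z1))   # Step 2: ROI extraction
    moving = np.abs(pts[:, 3]) >= v_min                # Step 3: velocity filter
    return pts[in_roi & moving]
```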

4.2. Traffic Object Detection

The point cloud data extracted from the 4D MMW radar are sparse. However, the data are associated with velocity and auxiliary information. A two-stage traffic object detection method, encompassing Louvain-based point cloud clustering and detection augmentation, was proposed for roadside 4D MMW radar to improve identification accuracy and robustness.

4.2.1. Louvain-Based Point Cloud Clustering

The Louvain algorithm is specifically designed to identify communities, or groups of nodes, within a network or graph by leveraging their connections and interactions. The modularity of the graph can be used to divide it into different sections, which in turn reveals the spatial relationships within the point cloud data. Therefore, the Louvain algorithm was used to cluster the 4D MMW radar point cloud data as follows:
Step 1: graph link generation. Louvain is a graph-based algorithm, so the initial step is creating the graph's nodes; for point cloud data, the nodes are derived directly from the points. The primary task is then establishing connections between the points, which form the links (edges) of the graph structure. The distances between the points can be described as:
M_d = \begin{bmatrix} d_{11} & d_{12} & \cdots & d_{1n} \\ d_{21} & d_{22} & \cdots & d_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ d_{n1} & d_{n2} & \cdots & d_{nn} \end{bmatrix}
where $d_{ij}$ is the distance between points $i$ and $j$ in the XY plane.
The matrix $M_d$ is symmetric, i.e., $d_{ij} = d_{ji}$. The Z-coordinate was not used in the distance computation because millimeter waves may generate echoes at different vehicle locations, so points of varying heights can arise from the same object.
To amplify the differences and showcase the connections between the points, the Gaussian kernel function was used to transform the distance matrix $M_d$ into a similarity matrix:
K(x, x') = \exp\left( -\frac{\| x - x' \|^2}{2\sigma^2} \right)
where $\| x - x' \|$ is the distance between the data point and the reference point, and $\sigma$ is a hyperparameter that controls the influence range of the kernel function.
The values of the mapped similarity matrix lie between 0 and 1, with larger values indicating more similar nodes. Nodes were then linked if their similarity exceeded a set threshold; in the experiments, the threshold was tuned between 0.4 and 0.6 to obtain the best results.
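As a sketch of Step 1, the pairwise XY distances can be mapped through the Gaussian kernel and thresholded into graph links. The threshold of 0.5 falls in the paper's reported 0.4–0.6 range, while the value of sigma shown here is an assumption:

```python
import numpy as np

def build_similarity_graph(xy, sigma=2.0, threshold=0.5):
    """Map XY point distances to similarities and threshold them into links."""
    diff = xy[:, None, :] - xy[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)                # squared pairwise distances
    similarity = np.exp(-d2 / (2.0 * sigma ** 2))  # Gaussian kernel mapping
    adjacency = similarity > threshold             # link nodes above threshold
    np.fill_diagonal(adjacency, False)             # no self-loops
    return similarity, adjacency
```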
Step 2: link weight formulation. Edge weights are usually used to indicate the strength of the links between nodes. For the 4D MMW radar, points associated with the same object should share the same motion properties. Therefore, the Doppler velocity differences between points were calculated to form a weight matrix, which was also mapped with the Gaussian kernel function. The resulting graph structure thus combines the points' positional and motion information for subsequent graph-based processing.
Step 3: Louvain-based point clustering. Modularity is the essential concept of the Louvain algorithm for processing graph structure, which was used to explore the potential relationships of graph components. The modularity is defined as:
Q = \frac{1}{2m} \sum_{ij} \left[ A_{i,j} - \frac{k_i k_j}{2m} \right] \delta(c_i, c_j)
where $A_{i,j}$ is the weight of the edge between nodes $i$ and $j$; $m = \sum_{ij} A_{i,j}$ represents the total weight of the edges in the graph; $k_i = \sum_j A_{i,j}$ is the sum of the weights of the links attached to node $i$; and $\delta(c_i, c_j)$ indicates whether nodes $i$ and $j$ belong to the same community, with $c_i$ denoting the community of node $i$.
Modularity is a measure that quantifies how well a network can be divided into distinct communities. The algorithm starts with each node belonging to its own community and then merges communities to maximize the modularity. This process continues iteratively until no further improvement in modularity can be achieved. In essence, the Louvain algorithm aims to find a partition of the network into communities, such that the connections within communities are dense and the connections between communities are sparse.
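A minimal sketch of Steps 2–3 using NetworkX's built-in Louvain implementation (available since NetworkX 2.8) is shown below. The paper does not specify its implementation, so this merely stands in for the clustering step, with the edge weights assumed to come from the Gaussian-mapped Doppler-velocity differences of Step 2:

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

def louvain_cluster(adjacency, weights, seed=0):
    """Cluster points via Louvain community detection on the weighted graph.

    adjacency: (N, N) boolean matrix from the similarity threshold
    weights:   (N, N) edge weights, e.g., Gaussian-mapped velocity differences
    Returns a list of point-index sets, one per detected community (cluster).
    """
    g = nx.Graph()
    g.add_nodes_from(range(len(adjacency)))
    rows, cols = np.nonzero(np.triu(adjacency, k=1))  # upper triangle only
    g.add_weighted_edges_from(
        (int(i), int(j), float(weights[i, j])) for i, j in zip(rows, cols))
    return louvain_communities(g, weight="weight", seed=seed)
```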

4.2.2. Object Detection Augmentation

As a clustering algorithm, Louvain-based point cloud clustering for roadside 4D MMW radar data is not free from two types of errors: segmenting points belonging to the same object into micro-groups (error 1: over-clustering) and merging points from distinct objects into a single group (error 2: under-clustering). To improve robustness and reliability, both problems should be addressed in the traffic object detection process. The object ID associated with each point is a unique characteristic of 4D MMW radar point cloud data and can be used to verify the clustering results. Therefore, an object detection augmentation method was proposed to solve errors 1 and 2, as shown in Figure 5.
For error 1, the over-clustered groups need to be merged into one cluster; for error 2, the under-clustered group needs to be divided into different clusters. To facilitate a detailed description of the algorithm, a point is defined as $P = (x, y, AO)^T$, where $x$ and $y$ are the point coordinates and $AO$ is the associated object ID. A cluster is defined as $C_n = \{P_i\},\ i = 1, 2, 3, \ldots$
The over-clustering problem was addressed as follows:
Step 1: The most frequent associated objects for each cluster were counted and stored in a list.
Step 2: We checked for non-zero duplicate elements in the list and identified their positions.
Step 3: The difference $\Delta x$ between the average x-coordinates of the points in the clusters sharing the same associated object was calculated as:
\Delta x = \left| \frac{1}{n_1} \sum_{i=1}^{n_1} P_x^i - \frac{1}{n_2} \sum_{j=1}^{n_2} P_x^j \right|, \quad P_x^i \in C_{n_1},\ P_x^j \in C_{n_2}
where $C_{n_1}$ and $C_{n_2}$ are the clusters with the same associated object.
If $\Delta x$ is less than the threshold $T_1 = 1$, the clusters are merged. The threshold $T_1$ was set to 1 to avoid merging objects that share the same ID but lie in neighboring lanes.
The under-clustering problem was solved as follows:
An object with more than five points was defined as a valid object, because five frames of raw point cloud data were aggregated in the preprocessing module. If there is more than one valid object ID within a cluster and $\Delta x' \geq 1$ or $L > L_{max}$, the cluster is defined as under-clustered and is re-clustered, where $\Delta x'$ is the difference between the clusters' average x-coordinates (the threshold of 1 again prevents incorrectly merging objects in adjacent lanes), $L$ is the length of the cluster, and $L_{max}$ is the maximum trajectory length.
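A minimal sketch of the over-clustering repair (error 1) under these rules is given below; the Point record, the zero value standing for "no associated object", and the one-pass merge strategy are our illustrative assumptions rather than the paper's exact procedure:

```python
from collections import Counter, namedtuple
import numpy as np

Point = namedtuple("Point", ["x", "y", "ao"])  # ao: associated-object ID (0 = none, assumed)
T1 = 1.0  # x-coordinate merge threshold, matching the threshold above

def dominant_object(cluster):
    """Return the most frequent associated-object ID among a cluster's points."""
    return Counter(p.ao for p in cluster).most_common(1)[0][0]

def merge_over_clustered(clusters):
    """Merge clusters sharing a non-zero dominant object ID whose average
    x-coordinates differ by less than T1 (error 1 repair)."""
    merged, seen = [], {}
    for cluster in clusters:
        oid = dominant_object(cluster)
        mean_x = float(np.mean([p.x for p in cluster]))
        if oid != 0 and oid in seen and abs(mean_x - seen[oid][1]) < T1:
            seen[oid][0].extend(cluster)  # same object, same lane: merge
        else:
            group = list(cluster)
            merged.append(group)
            seen[oid] = (group, mean_x)
    return merged
```

The under-clustering check (error 2) would analogously split a cluster that contains more than one valid object ID when the x-coordinate or trajectory-length conditions above are violated.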

5. Experiments and Validation

5.1. Data Collection

In the experiments, the roadside 4D MMW radar was deployed on two pedestrian overpasses, Ziyou Road (site I) and Yatai Expressway (site II), at a deployment height of 6.0 m. For comparison across different traffic scenarios, the 4D MMW radar was also mounted on a 4.2 m tripod on the roadside of the Yatai Expressway (site III). In addition, an 8K (7680 × 4320 px) video recorded by a cellphone placed at the same position as the roadside 4D MMW radar was used as the ground truth of the traffic environment. The y-axis of the radar coordinate system was parallel to the road at all sites. The 4D MMW radar sensor locations and the geographic information of the sites are shown in Figure 6.
The proposed method and comparison algorithms were tested in Python and executed on a laptop equipped with Intel(R) Core(TM) i7-8750H CPU @ 2.20 GHz, NVIDIA GeForce GTX 1050 Ti, and 32 GB of RAM.

5.2. Evaluation

5.2.1. Detection Evaluation

To compute and evaluate the detection results more easily, a rough 2D outline was generated for each cluster based on the distribution of its points, as shown in Figure 7.
To verify the effectiveness of the proposed method, DBSCAN with Mahalanobis distance [23], improved DBSCAN [2], and the Leiden algorithm [36] were used for comparison under the same metrics: precision, recall, and F1 score. The performances of the different object clustering methods on 4D MMW radar point cloud data are shown in Table 3.
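For reference, the reported metrics follow the standard definitions of precision, recall, and F1 score; a small sketch with illustrative counts (not taken from the paper's data):

```python
def detection_metrics(tp, fp, fn):
    """Standard precision, recall, and F1 score from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative example: 980 true positives, 20 false positives, 10 misses.
p, r, f1 = detection_metrics(tp=980, fp=20, fn=10)  # ~0.980, ~0.990, ~0.985
```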
The results show that the proposed method exhibited a higher degree of precision and a higher F1 score, and showcased minimal errors. The recall of the proposed method was the same as the best performance. In addition, the proposed method had the same advantage as the DBSCAN algorithm without predetermining the number of clusters.
In the field, traffic signs are mainly made of metal, and millimeter waves are strongly reflected by metal. Points belonging to a traffic object can therefore be displaced by reflections off the signs, a phenomenon known as the multipath effect. At sites II and III, the recalls were lower than at site I due to the traffic signs at those sites; all of the clustering algorithms were affected by this multipath effect of the 4D MMW radar.
Leiden is also a Louvain-based clustering algorithm. However, its performance was unsteady across scenarios: it performed better than DBSCAN at sites II and III, but not at site I. In addition, error 1 occurred more frequently with the Leiden algorithm than with the proposed method.

5.2.2. Ablation Experiments

Ablation experiments were performed to evaluate the module of the proposed method, leveraging the same point cloud data as in the previous section. The results are shown in Table 4.
The effect of each single module of the proposed method was unstable across sites. The three evaluation metrics using only the Louvain algorithm were better than those using only detection augmentation at all three sites, although errors 1 and 2 occurred less frequently with detection augmentation at site I than with Louvain alone. When the distance between two objects is small or their speeds are close, the points are more easily assigned to the same object, which makes the detection augmentation method less effective on its own in these situations. However, when it is combined with the Louvain algorithm, the results are far better than those of either module running individually.

5.2.3. Evaluation of the Performance in Distance

The point cloud density of 4D MMW radar and LiDAR decreases as the distance increases [22]. The variation in the point cloud was measured using the average number of points within the object at different distances, as shown in Figure 8.
To evaluate the performance of the proposed method at different distances, the detected distance was divided into five areas: 50 to 100 m, 100 to 150 m, 150 to 200 m, 200 to 250 m, and 250 to 300 m. The precision of the proposed method in different areas was calculated, as shown in Table 5.
It can be seen that the accuracy of object detection was not correlated with the decay in the number of points. At Site I, the highest precision occurred within 50~100 m, but the highest precision occurred within 250~300 m at Site II and within 200~250 m at Site III. This result demonstrates that the attenuation of the 4D MMW radar point cloud density did not affect object detection performance. The object detection range of the radar matched the sensor detection range, except for the 0~50 m area, which fell in a detection blind spot due to the deployment height. This distance-irrelevant characteristic of the 4D MMW radar can be considered an advantage in traffic object detection.

6. Conclusions

This paper introduces a novel approach to traffic object detection that leverages the capabilities of advanced 4D MMW radar sensors. The method is designed as a multi-step process to enhance the accuracy and practicality of object detection. The initial focus is refining the raw point cloud data obtained from the radar sensor. This refinement combines ROI extraction and velocity-based filtering, steps that are crucial to simplifying the dataset by isolating relevant information while filtering out extraneous noise.
The core of our method is converting the point cloud data into graph data, with the similarity and velocity matrices calculated and used as the basis of this conversion. This conversion offers a new approach to processing low-density point cloud data and effectively integrates the multidimensional features of 4D MMW radar point clouds.
A two-stage method based on the Louvain algorithm was proposed to elevate the precision of traffic object detection. Its second stage acts as a corrective mechanism that addresses irregularities arising during the grouping process. By refining the groups and correcting potential errors, it fine-tunes the detection results, ultimately contributing to the overall reliability and effectiveness of our methodology. Collectively, these steps constitute a comprehensive approach to enhancing object detection with 4D MMW radar sensors, with potential for significant impact in various real-world applications.
The field experiments were assessed across three diverse sites, revealing impressive precision at different sites with 97.73%, 99.77%, and 96.95%, respectively. However, the proposed method is still not free from limitations:
(1) Occlusions and larger vehicles: One of the challenges that still affects detection accuracy is occlusion, especially when larger vehicles obstruct the emitted millimeter waves. This challenge leads to an increased likelihood of missed detections. Future research should focus on developing strategies to mitigate the impact of occlusions on detection accuracy, thereby enhancing the robustness of the proposed method.
(2) Intersection detection: Our study's scope excluded intersection sites due to the current limitations of 4D radar detection capabilities. However, ongoing advancements in radar technology could potentially enable the detection of traffic objects at intersections. Future research should investigate the potential of extending the proposed method to intersection scenarios, which could open up new and exciting possibilities for broader applications.
Future research directions encompass several pivotal areas:
(1) Sensor adaptability: To assess the adaptability and robustness of the proposed method, future work should involve testing it with a variety of 4D MMW radar sensors. This approach will help to determine the method's effectiveness across different sensor configurations and specifications.
(2) Pedestrian detection: Exploring pedestrian detection using 4D MMW radar is a compelling avenue within traffic safety. Developing algorithms that accurately detect and track pedestrians using radar data can significantly enhance road safety.
(3) Object-tracking algorithms: Acknowledging that detection is the foundational step in traffic perception, future research should focus on developing dedicated object-tracking algorithms tailored explicitly for 4D MMW radar. Accurate tracking of detected objects over time can provide valuable insights for traffic management and control systems.
In conclusion, while the proposed method has demonstrated promising results, there remain challenges to address and exciting research directions to pursue. By overcoming limitations and delving into the suggested future avenues, the capabilities of 4D MMW radar in object detection and traffic perception applications can be further studied.

Author Contributions

The authors confirm their contributions to this paper as follows: study conception and design: B.G. and C.L.; model design and implementation: J.S. and G.S.; analysis and interpretation of results: H.L., G.S. and C.L.; draft manuscript preparation: B.G., J.S. and C.L.; funding acquisition: B.G. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the Scientific Research Project of the Education Department of Jilin Province (Grant No. JJKH20221020KJ) and the Qingdao Social Science Planning Research Project (Grant No. 2022-389).

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wu, Y.-J.; Lian, F.-L.; Chang, T.-H. Traffic monitoring and vehicle tracking using roadside cameras. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, Taipei, Taiwan, 8–11 October 2006; IEEE: Piscataway, NJ, USA, 2006; pp. 4631–4636. [Google Scholar]
  2. Zhao, J.; Xu, H.; Liu, H.; Wu, J.; Zheng, Y.; Wu, D. Detection and tracking of pedestrians and vehicles using roadside LiDAR sensors. Transp. Res. Part C Emerg. Technol. 2019, 100, 68–87. [Google Scholar] [CrossRef]
  3. Zheng, L.; Ma, Z.; Zhu, X.; Tan, B.; Li, S.; Long, K.; Sun, W.; Chen, S.; Zhang, L.; Wan, M.; et al. TJ4DRadSet: A 4D Radar Dataset for Autonomous Driving. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, 8–12 October 2022; pp. 493–498. [Google Scholar]
  4. Xu, B.; Zhang, X.; Wang, L.; Hu, X.; Li, Z.; Pan, S.; Li, J.; Deng, Y. RPFA-Net: A 4D RaDAR Pillar Feature Attention Network for 3D Object Detection. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 3061–3066. [Google Scholar]
  5. Cui, H.; Wu, J.; Zhang, J.; Chowdhary, G.; Norris, W.R. 3D Detection and Tracking for On-road Vehicles with a Monovision Camera and Dual Low-cost 4D mmWave Radars. In Proceedings of the 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, IN, USA, 19–22 September 2021; pp. 2931–2937. [Google Scholar]
  6. Meyer, M.; Kuschk, G. Automotive Radar Dataset for Deep Learning Based 3D Object Detection. In Proceedings of the 16th European Radar Conference (EuRAD)/European Microwave Week, Paris, France, 2–4 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 129–132. [Google Scholar]
  7. Blondel, V.D.; Guillaume, J.-L.; Lambiotte, R.; Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008, 2008, P10008. [Google Scholar] [CrossRef]
  8. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Region-Based Convolutional Networks for Accurate Object Detection and Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 142–158. [Google Scholar] [CrossRef] [PubMed]
  9. Girshick, R. Fast R-CNN. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Washington, DC, USA, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
  10. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 27–30 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 779–788. [Google Scholar]
  11. Mukhtar, A.; Xia, L.; Tang, T.B. Vehicle Detection Techniques for Collision Avoidance Systems: A Review. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2318–2338. [Google Scholar] [CrossRef]
  12. Lin, C.; Zhang, H.; Gong, B.; Wu, D.; Wang, Y.-J. Density variation-based background filtering algorithm for low-channel roadside lidar data. Opt. Laser Technol. 2023, 158, 108852. [Google Scholar] [CrossRef]
  13. Lin, C.; Zhang, S.; Gong, B.; Liu, H.; Sun, G. Identification and Tracking of Takeout Delivery Motorcycles Using Low-Channel Roadside LiDAR. IEEE Sens. J. 2023, 23, 9786–9795. [Google Scholar] [CrossRef]
  14. Liu, H.; Lin, C.; Wu, D.; Gong, B. Slice-Based Instance and Semantic Segmentation for Low-Channel Roadside LiDAR Data. Remote Sens. 2020, 12, 3830. [Google Scholar] [CrossRef]
  15. Lin, C.; Guo, Y.; Li, W.; Liu, H.; Wu, D. An Automatic Lane Marking Detection Method with Low-Density Roadside LiDAR Data. IEEE Sens. J. 2021, 21, 10029–10038. [Google Scholar] [CrossRef]
  16. Gong, B.; Zhao, B.; Wang, Y.; Lin, C.; Liu, H. Lane Marking Detection Using Low-Channel Roadside LiDAR. IEEE Sens. J. 2023, 23, 14640–14649. [Google Scholar] [CrossRef]
  17. Lin, C.; Sun, G.; Tan, L.; Gong, B.; Wu, D. Mobile LiDAR Deployment Optimization: Towards Application for Pavement Marking Stained and Worn Detection. IEEE Sens. J. 2022, 22, 3270–3280. [Google Scholar] [CrossRef]
  18. Lin, Y.-C.; Manish, R.; Bullock, D.; Habib, A. Comparative Analysis of Different Mobile LiDAR Mapping Systems for Ditch Line Characterization. Remote Sens. 2021, 13, 2485. [Google Scholar] [CrossRef]
  19. Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–27 July 2017; pp. 77–85. [Google Scholar]
  20. Zhou, Y.; Tuzel, O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499. [Google Scholar]
  21. Lin, C.; Wang, Y.; Gong, B.; Liu, H. Vehicle detection and tracking using low-channel roadside LiDAR. Measurement 2023, 218, 113159. [Google Scholar] [CrossRef]
  22. Liu, H.; Lin, C.; Gong, B.; Wu, D. Extending the Detection Range for Low-Channel Roadside LiDAR by Static Background Construction. IEEE Trans. Geosci. Remote Sens. 2022, 60, 1–12. [Google Scholar] [CrossRef]
  23. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In KDD; AAAI: Washington, DC, USA, 1996. [Google Scholar]
  24. Tan, B.; Ma, Z.; Zhu, X.; Li, S.; Zheng, L.; Huang, L.; Bai, J. Tracking of Multiple Static and Dynamic Targets for 4D Automotive Millimeter-Wave Radar Point Cloud in Urban Environments. Remote Sens. 2023, 15, 2923. [Google Scholar] [CrossRef]
  25. Wang, M.; Wang, F.; Liu, C.; Ai, M.; Yan, G.; Fu, Q. DBSCAN Clustering Algorithm of Millimeter Wave Radar Based on Multi Frame Joint. In Proceedings of the 2022 4th International Conference on Intelligent Control, Measurement and Signal Processing (ICMSP), Hangzhou, China, 8–10 July 2022; pp. 1049–1053. [Google Scholar]
  26. Xie, S.; Wang, C.; Yang, X.; Wan, Y.; Zeng, T.; Liu, Z. Millimeter-Wave Radar Target Detection Based on Inter-frame DBSCAN Clustering. In Proceedings of the 2022 IEEE 22nd International Conference on Communication Technology (ICCT), Nanjing, China, 11–14 November 2022; pp. 1703–1708. [Google Scholar]
  27. Lutz, M.; Biswal, M. Supervised Noise Reduction for Clustering on Automotive 4D Radar. In Proceedings of the 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Orlando, FL, USA, 5–7 December 2021; pp. 1–7. [Google Scholar]
  28. Jin, F.; Sengupta, A.; Cao, S.; Wu, Y.-J. MmWave Radar Point Cloud Segmentation using GMM in Multimodal Traffic Monitoring. In Proceedings of the IEEE International Radar Conference (RADAR), 28–30 April 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 732–737. [Google Scholar]
  29. Tan, B.; Ma, Z.; Zhu, X.; Li, S.; Zheng, L.; Chen, S.; Huang, L.; Bai, J. 3-D Object Detection for Multiframe 4-D Automotive Millimeter-Wave Radar Point Cloud. IEEE Sens. J. 2023, 23, 11125–11138. [Google Scholar] [CrossRef]
  30. Liu, J.; Zhao, Q.; Xiong, W.; Huang, T.; Han, Q.-L.; Zhu, B. SMURF: Spatial Multi-Representation Fusion for 3D Object Detection with 4D Imaging Radar. IEEE Trans. Intell. Veh. 2023, 1–14. [Google Scholar] [CrossRef]
  31. Zheng, L.; Li, S.; Tan, B.; Yang, L.; Chen, S.; Huang, L.; Bai, J.; Zhu, X.; Ma, Z. RCFusion: Fusing 4-D Radar and Camera with Bird’s-Eye View Features for 3-D Object Detection. IEEE Trans. Instrum. Meas. 2023, 72, 8503814. [Google Scholar] [CrossRef]
  32. Xiong, W.; Liu, J.; Huang, T.; Han, Q.-L.; Xia, Y.; Zhu, B. LXL: LiDAR Excluded Lean 3D Object Detection with 4D Imaging Radar and Camera Fusion. IEEE Trans. Intell. Veh. 2023, 72, 1–14. [Google Scholar] [CrossRef]
  33. Wang, L.; Zhang, X.; Xv, B.; Zhang, J.; Fu, R.; Wang, X.; Zhu, L.; Ren, H.; Lu, P.; Li, J.; et al. InterFusion: Interaction-based 4D Radar and LiDAR Fusion for 3D Object Detection. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 12247–12253. [Google Scholar]
  34. Wang, L.; Zhang, X.; Li, J.; Xv, B.; Fu, R.; Chen, H.; Yang, L.; Jin, D.; Zhao, L. Multi-Modal and Multi-Scale Fusion 3D Object Detection of 4D Radar and LiDAR for Autonomous Driving. IEEE Trans. Veh. Technol. 2023, 72, 5628–5641. [Google Scholar] [CrossRef]
  35. Chen, X.; Zhang, T.; Wang, Y.; Wang, Y.; Zhao, H. FUTR3D: A Unified Sensor Fusion Framework for 3D Detection. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 17–24 June 2023; pp. 172–181. [Google Scholar]
  36. Traag, V.A.; Waltman, L.; van Eck, N.J. From Louvain to Leiden: Guaranteeing well-connected communities. Sci. Rep. 2019, 9, 5233. [Google Scholar] [CrossRef] [PubMed]
Figure 1. The point cloud and reference objects of 4D MMW radar.
Figure 2. The converted Cartesian coordinates of the radar.
Figure 3. The flowchart of the proposed method.
Figure 4. The point cloud data after background point filtering.
Figure 5. The flowchart to solve the under-clustering problem.
Figure 6. A snapshot of 4D MMW radar deployment locations.
Figure 7. A snapshot of a rough vehicle's 2D outline.
Figure 8. The average point number within the object at different distances.
Table 1. The basic parameters of the ARS 548 RDI.
Item | Parameter
Scanning frequency | 20 Hz
Distance range | 0.2~300 m
Azimuth angle | ±60°
Elevation angle | ±4°~±20° (±4° @ 300 m, ±20° @ <50 m)
Speed range | −400~+200 km/h
Maximum number of points per frame | 800
Maximum number of reference objects per frame | 50
Table 2. The auxiliary attributes of 4D MMW radar point cloud data.
Point Cloud Attribute | Explanation
Detection ID | The ID of every single point
Associated Object | The ID of the reference object to which the point belongs
Existence Probability | The existence probability of the point
Detection Classification | A rough classification of the object to which the point belongs
Table 3. The object detection performance for 4D MMW radar point cloud data.
Site | Method | Precision | Recall | F1 Score | Error 1 | Error 2
Site I | DBSCAN | 0.920845 | 0.999475 | 0.958550 | 139 | 213
Site I | Improved DBSCAN | 0.948489 | 0.999246 | 0.973206 | 115 | 47
Site I | Leiden | 0.875785 | 0.997742 | 0.932794 | 376 | 0
Site I | proposed method | 0.977264 | 0.997907 | 0.987478 | 39 | 3
Site II | DBSCAN | 0.869401 | 0.978036 | 0.920525 | 251 | 294
Site II | Improved DBSCAN | 0.976145 | 0.982472 | 0.979298 | 67 | 29
Site II | Leiden | 0.976387 | 0.987000 | 0.981577 | 66 | 0
Site II | proposed method | 0.997695 | 0.986330 | 0.991980 | 4 | 7
Site III | DBSCAN | 0.878098 | 0.983585 | 0.927852 | 262 | 191
Site III | Improved DBSCAN | 0.905065 | 0.989840 | 0.945556 | 231 | 167
Site III | Leiden | 0.931792 | 0.987000 | 0.958674 | 257 | 1
Site III | proposed method | 0.969470 | 0.986445 | 0.977884 | 81 | 33
Table 4. The ablation experiment results of the proposed method.
Site | Method | Precision | Recall | F1 Score | Error 1 | Error 2
Site I | Louvain | 0.956726 | 0.997923 | 0.976890 | 74 | 78
Site I | detection augmentation | 0.953410 | 0.994944 | 0.973734 | 40 | 24
Site I | proposed method | 0.977264 | 0.997907 | 0.987478 | 39 | 3
Site II | Louvain | 0.996653 | 0.987028 | 0.991817 | 5 | 13
Site II | detection augmentation | 0.968505 | 0.983157 | 0.975776 | 48 | 9
Site II | proposed method | 0.997695 | 0.986330 | 0.991980 | 4 | 7
Site III | Louvain | 0.955087 | 0.987009 | 0.970786 | 122 | 74
Site III | detection augmentation | 0.940553 | 0.980347 | 0.960037 | 96 | 118
Site III | proposed method | 0.969470 | 0.986445 | 0.977884 | 81 | 33
Table 5. The precision of the proposed method in different distance areas.
Site | 50~100 m | 100~150 m | 150~200 m | 200~250 m | 250~300 m
Site I | 0.989856 | 0.985199 | 0.987642 | 0.973717 | 0.924550
Site II | 0.999276 | 0.995683 | 0.995458 | 0.998203 | 1.000000
Site III | 0.974985 | 0.918120 | 0.976608 | 0.988848 | 0.984586

