
CN110246159A - 3D target motion analysis method based on vision and radar information fusion - Google Patents

3D target motion analysis method based on vision and radar information fusion (Download PDF)

Info

Publication number
CN110246159A
CN110246159A (also published as CN110246159B; application CN201910515176.3A)
Authority
CN
China
Prior art keywords
target
point cloud
frame
cloud data
pedestrian
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910515176.3A
Other languages
Chinese (zh)
Other versions
CN110246159B (en)
Inventor
李智勇
伍轶强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan University
Original Assignee
Hunan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan University
Priority to CN201910515176.3A
Publication of CN110246159A
Application granted
Publication of CN110246159B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing
    • G06T2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20224 Image subtraction
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a 3D target motion analysis method based on the fusion of vision and radar information, comprising: constructing an initial target detection model and training it to obtain a target detection model; acquiring camera images in real time; detecting the camera images to obtain each target's 2D bounding box and mask; tracking the targets with a multi-target tracking algorithm to obtain target IDs; acquiring lidar point cloud data in real time; jointly calibrating the camera images and the lidar point cloud data to obtain the coordinate transformation between camera and lidar; projecting the lidar point cloud onto the image to obtain the point cloud contained in each 2D box; filtering that point cloud to keep only the points belonging to the target; fitting a 3D bounding box to obtain the target's 3D coordinates; and computing the target's speed and direction of motion to complete the 3D target motion analysis. The method can analyze and predict the motion of 3D targets quickly, accurately and systematically, with high reliability, good accuracy and excellent performance.

Description

3D target motion analysis method based on vision and radar information fusion

Technical Field

The invention relates to a 3D target motion analysis method based on the fusion of vision and radar information.

Background Art

With the development of the economy and of technology, unmanned and autonomous driving have made great progress and are among the major technological trends of the future.

A typical autonomous driving system comprises four parts: perception, prediction, planning and control. Perception detects the objects of interest in a scene (such as vehicles and pedestrians) and tracks them over time. Perception (motion analysis) is therefore one of the core components of autonomous driving technology.

Existing 3D target detection methods fall roughly into three categories: (a) 3D detection and tracking based on binocular vision, which estimates disparity and depth from a stereo pair but whose stereo matching and calibration are a major difficulty; (b) 3D detection and tracking based on monocular vision, which performs poorly because the stereo and depth information of objects is hard to recover from a single camera; (c) 3D detection and tracking based on lidar point clouds, which contain only 3D structural information and lack RGB information, so that tracking relying purely on point clouds is unreliable.

Summary of the Invention

The object of the present invention is to provide a 3D target motion analysis method based on the fusion of vision and radar information that offers high reliability, good accuracy and excellent performance.

The 3D target motion analysis method based on vision and radar information fusion provided by the present invention comprises the following steps:

S1. Construct an initial target detection model and train it to obtain the target detection model;

S2. Acquire camera images in real time;

S3. Detect the camera image obtained in step S2 with the target detection model obtained in step S1 to obtain each target's 2D bounding box and mask;

S4. Track the targets with a multi-target tracking algorithm to obtain the target IDs;

S5. Acquire lidar point cloud data in real time;

S6. Jointly calibrate the camera image and the lidar point cloud data to obtain the coordinate transformation between the camera and the lidar;

S7. Using the coordinate transformation obtained in step S6, project the lidar point cloud onto the image to obtain the point cloud contained in each 2D box;

S8. Filter the point cloud obtained in step S7 with a foreground-background separation algorithm to keep only the points belonging to the target;

S9. Fit a 3D bounding box to the point cloud obtained in step S8 to obtain the target's 3D coordinates;

S10. Compute the displacement between the position centroid of the target in the current frame and that in the previous frame to obtain the target's speed and direction of motion, completing the 3D target motion analysis.

In step S1, the initial target detection model uses the Mask-RCNN deep-learning detection algorithm, which is trained on the public BDD100K autonomous driving dataset.
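For illustration, a minimal inference sketch using an off-the-shelf Mask R-CNN from torchvision is shown below. It stands in for the patent's detector; the fine-tuning on BDD100K is not reproduced here, and the 0.5 score threshold is an assumed value.

```python
import torch
import torchvision

# Off-the-shelf Mask R-CNN as a stand-in for the patent's detector.
# Requires torchvision >= 0.13 for the `weights` argument.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(image, score_thresh=0.5):
    """Run detection on one image tensor (C, H, W, floats in [0, 1]) and
    return the 2D boxes and masks above an assumed score threshold."""
    with torch.no_grad():
        out = model([image])[0]
    keep = out["scores"] > score_thresh
    return out["boxes"][keep], out["masks"][keep]  # (N, 4) boxes, (N, 1, H, W) masks
```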

In step S4, the targets are tracked and their IDs obtained with the following steps:

A. For pedestrian target boxes, train a pedestrian feature extraction model on the public Market1501 person re-identification dataset; the pedestrian feature extraction model uses ResNet50;

B. For vehicle target boxes, train a vehicle feature extraction model on the public VRID vehicle re-identification dataset; the vehicle feature extraction model uses ResNet50;

C. Use the pedestrian feature extraction model obtained in step A to extract deep appearance features of the pedestrian target boxes, and compute the cosine distance between the appearance features of pedestrians in adjacent frames, giving the first pedestrian distance matrix;

D. Use the vehicle feature extraction model obtained in step B to extract deep appearance features of the vehicle target boxes, and compute the cosine distance between the appearance features of vehicles in adjacent frames, giving the first vehicle distance matrix (a sketch of this distance computation follows);
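A minimal sketch of the cosine-distance computation used in steps C and D, assuming the appearance features have already been extracted by the ResNet50 re-identification model (feature extraction itself is not shown):

```python
import numpy as np

def cosine_distance_matrix(feats_prev, feats_cur):
    """Pairwise cosine distance between appearance features of adjacent
    frames. feats_prev: (n_prev, d); feats_cur: (n_cur, d)."""
    a = feats_prev / np.linalg.norm(feats_prev, axis=1, keepdims=True)
    b = feats_cur / np.linalg.norm(feats_cur, axis=1, keepdims=True)
    return 1.0 - a @ b.T  # (n_prev, n_cur) distance matrix in [0, 2]
```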

E. From the pedestrian's target box position in the previous frame, predict its target box position in the current frame, and compute the distance between the predicted box position and the previous frame's box position, giving the second pedestrian distance matrix;

F. From the vehicle's target box position in the previous frame, predict its target box position in the current frame, and compute the distance between the predicted box position and the previous frame's box position, giving the second vehicle distance matrix;

G. Form a weighted sum of the first and second pedestrian distance matrices and match pedestrian targets across adjacent frames, giving the ID of each pedestrian target in the current frame;

H. Form a weighted sum of the first and second vehicle distance matrices and match vehicle targets across adjacent frames, giving the ID of each vehicle target in the current frame (a sketch of this matching follows).
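The detailed embodiment below names the Hungarian algorithm for this matching step. A minimal sketch of steps G/H using scipy's linear_sum_assignment is given here; the weight alpha and the gating threshold max_cost are illustrative values that the patent does not specify.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_ids(appearance_dist, motion_dist, prev_ids, alpha=0.5, max_cost=0.7):
    """Weighted sum of the two distance matrices, then Hungarian matching
    between previous-frame tracks (rows) and current-frame detections
    (columns). alpha and max_cost are assumed, not from the patent."""
    cost = alpha * appearance_dist + (1.0 - alpha) * motion_dist
    rows, cols = linear_sum_assignment(cost)
    ids = {}
    for r, c in zip(rows, cols):
        if cost[r, c] <= max_cost:   # reject implausible pairings
            ids[c] = prev_ids[r]     # detection c inherits track r's id
    return ids                       # unmatched detections start new tracks
```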

In step S6, the coordinate transformation between the camera and the lidar is expressed, in homogeneous coordinates, as

$$ s\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = T \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix}, $$

where $(x_p, y_p, z_p)$ are the coordinates of a point in the lidar point cloud, $(x', y')$ are the coordinates of that point projected onto the image, $s$ is the homogeneous scale factor, and $T$ is the projection matrix.
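A sketch of this projection, assuming T is the 3x4 matrix obtained from the joint calibration and the point cloud is given as an N x 3 array (the function name and array layout are illustrative):

```python
import numpy as np

def project_points(points_lidar, T):
    """Project lidar points into the image using the transformation above.
    points_lidar: (N, 3); T: 3x4 projection matrix.
    Returns per-point pixel coordinates and a validity mask."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])  # (N, 4)
    proj = (T @ pts_h.T).T                    # (N, 3) homogeneous image coords
    valid = proj[:, 2] > 1e-6                 # keep points in front of the camera
    uv = np.full((len(points_lidar), 2), np.nan)
    uv[valid] = proj[valid, :2] / proj[valid, 2:3]  # divide out the scale factor s
    return uv, valid
```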

In step S7, the lidar point cloud is projected into the image as follows: first traverse all lidar points and project each onto the image with the coordinate transformation of step S6 to obtain its image coordinates; then test whether those coordinates belong to a target, and if so append them to the corresponding array. This yields the lidar point cloud associated with each target.
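Continuing the sketch above, one way to realize the membership test is containment in each detection's 2D box; this is an assumption, since the patent's "belongs to the target" test could equally use the instance mask from step S3 for a tighter assignment.

```python
import numpy as np

def points_per_target(points_lidar, uv, valid, boxes_2d):
    """Assign each projected lidar point to the first 2D box containing it.
    boxes_2d: list of (x1, y1, x2, y2) detections from step S3."""
    per_target = [[] for _ in boxes_2d]
    for i, ((u, v), ok) in enumerate(zip(uv, valid)):
        if not ok:
            continue
        for t, (x1, y1, x2, y2) in enumerate(boxes_2d):
            if x1 <= u <= x2 and y1 <= v <= y2:
                per_target[t].append(points_lidar[i])
                break
    return [np.array(p) for p in per_target]
```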

In step S8, the point cloud contained in each 2D box from step S7 is filtered with a foreground-background separation algorithm to keep only the points belonging to the target: for each target's lidar point cloud, cluster the points with Euclidean clustering, giving 1 to n clusters, and select the cluster containing the most lidar points as the final target point cloud.
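A minimal sketch of this foreground extraction, using DBSCAN with min_samples=1 as a stand-in for Euclidean clustering (points closer than tol fall into the same cluster); the 0.5 m tolerance is an assumed value not given in the patent.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def largest_cluster(points, tol=0.5):
    """Euclidean clustering of one target's points, keeping the most
    populated cluster as the target point cloud (step S8)."""
    if len(points) == 0:
        return points
    labels = DBSCAN(eps=tol, min_samples=1).fit_predict(points)
    best = np.bincount(labels).argmax()  # cluster with the most points
    return points[labels == best]
```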

In step S9, a 3D bounding box is fitted to the point cloud obtained in step S8 to obtain the target's 3D coordinates: for the target point cloud, first obtain its minimum and maximum height values; then project the point cloud onto the 2D plane and compute the minimal convex hull of the projected point set; then compute the minimum-area enclosing rectangle of the convex hull and obtain the coordinates of its four vertices; finally, combine these with the minimum and maximum heights to obtain the target's 3D box.
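A sketch of the fitting step, assuming z is the height axis; OpenCV's minAreaRect is used here because it already computes the minimum-area rectangle over the convex hull of the input points.

```python
import numpy as np
import cv2

def fit_3d_box(points):
    """Fit the 3D box of step S9: min/max height plus the minimum-area
    rectangle of the ground-plane projection. points: (N, 3), z = height."""
    z_min, z_max = points[:, 2].min(), points[:, 2].max()
    xy = points[:, :2].astype(np.float32)
    rect = cv2.minAreaRect(xy)          # ((cx, cy), (w, h), angle)
    corners = cv2.boxPoints(rect)       # (4, 2) rectangle vertices
    bottom = np.hstack([corners, np.full((4, 1), z_min)])
    top = np.hstack([corners, np.full((4, 1), z_max)])
    return np.vstack([bottom, top])     # the 8 vertices of the 3D box
```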

In step S10, the displacement between the position centroid of the target in the current frame and that in the previous frame is computed to obtain the target's speed and direction of motion, as follows:

a. Compute the centroid $p_c$ of the target point cloud as

$$ p_c = \frac{1}{N}\sum_{i=1}^{N} p_i, $$

where $N$ is the number of points in the target point cloud and $p_i$ is the 3D coordinate of the $i$-th point;

b. Compute the target's speed $v$ and direction of motion $\theta$ as

$$ v = \frac{\sqrt{\big(p_c(x) - plast_c(x)\big)^2 + \big(p_c(y) - plast_c(y)\big)^2}}{t}, \qquad \theta = \arctan\frac{p_c(y) - plast_c(y)}{p_c(x) - plast_c(x)}, $$

where $\theta$ is the relative direction of motion of the target, $p_c(x)$ and $p_c(y)$ are the centroid coordinates of the target point cloud in the current frame, $plast_c(x)$ and $plast_c(y)$ are the centroid coordinates of the target point cloud in the previous frame, and $t$ is the time interval between the two frames.
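A direct sketch of these two formulas; arctan is replaced by atan2 so the heading is well defined in all four quadrants, which is a small implementation choice rather than something the patent states.

```python
import numpy as np

def speed_and_heading(points_now, points_prev, dt):
    """Centroid displacement between frames gives the planar speed v and
    heading theta (radians). dt is the inter-frame interval t."""
    pc = points_now.mean(axis=0)    # current-frame centroid p_c
    pl = points_prev.mean(axis=0)   # previous-frame centroid plast_c
    dx, dy = pc[0] - pl[0], pc[1] - pl[1]
    v = np.hypot(dx, dy) / dt
    theta = np.arctan2(dy, dx)      # robust when dx == 0
    return v, theta
```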

The 3D target motion analysis method based on vision and radar information fusion provided by the present invention fuses visual images with lidar point cloud data through a rigorous and reliable computation pipeline and performs motion analysis on 3D targets. The method can therefore analyze and predict the motion of 3D targets quickly, accurately and systematically, with high reliability, good accuracy and excellent performance.

Brief Description of the Drawings

Figure 1 is a flow chart of the method of the present invention.

Detailed Description of the Embodiments

Figure 1 shows a flow chart of the method. The 3D target motion analysis method based on vision and radar information fusion provided by the present invention comprises the following steps:

S1. Construct an initial target detection model and train it to obtain the target detection model. Specifically, the initial target detection model uses the Mask-RCNN deep-learning detection algorithm, trained on the public BDD100K autonomous driving dataset;

S2. Acquire camera images in real time;

S3. Detect the camera image obtained in step S2 with the target detection model obtained in step S1 to obtain each target's 2D bounding box and mask;

S4. Track the targets with a multi-target tracking algorithm to obtain the target IDs, specifically with the following steps:

A. For pedestrian target boxes, train a pedestrian feature extraction model on the public Market1501 person re-identification dataset; the pedestrian feature extraction model uses ResNet50;

B. For vehicle target boxes, train a vehicle feature extraction model on the public VRID vehicle re-identification dataset; the vehicle feature extraction model uses ResNet50;

C. Use the pedestrian feature extraction model obtained in step A to extract deep appearance features of the pedestrian target boxes, and compute the cosine distance between the appearance features of pedestrians in adjacent frames, giving the first pedestrian distance matrix;

D. Use the vehicle feature extraction model obtained in step B to extract deep appearance features of the vehicle target boxes, and compute the cosine distance between the appearance features of vehicles in adjacent frames, giving the first vehicle distance matrix;

E. From the pedestrian's target box position in the previous frame, predict its target box position in the current frame with a Kalman filter, and compute the distance between the predicted box position and the previous frame's box position, giving the second pedestrian distance matrix;

F. From the vehicle's target box position in the previous frame, predict its target box position in the current frame with a Kalman filter, and compute the distance between the predicted box position and the previous frame's box position, giving the second vehicle distance matrix (a sketch of the prediction step follows);
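The patent names a Kalman filter for the box prediction in steps E/F but does not give its state model. Below is a minimal constant-velocity prediction sketch; the state layout (cx, cy, w, h, vx, vy) and the covariance and noise values are assumptions.

```python
import numpy as np

class BoxPredictor:
    """Constant-velocity Kalman prediction for a 2D target box."""
    def __init__(self, box, dt=1.0):
        cx, cy, w, h = box
        self.x = np.array([cx, cy, w, h, 0.0, 0.0])  # state vector
        self.P = np.eye(6) * 10.0                    # illustrative covariance
        self.F = np.eye(6)
        self.F[0, 4] = self.F[1, 5] = dt             # position += velocity * dt
        self.Q = np.eye(6) * 0.01                    # illustrative process noise

    def predict(self):
        """One predict step; returns the predicted (cx, cy, w, h)."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:4]
```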

G. Form a weighted sum of the first and second pedestrian distance matrices and match pedestrian targets across adjacent frames with the Hungarian matching algorithm, giving the ID of each pedestrian target in the current frame;

H. Form a weighted sum of the first and second vehicle distance matrices and match vehicle targets across adjacent frames with the Hungarian matching algorithm, giving the ID of each vehicle target in the current frame;

At the same time, update the positions of the targets in the predictor, add new targets, and remove targets that have not been tracked for xx consecutive frames;

S5. Acquire lidar point cloud data in real time;

S6. Jointly calibrate the camera image and the lidar point cloud data to obtain the coordinate transformation between the camera and the lidar; specifically, the transformation is expressed in homogeneous coordinates as

$$ s\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = T \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix}, $$

where $(x_p, y_p, z_p)$ are the coordinates of a point in the lidar point cloud, $(x', y')$ are the coordinates of that point projected onto the image, $s$ is the homogeneous scale factor, and $T$ is the projection matrix;

S7. Using the coordinate transformation obtained in step S6, project the lidar point cloud onto the image to obtain the point cloud contained in each 2D box. Specifically, first traverse all lidar points and project each onto the image with the coordinate transformation of step S6 to obtain its image coordinates; then test whether those coordinates belong to a target, and if so append them to the corresponding array, yielding the lidar point cloud associated with each target;

S8. Filter the point cloud contained in each 2D box from step S7 with a foreground-background separation algorithm to keep only the points belonging to the target. Specifically, for each target's lidar point cloud, cluster the points with Euclidean clustering, giving 1 to n clusters, and select the cluster containing the most lidar points as the final target point cloud;

S9. Fit a 3D bounding box to the point cloud obtained in step S8 to obtain the target's 3D coordinates. Specifically, for the target point cloud, first obtain its minimum and maximum height values; then project the point cloud onto the 2D plane and compute the minimal convex hull of the projected point set; then compute the minimum-area enclosing rectangle of the convex hull and obtain the coordinates of its four vertices; finally, combine these with the minimum and maximum heights to obtain the target's 3D box;

S10. Compute the displacement between the position centroid of the target in the current frame and that in the previous frame to obtain the target's speed and direction of motion, completing the 3D target motion analysis; specifically, the speed and direction are computed as follows:

a. Compute the centroid $p_c$ of the target point cloud as

$$ p_c = \frac{1}{N}\sum_{i=1}^{N} p_i, $$

where $N$ is the number of points in the target point cloud and $p_i$ is the 3D coordinate of the $i$-th point;

b. Compute the target's speed $v$ and direction of motion $\theta$ as

$$ v = \frac{\sqrt{\big(p_c(x) - plast_c(x)\big)^2 + \big(p_c(y) - plast_c(y)\big)^2}}{t}, \qquad \theta = \arctan\frac{p_c(y) - plast_c(y)}{p_c(x) - plast_c(x)}, $$

where $\theta$ is the relative direction of motion of the target, $p_c(x)$ and $p_c(y)$ are the centroid coordinates of the target point cloud in the current frame, $plast_c(x)$ and $plast_c(y)$ are the centroid coordinates of the target point cloud in the previous frame, and $t$ is the time interval between the two frames.

Claims (8)

1. A 3D target motion analysis method based on the fusion of vision and radar information, comprising the following steps:

S1. constructing an initial target detection model and training it to obtain a target detection model;

S2. acquiring camera images in real time;

S3. detecting the camera image obtained in step S2 with the target detection model obtained in step S1 to obtain each target's 2D bounding box and mask;

S4. tracking the targets with a multi-target tracking algorithm to obtain the target IDs;

S5. acquiring lidar point cloud data in real time;

S6. jointly calibrating the camera image and the lidar point cloud data to obtain the coordinate transformation between the camera and the lidar;

S7. projecting, according to the coordinate transformation obtained in step S6, the lidar point cloud onto the image to obtain the point cloud contained in each 2D box;

S8. filtering the point cloud obtained in step S7 with a foreground-background separation algorithm to keep only the points belonging to the target;

S9. fitting a 3D bounding box to the point cloud obtained in step S8 to obtain the target's 3D coordinates;

S10. computing the displacement between the position centroid of the target in the current frame and that in the previous frame to obtain the target's speed and direction of motion, completing the 3D target motion analysis.

2. The 3D target motion analysis method based on vision and radar information fusion according to claim 1, characterized in that in step S1 the initial target detection model uses the Mask-RCNN deep-learning detection algorithm, trained on the public BDD100K autonomous driving dataset.

3. The 3D target motion analysis method based on vision and radar information fusion according to claim 1 or 2, characterized in that step S4 tracks the targets and obtains the target IDs with the following steps:

A. for pedestrian target boxes, training a pedestrian feature extraction model on the public Market1501 person re-identification dataset, the pedestrian feature extraction model using ResNet50;

B. for vehicle target boxes, training a vehicle feature extraction model on the public VRID vehicle re-identification dataset, the vehicle feature extraction model using ResNet50;

C. using the pedestrian feature extraction model obtained in step A to extract deep appearance features of the pedestrian target boxes, and computing the cosine distance between the appearance features of pedestrians in adjacent frames, giving the first pedestrian distance matrix;

D. using the vehicle feature extraction model obtained in step B to extract deep appearance features of the vehicle target boxes, and computing the cosine distance between the appearance features of vehicles in adjacent frames, giving the first vehicle distance matrix;

E. predicting, from the pedestrian's target box position in the previous frame, its target box position in the current frame, and computing the distance between the predicted box position and the previous frame's box position, giving the second pedestrian distance matrix;

F. predicting, from the vehicle's target box position in the previous frame, its target box position in the current frame, and computing the distance between the predicted box position and the previous frame's box position, giving the second vehicle distance matrix;

G. forming a weighted sum of the first and second pedestrian distance matrices and matching pedestrian targets across adjacent frames, giving the ID of each pedestrian target in the current frame;

H. forming a weighted sum of the first and second vehicle distance matrices and matching vehicle targets across adjacent frames, giving the ID of each vehicle target in the current frame.

4. The 3D target motion analysis method based on vision and radar information fusion according to claim 3, characterized in that in step S6 the coordinate transformation between the camera and the lidar is expressed in homogeneous coordinates as

$$ s\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = T \begin{bmatrix} x_p \\ y_p \\ z_p \\ 1 \end{bmatrix}, $$

where $(x_p, y_p, z_p)$ are the coordinates of a point in the lidar point cloud, $(x', y')$ are the coordinates of that point projected onto the image, $s$ is the homogeneous scale factor, and $T$ is the projection matrix.

5. The 3D target motion analysis method based on vision and radar information fusion according to claim 4, characterized in that step S7 projects the lidar point cloud into the image as follows: first traversing all lidar points and projecting each onto the image with the coordinate transformation of step S6 to obtain its image coordinates; then testing whether those coordinates belong to a target, and if so appending them to the corresponding array, thereby obtaining the lidar point cloud associated with each target.

6. The 3D target motion analysis method based on vision and radar information fusion according to claim 5, characterized in that step S8 filters the point cloud contained in each 2D box from step S7 with a foreground-background separation algorithm to keep only the points belonging to the target: for each target's lidar point cloud, clustering the points with Euclidean clustering, giving 1 to n clusters, and selecting the cluster containing the most lidar points as the final target point cloud.

7. The 3D target motion analysis method based on vision and radar information fusion according to claim 6, characterized in that step S9 fits a 3D bounding box to the point cloud obtained in step S8 to obtain the target's 3D coordinates: for the target point cloud, first obtaining its minimum and maximum height values; then projecting the point cloud onto the 2D plane and computing the minimal convex hull of the projected point set; then computing the minimum-area enclosing rectangle of the convex hull and obtaining the coordinates of its four vertices; finally, combining these with the minimum and maximum heights to obtain the target's 3D box.

8. The 3D target motion analysis method based on vision and radar information fusion according to claim 7, characterized in that step S10 computes the displacement between the position centroid of the target in the current frame and that in the previous frame to obtain the target's speed and direction of motion as follows:

a. computing the centroid $p_c$ of the target point cloud as

$$ p_c = \frac{1}{N}\sum_{i=1}^{N} p_i, $$

where $N$ is the number of points in the target point cloud and $p_i$ is the 3D coordinate of the $i$-th point;

b. computing the target's speed $v$ and direction of motion $\theta$ as

$$ v = \frac{\sqrt{\big(p_c(x) - plast_c(x)\big)^2 + \big(p_c(y) - plast_c(y)\big)^2}}{t}, \qquad \theta = \arctan\frac{p_c(y) - plast_c(y)}{p_c(x) - plast_c(x)}, $$

where $\theta$ is the relative direction of motion of the target, $p_c(x)$ and $p_c(y)$ are the centroid coordinates of the target point cloud in the current frame, $plast_c(x)$ and $plast_c(y)$ are the centroid coordinates of the target point cloud in the previous frame, and $t$ is the time interval between the two frames.
CN201910515176.3A 2019-06-14 2019-06-14 3D target motion analysis method based on vision and radar information fusion Active CN110246159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910515176.3A CN110246159B (en) 2019-06-14 2019-06-14 3D target motion analysis method based on vision and radar information fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910515176.3A CN110246159B (en) 2019-06-14 2019-06-14 3D target motion analysis method based on vision and radar information fusion

Publications (2)

Publication Number Publication Date
CN110246159A 2019-09-17
CN110246159B CN110246159B (en) 2023-03-28

Family

ID=67887180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910515176.3A Active CN110246159B (en) 2019-06-14 2019-06-14 3D target motion analysis method based on vision and radar information fusion

Country Status (1)

Country Link
CN (1) CN110246159B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103281477A (en) * 2013-05-17 2013-09-04 天津大学 Multi-level characteristic data association-based multi-target visual tracking method
CN103559791A (en) * 2013-10-31 2014-02-05 北京联合大学 Vehicle detection method fusing radar and CCD camera signals
CN106407947A (en) * 2016-09-29 2017-02-15 百度在线网络技术(北京)有限公司 Target object recognition method and device applied to unmanned vehicle
CN108932475A (en) * 2018-05-31 2018-12-04 中国科学院西安光学精密机械研究所 Three-dimensional target identification system and method based on laser radar and monocular vision
CN109541632A (en) * 2018-09-30 2019-03-29 天津大学 A kind of target detection missing inspection improved method based on four line laser radars auxiliary

Cited By (85)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112622923A (en) * 2019-09-24 2021-04-09 北京百度网讯科技有限公司 Method and device for controlling a vehicle
CN110728210A (en) * 2019-09-25 2020-01-24 上海交通大学 Semi-supervised target labeling method and system for three-dimensional point cloud data
CN110728753A (en) * 2019-10-09 2020-01-24 湖南大学 A 3D Bounding Box Fitting Method for Target Point Clouds Based on Linear Fitting
CN110728753B (en) * 2019-10-09 2022-04-15 湖南大学 Target point cloud 3D bounding box fitting method based on linear fitting
WO2021068210A1 (en) * 2019-10-11 2021-04-15 深圳市大疆创新科技有限公司 Method and apparatus for monitoring moving object, and computer storage medium
CN112956187A (en) * 2019-10-11 2021-06-11 深圳市大疆创新科技有限公司 Method and device for monitoring moving object and computer storage medium
CN110827358A (en) * 2019-10-15 2020-02-21 深圳数翔科技有限公司 Camera calibration method applied to automatic driving automobile
CN110827358B (en) * 2019-10-15 2023-10-31 深圳数翔科技有限公司 Camera calibration method applied to automatic driving automobile
CN110929567A (en) * 2019-10-17 2020-03-27 北京全路通信信号研究设计院集团有限公司 Monocular camera monitoring scene-based target position and speed measuring method and system
CN110929567B (en) * 2019-10-17 2022-09-27 北京全路通信信号研究设计院集团有限公司 Monocular camera monitoring scene-based target position and speed measuring method and system
CN110942449A (en) * 2019-10-30 2020-03-31 华南理工大学 Vehicle detection method based on laser and vision fusion
CN110942449B (en) * 2019-10-30 2023-05-23 华南理工大学 A Vehicle Detection Method Based on Fusion of Laser and Vision
CN110807264B (en) * 2019-11-07 2023-09-01 四川航天神坤科技有限公司 Real-time monitoring and early warning method and device for radar target in three-dimensional system
CN110807264A (en) * 2019-11-07 2020-02-18 四川航天神坤科技有限公司 Real-time monitoring and early warning method and device for radar target in three-dimensional system
CN110909656B (en) * 2019-11-18 2023-10-13 中电海康集团有限公司 Pedestrian detection method and system integrating radar and camera
CN110909656A (en) * 2019-11-18 2020-03-24 中电海康集团有限公司 Pedestrian detection method and system with integration of radar and camera
CN111028544A (en) * 2019-12-06 2020-04-17 无锡物联网创新中心有限公司 Pedestrian early warning system with V2V technology and vehicle-mounted multi-sensor integration
CN113490965A (en) * 2019-12-30 2021-10-08 深圳元戎启行科技有限公司 Image tracking processing method and device, computer equipment and storage medium
CN111275075A (en) * 2020-01-10 2020-06-12 山东超越数控电子股份有限公司 Vehicle detection and tracking method based on 3D laser radar
CN111275075B (en) * 2020-01-10 2023-05-02 超越科技股份有限公司 Vehicle detection and tracking method based on 3D laser radar
CN112396650A (en) * 2020-03-30 2021-02-23 青岛慧拓智能机器有限公司 Target ranging system and method based on fusion of image and laser radar
CN112396650B (en) * 2020-03-30 2023-04-07 青岛慧拓智能机器有限公司 Target ranging system and method based on fusion of image and laser radar
CN111666855A (en) * 2020-05-29 2020-09-15 中国科学院地理科学与资源研究所 Unmanned aerial vehicle-based animal three-dimensional parameter extraction method and system and electronic equipment
CN111666855B (en) * 2020-05-29 2023-06-30 中国科学院地理科学与资源研究所 Method, system and electronic equipment for extracting three-dimensional parameters of animals based on drone
CN111640158B (en) * 2020-06-11 2023-11-10 武汉斌果科技有限公司 End-to-end camera and laser radar external parameter calibration method based on corresponding mask
CN111640158A (en) * 2020-06-11 2020-09-08 武汉斌果科技有限公司 End-to-end camera based on corresponding mask and laser radar external reference calibration method
CN111899279A (en) * 2020-07-10 2020-11-06 浙江大华技术股份有限公司 Method and device for detecting movement speed of target object
CN112101092A (en) * 2020-07-31 2020-12-18 北京智行者科技有限公司 Automatic driving environment perception method and system
CN111986232B (en) * 2020-08-13 2021-09-14 上海高仙自动化科技发展有限公司 Target object detection method, target object detection device, robot and storage medium
CN111986232A (en) * 2020-08-13 2020-11-24 上海高仙自动化科技发展有限公司 Target object detection method, target object detection device, robot and storage medium
CN112085101A (en) * 2020-09-10 2020-12-15 湖南大学 High-performance and high-reliability environment fusion sensing method and system
CN114167404A (en) * 2020-09-11 2022-03-11 华为技术有限公司 Target tracking method and device
WO2022052765A1 (en) * 2020-09-11 2022-03-17 华为技术有限公司 Target tracking method and device
CN112233097B (en) * 2020-10-19 2022-10-28 中国科学技术大学 System and method for other vehicle detection in road scene based on multi-dimensional fusion of space-time domain
CN112233097A (en) * 2020-10-19 2021-01-15 中国科学技术大学 Road scene other vehicle detection system and method based on space-time domain multi-dimensional fusion
CN112487919A (en) * 2020-11-25 2021-03-12 吉林大学 3D target detection and tracking method based on camera and laser radar
CN112882059B (en) * 2021-01-08 2023-01-17 中国船舶重工集团公司第七0七研究所 Unmanned ship inland river obstacle sensing method based on laser radar
CN112882059A (en) * 2021-01-08 2021-06-01 中国船舶重工集团公司第七0七研究所 Unmanned ship inland river obstacle sensing method based on laser radar
CN113173502A (en) * 2021-01-15 2021-07-27 福建电子口岸股份有限公司 Anti-collision method and system based on laser visual fusion and deep learning
CN113173502B (en) * 2021-01-15 2023-06-06 福建电子口岸股份有限公司 Anticollision method and system based on laser vision fusion and deep learning
CN112896879A (en) * 2021-02-24 2021-06-04 同济大学 Environment sensing system for intelligent sanitation vehicle
WO2022183685A1 (en) * 2021-03-01 2022-09-09 亿咖通(湖北)科技有限公司 Target detection method, electronic medium and computer storage medium
CN112989997A (en) * 2021-03-11 2021-06-18 中国科学技术大学 3D target detection method and system based on multi-information fusion
CN113066100B (en) * 2021-03-25 2024-09-10 东软睿驰汽车技术(沈阳)有限公司 Target tracking method, device, equipment and storage medium
CN113066100A (en) * 2021-03-25 2021-07-02 东软睿驰汽车技术(沈阳)有限公司 Target tracking method, device, equipment and storage medium
WO2022262594A1 (en) * 2021-06-15 2022-12-22 同方威视技术股份有限公司 Method and apparatus for following target, robot, and computer-readable storage medium
CN113674355A (en) * 2021-07-06 2021-11-19 中国北方车辆研究所 Target identification and positioning method based on camera and laser radar
CN113763423A (en) * 2021-08-03 2021-12-07 中国北方车辆研究所 Multi-mode data based systematic target recognition and tracking method
CN113689471A (en) * 2021-09-09 2021-11-23 中国联合网络通信集团有限公司 Target tracking method and device, computer equipment and storage medium
CN113689471B (en) * 2021-09-09 2023-08-18 中国联合网络通信集团有限公司 Target tracking method, device, computer equipment and storage medium
CN114120255A (en) * 2021-10-29 2022-03-01 际络科技(上海)有限公司 Target identification method and device based on laser radar speed measurement
CN113743385A (en) * 2021-11-05 2021-12-03 陕西欧卡电子智能科技有限公司 Unmanned ship water surface target detection method and device and unmanned ship
CN113743391A (en) * 2021-11-08 2021-12-03 江苏天策机器人科技有限公司 Three-dimensional obstacle detection system and method applied to low-speed autonomous driving robot
CN114118253B (en) * 2021-11-23 2024-02-20 合肥工业大学 Vehicle detection method and device based on multi-source data fusion
CN114118253A (en) * 2021-11-23 2022-03-01 合肥工业大学 Vehicle detection method and detection device based on multi-source data fusion
CN114155720A (en) * 2021-11-29 2022-03-08 上海交通大学 A vehicle detection and trajectory prediction method for roadside lidar
CN114239706A (en) * 2021-12-08 2022-03-25 山东新一代信息产业技术研究院有限公司 Target fusion method and system based on multiple cameras and laser radar
CN113903029A (en) * 2021-12-10 2022-01-07 智道网联科技(北京)有限公司 Method and device for marking 3D frame in point cloud data
CN114428259A (en) * 2021-12-13 2022-05-03 武汉中海庭数据技术有限公司 Automatic vehicle extraction method in laser point cloud of ground library based on map vehicle acquisition
CN114295139A (en) * 2021-12-14 2022-04-08 武汉依迅北斗时空技术股份有限公司 Cooperative sensing positioning method and system
CN114332158B (en) * 2021-12-17 2024-05-07 重庆大学 3D real-time multi-target tracking method based on fusion of camera and laser radar
CN114332158A (en) * 2021-12-17 2022-04-12 重庆大学 A 3D real-time multi-target tracking method based on fusion of camera and lidar
CN114241195A (en) * 2021-12-20 2022-03-25 北京亮道智能汽车技术有限公司 Target identification method and device, electronic equipment and storage medium
CN114545435A (en) * 2021-12-21 2022-05-27 武汉市众向科技有限公司 A dynamic target perception system and method integrating camera and lidar
CN114332784A (en) * 2021-12-30 2022-04-12 江苏集萃深度感知技术研究所有限公司 Port hull identification method based on machine vision and radar
CN114545434A (en) * 2022-01-13 2022-05-27 燕山大学 Road side visual angle speed measurement method and system, electronic equipment and storage medium
CN114684154A (en) * 2022-03-24 2022-07-01 重庆长安汽车股份有限公司 Method for correcting visual detection target course angle based on radar point cloud and storage medium
CN114419571A (en) * 2022-03-30 2022-04-29 北京理工大学 A method and system for target detection and positioning for unmanned vehicles
CN114419571B (en) * 2022-03-30 2022-06-17 北京理工大学 Target detection and positioning method and system for unmanned vehicle
CN114419572B (en) * 2022-03-31 2022-06-17 国汽智控(北京)科技有限公司 Multi-radar target detection method and device, electronic equipment and storage medium
CN114419572A (en) * 2022-03-31 2022-04-29 国汽智控(北京)科技有限公司 Multi-radar target detection method and device, electronic equipment and storage medium
CN114724109A (en) * 2022-04-06 2022-07-08 深兰人工智能(深圳)有限公司 Target detection method, device, equipment and storage medium
WO2023202335A1 (en) * 2022-04-20 2023-10-26 深圳市普渡科技有限公司 Target tracking method, robot, computer device, and storage medium
CN114972758B (en) * 2022-06-06 2024-05-31 上海人工智能创新中心 Instance segmentation method based on point cloud weak supervision
CN114972758A (en) * 2022-06-06 2022-08-30 上海人工智能创新中心 Instance segmentation method based on point cloud weak supervision
CN114779271A (en) * 2022-06-16 2022-07-22 杭州宏景智驾科技有限公司 Target detection method and device, electronic equipment and storage medium
CN114782496A (en) * 2022-06-20 2022-07-22 杭州闪马智擎科技有限公司 Object tracking method and device, storage medium and electronic device
CN115113206A (en) * 2022-06-23 2022-09-27 湘潭大学 Pedestrian and obstacle detection method for assisting driving of underground railcar
CN115113206B (en) * 2022-06-23 2024-04-12 湘潭大学 Pedestrian and obstacle detection method for assisting driving of underground rail car
CN115546749B (en) * 2022-09-14 2023-05-30 武汉理工大学 Pavement pothole detection, cleaning and avoiding method based on camera and laser radar
CN115546749A (en) * 2022-09-14 2022-12-30 武汉理工大学 Road surface depression detection, cleaning and avoidance method based on camera and laser radar
EP4414746A1 (en) 2023-02-08 2024-08-14 Continental Autonomous Mobility Germany GmbH Multi-object detection and tracking
CN116523962A (en) * 2023-04-20 2023-08-01 北京百度网讯科技有限公司 Visual tracking method, device, system, equipment and medium for target object
CN117991250A (en) * 2024-01-04 2024-05-07 广州里工实业有限公司 Mobile robot positioning detection method, system, equipment and medium
CN119406785A (en) * 2025-01-06 2025-02-11 北京霍里思特科技有限公司 Material sorting method, material sorting device, material sorting equipment and storage medium

Also Published As

Publication number Publication date
CN110246159B (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN110246159B (en) 3D target motion analysis method based on vision and radar information fusion
US11461912B2 (en) Gaussian mixture models for temporal depth fusion
CN106681353B (en) Obstacle avoidance method and system for UAV based on binocular vision and optical flow fusion
CN104197928B (en) Multi-camera collaboration-based method for detecting, positioning and tracking unmanned aerial vehicle
Zou et al. Real-time full-stack traffic scene perception for autonomous driving with roadside cameras
WO2024114119A1 (en) Sensor fusion method based on binocular camera guidance
CN112740225B (en) A kind of pavement element determination method and device
CN110221603A (en) A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud
CN111881790A (en) Automatic extraction method and device for road crosswalk in high-precision map making
AU2021255130B2 (en) Artificial intelligence and computer vision powered driving-performance assessment
CN115144828B (en) An automatic online calibration method for spatiotemporal fusion of multi-sensors for intelligent vehicles
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
Parra et al. Robust visual odometry for vehicle localization in urban environments
CN106446785A (en) Passable road detection method based on binocular vision
CN104121902A (en) Implementation method of indoor robot visual odometer based on Xtion camera
CN114998276B (en) A real-time detection method for robot dynamic obstacles based on 3D point cloud
CN115797408A (en) Target tracking method and device for fusing multi-view images and 3D point clouds
Sun et al. Automatic targetless calibration for LiDAR and camera based on instance segmentation
WO2021063756A1 (en) Improved trajectory estimation based on ground truth
CN112699748B (en) Estimation method of distance between people and vehicles based on YOLO and RGB images
Omar et al. Detection and localization of traffic lights using YOLOv3 and Stereo Vision
Majdik et al. Micro air vehicle localization and position tracking from textured 3d cadastral models
CN111126363B (en) Object recognition method and device for automatic driving vehicle
CN113920254A (en) Monocular RGB (Red Green blue) -based indoor three-dimensional reconstruction method and system thereof
CN117470259A (en) Primary and secondary type space-ground cooperative multi-sensor fusion three-dimensional map building system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant