CN102737236B - Method for automatically acquiring vehicle training sample based on multi-modal sensor data - Google Patents
- Publication number: CN102737236B (application CN201210234127.0A)
- Authority: CN (China)
- Prior art keywords: vehicle, data, laser, candidate, image
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion)
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
A method for automatically acquiring vehicle training samples based on multi-modal sensor data. Vehicle detection step based on laser and positioning data: from the range and angle of the laser data and the laser sensor calibration parameters, two-dimensional coordinates relative to the data-acquisition vehicle are obtained that describe the horizontal contour of objects; through shape analysis and the detection and tracking of moving objects, a time series of parameters such as the position and heading of each candidate vehicle relative to the data-acquisition vehicle is extracted. Visual image sample extraction step: according to the position and heading of the candidate vehicle at each moment and the geometric relationship between the laser sensor and the image acquisition device, the candidate vehicle is projected into the image to produce a region of interest, and a detector is used to refine the region of interest; for each candidate vehicle, its viewing angle relative to the camera is computed from parameters such as its position and heading, image frames with similar viewing angles are removed, and sample images of the candidate vehicle under different viewing angles are extracted automatically.
Description
Technical Field
The present invention relates to the technical fields of computer vision, robotics, and machine learning, and in particular to a method for automatically acquiring vehicle training samples based on multi-modal sensor data.
Background Art
Vehicle detection is an important problem in the field of advanced driver assistance systems (ADAS). A large body of related research exists, showing that vehicles can be detected with lasers, radars, monocular/stereo cameras, and multi-sensor fusion.
Because monocular cameras are inexpensive and simple to calibrate, detection methods based on monocular vision have been studied extensively in computer vision and robotics. With visual sensors, however, a vehicle's appearance varies greatly both across vehicles and across viewing angles, which makes detection difficult. Recently, more and more researchers have applied machine-learning methods to vehicle detection.
In these methods, the detector is trained in advance on a collection of sample images, and many datasets are openly available for this purpose. PASCAL provides several standardized datasets for object detection.
Among them, the UIUC dataset is dedicated to vehicle detection and recognition. It includes 550 vehicle images of 100×40 resolution as positive training samples, plus two test sets: 170 single-scale vehicle images at the same resolution as the training samples, and 108 images containing 139 vehicles at multiple scales.
Many studies report results on this dataset. However, all vehicles in the UIUC dataset are imaged from the side, whereas in on-road vehicle detection most detected vehicles are seen from the front or the rear, so the dataset is not well suited to that task.
A further drawback is that the UIUC images are all grayscale, which severely restricts the feature space available to a detector. In contrast, the MIT dataset includes 516 positive sample images at 128×128 resolution, all from front or rear viewpoints.
For the best-performing current methods, the training samples are a major factor in performance. To support building multi-view vehicle detectors, researchers at USC assembled a dataset of multi-view vehicle sample images and test images: 1028 positive vehicle images at 128×64 resolution taken from various angles, and 196 test images containing 410 vehicles at different scales and angles.
However, none of the samples in that dataset carries vehicle pose information. Training data usually has to be labeled and sorted by hand to match the training requirements, which severely limits both the number of samples and the resulting performance, and has become a bottleneck for algorithm development. Detector performance is typically unstable under environmental changes.
Summary of the Invention
The object of the present invention is to provide a method for automatically acquiring vehicle training samples based on multi-modal sensor data, i.e., a method that automatically generates multi-angle vehicle sample images together with their pose information.
The present invention discloses a method for automatically acquiring vehicle training samples based on multi-modal sensor data, comprising the following steps:
Vehicle detection step based on laser and positioning data: from the range and angle of the laser data and the laser sensor calibration parameters, obtain two-dimensional coordinates relative to the data-acquisition vehicle that describe the horizontal contour of objects; through shape analysis and the detection and tracking of moving objects, extract a time series of parameters such as the position and heading of each candidate vehicle relative to the data-acquisition vehicle;
Visual image sample extraction step: according to the position and heading of the candidate vehicle at each moment and the geometric relationship between the laser sensor and the image acquisition device, project the candidate vehicle into the image to produce a region of interest, and use a detector to refine the region of interest; for each candidate vehicle, compute its viewing angle relative to the camera from parameters such as its position and heading, remove image frames with similar viewing angles, and automatically extract sample images of the candidate vehicle under different viewing angles.
Further, preferably, the vehicle detection step based on laser and positioning data further comprises:
Data fusion: fusing data of the same or adjacent times from the individual laser sensors;
Clustering: clustering the data from each laser sensor according to the distance between adjacent points;
Labeling: classifying the clusters into three types: stationary object, moving object, or uncertain;
Map generation: generating a map describing the static environment around the trajectory of the data-acquisition vehicle;
Detection: finding candidate vehicles in the current classified, fused laser data;
Tracking: associating detection results with previous tracking results and updating the tracking state and the body and motion parameters;
Validation: validating each tracked object from its motion and shape history.
Further, preferably, the visual image sample extraction step further comprises:
Region-of-interest extraction based on laser data: according to the position and heading of the candidate vehicle at each moment and the geometric relationship between the laser sensor and the image acquisition device, projecting the candidate vehicle into the image and extracting the region of interest that contains it;
Region-of-interest refinement based on image techniques: using an image-based detection method to refine the region of interest and locate the candidate vehicle within it;
Vehicle sample image extraction and deduplication: according to the refinement result, removing image frames with similar viewing angles and automatically extracting sample images of the candidate vehicle under different viewing angles.
Further, preferably, the multi-modal sensors comprise multi-view laser sensors, multi-view image acquisition devices, and a positioning system; the multi-view laser sensors and image acquisition devices monitor the environment around the data-acquisition vehicle and together cover multiple observation angles.
Further, preferably, the positioning system measures the 6-degree-of-freedom pose of the vehicle.
Further, preferably, the detector used to refine the region of interest employs an image-based detection method.
Further, preferably, the tracking is specifically: detection results for which no associated tracking result is found are treated as newly tracked vehicles, while tracked vehicles not associated with any detection result are considered to have left the vehicle's monitored range and are removed from the tracking results.
Further, preferably, the validation is specifically: if a tracked object does not move for a period of time, it is considered stationary and merged into the map information; if the motion and shape of a tracked object change irregularly within a short time, the result is discarded; only tracking results whose motion and shape change normally are treated as candidate vehicles.
Further, preferably, the refinement of the region of interest is specifically: according to the pose of the candidate vehicle in the region of interest, selecting a detector trained on vehicles of that specific pose to refine it.
Further, preferably, the removal of duplicate samples is specifically: filtering sample images with identical or similar appearance according to the tracked vehicle's direction of motion and heading relative to the data-acquisition vehicle.
Using multi-modal sensor data, the present invention automatically generates multi-angle vehicle sample images that include pose information. This avoids manual labeling and gives detection-algorithm research more freedom with fewer restrictions. Moreover, automatic sample generation makes online training possible, so the classifier can be improved automatically to adapt to changes in illumination and other environmental conditions.
The present invention is fully automatic and requires no manual intervention; it yields a large number of vehicle samples, enriching the training set, and the resulting training images contain the pose information of the vehicles, which facilitates training classifiers for different poses.
Brief Description of the Drawings
A more complete understanding of the present invention, and of many of its attendant advantages, can be obtained by reference to the following detailed description considered in conjunction with the accompanying drawings. The drawings are provided for further understanding of the invention and constitute a part of it; the exemplary embodiments and their description serve to explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a flowchart of an embodiment of the present invention;
Fig. 2 is a flowchart of an embodiment of the vehicle detection step based on laser and positioning data;
Fig. 3 is a flowchart of an embodiment of visual image sample extraction.
Detailed Description
Embodiments of the present invention are described with reference to Figs. 1 to 3.
To make the above objects, features, and advantages more readily understandable, the present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments.
As shown in Fig. 1, a method for automatically acquiring vehicle training samples based on multi-modal sensor data comprises the following steps:
S1, vehicle detection based on laser and positioning data: from the range and angle of the laser data and the laser sensor calibration parameters, obtain two-dimensional coordinates relative to the data-acquisition vehicle that describe the horizontal contour of objects; extract candidate vehicles through shape analysis and the detection and tracking of moving objects;
S2, visual image sample extraction: according to the position and heading of the candidate vehicle at each moment and the geometric relationship between the laser sensor and the image acquisition device, project the candidate vehicle into the image to produce a region of interest, and use a detector to refine it; for each candidate vehicle, compute its viewing angle relative to the camera from parameters such as its position and heading, remove image frames with similar viewing angles, and automatically extract sample images of the candidate vehicle under different viewing angles.
The present invention establishes a system that automatically generates a vehicle sample dataset for training visual vehicle detectors. The dataset includes multi-angle vehicle sample images, each with its pose information, which makes it possible to train vehicle detectors for vehicles seen from different angles.
The multi-modal sensors comprise laser sensors, image acquisition devices, and a positioning system. Laser sensors include laser scanners, laser rangefinders, and the like; the image acquisition device may be a single camera or a camera system integrating one or more cameras. The positioning system provides the position of the object carrying it, e.g., the GPS, Galileo, or BeiDou satellite positioning systems.
Together the sensors cover a multi-view range around the data-acquisition vehicle; depending on actual needs and the constraints of the data-acquisition platform, different sensor mountings can be chosen to cover different ranges of viewing angles.
Sensor system:
The present invention discloses a vehicle-mounted sensor system comprising three kinds of sensors: laser scanners, a video camera, and a GPS/IMU. The GPS/IMU is an industrial device that measures the 6-degree-of-freedom pose (3-D position and attitude angles) of the vehicle. The lasers and the camera monitor the environment around the vehicle, and each modality provides all-around coverage. Three Hokuyo UTM-30LX laser scanners are mounted at the front-left, front-right, and rear-center of the vehicle, forming full horizontal coverage; because the UTM-30LX has a relatively short range, typically only about 25 m in outdoor traffic, a SICK LMS291 laser is added at the front center to cover a semicircular area with a radius of up to 45 m. All-around video coverage is achieved with a Ladybug camera that integrates six cameras whose images are fused into a panoramic image; to reduce occlusion, the Ladybug is mounted on the roof of the vehicle. Sensor data are collected by two time-synchronized computers: one records the laser scanner and GPS/IMU data, the other the Ladybug video data. Each data frame is stamped with the computer time at recording. The delay during data transmission is treated as a constant that can be measured in advance. After sensor calibration, all laser scanner data are transformed into a common coordinate frame, here the local frame of the data-acquisition vehicle, and the laser processing results are projected onto the panoramic image to extract sample images.
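The timestamp handling above can be sketched as follows: after subtracting the pre-measured constant transmission delay, each laser frame is matched to the camera frame with the nearest timestamp. This is an illustrative Python sketch; the frame rates, times, and delay value are assumptions, not values from the patent.

```python
# Hypothetical timestamp alignment: match each laser frame to the closest
# camera frame after correcting a constant transmission delay (seconds).
from bisect import bisect_left

def align_frames(laser_times, camera_times, delay=0.02):
    """Return, for each laser timestamp, the index of the nearest camera frame."""
    matches = []
    for t in laser_times:
        t_corr = t - delay  # remove the pre-measured constant delay
        i = bisect_left(camera_times, t_corr)
        # pick the nearer of the two neighbouring camera frames
        candidates = [j for j in (i - 1, i) if 0 <= j < len(camera_times)]
        best = min(candidates, key=lambda j: abs(camera_times[j] - t_corr))
        matches.append(best)
    return matches

laser_times = [0.10, 0.14, 0.18]          # example laser frame times (s)
camera_times = [0.00, 0.07, 0.13, 0.20]   # example camera frame times (s)
print(align_frames(laser_times, camera_times))  # nearest camera frame per scan
```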
Processing flow:
As shown in Fig. 1, the present invention comprises two steps:
S1, vehicle detection based on laser and positioning data;
S2, visual sample image extraction.
A laser scanner directly measures the distance to objects. From the beam angle and the sensor calibration parameters, two-dimensional coordinates relative to the vehicle can be obtained that describe the horizontal contour of objects. Through shape analysis and the detection and tracking of moving objects, a time series of parameters such as the position and heading of each candidate vehicle relative to the data-acquisition vehicle can be extracted quickly.
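The range-and-angle-to-coordinates conversion can be sketched in Python as below. The sensor pose (extrinsic calibration), the valid-range bounds, and the beam angle parameters are illustrative assumptions.

```python
# Convert one laser scan (one range per beam) into 2-D points in the
# data-acquisition vehicle's frame using the scanner's extrinsic calibration.
import math

def scan_to_vehicle_frame(ranges, angle_min, angle_step, sensor_pose):
    """sensor_pose = (x, y, yaw): scanner position/orientation on the vehicle."""
    sx, sy, syaw = sensor_pose
    points = []
    for i, r in enumerate(ranges):
        if not (0.1 < r < 30.0):      # drop invalid / out-of-range returns
            continue
        a = angle_min + i * angle_step
        # point in the sensor frame, then rotated/translated into vehicle frame
        lx, ly = r * math.cos(a), r * math.sin(a)
        points.append((sx + lx * math.cos(syaw) - ly * math.sin(syaw),
                       sy + lx * math.sin(syaw) + ly * math.cos(syaw)))
    return points

# a single 5 m return straight ahead of a scanner mounted 1 m forward
print(scan_to_vehicle_frame([5.0], 0.0, 0.01, (1.0, 0.0, 0.0)))
```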
According to the geometric relationship between the lasers and the camera, these candidate vehicles are then projected into the panoramic image to produce regions of interest, each of which also carries the vehicle's pose at the current moment.
However, the laser points sampled on an object are sparse, and reflection fails on materials of certain colors. In a dynamic traffic environment in particular, occlusion is common and surrounding vehicles may be observed only partially. This makes laser data processing challenging, and the results contain some errors.
Unlike laser data, video images contain rich information that can be used to refine the regions of interest derived from the laser processing results. In the present invention, a detector based on HOG features is used to refine the regions of interest.
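As a minimal illustration of HOG-style features, the sketch below computes per-cell gradient orientation histograms on a toy image. A real HOG detector of the kind implied above would additionally normalize over blocks and feed the vector to a trained classifier; the cell size, bin count, and toy image here are assumptions for illustration only.

```python
# HOG-style features: per-cell histograms of gradient orientations.
import numpy as np

def hog_cells(img, cell=8, bins=9):
    gy, gx = np.gradient(img.astype(float))        # row and column gradients
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    feats = np.zeros((h // cell, w // cell, bins))
    for cy in range(h // cell):
        for cx in range(w // cell):
            sl = np.s_[cy*cell:(cy+1)*cell, cx*cell:(cx+1)*cell]
            idx = (ang[sl] / (180.0 / bins)).astype(int) % bins
            # magnitude-weighted vote into the orientation bins of this cell
            np.add.at(feats[cy, cx], idx.ravel(), mag[sl].ravel())
    return feats.ravel()

img = np.zeros((16, 16)); img[:, 8:] = 1.0   # toy image with a vertical edge
f = hog_cells(img)
print(f.shape)   # 2x2 cells x 9 bins -> (36,)
```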
Furthermore, while a candidate vehicle is being tracked, its appearance in the image changes only slowly. Extracting the vehicle from every image would therefore produce many near-identical views, so a process for selecting vehicle images of different poses is needed.
As shown in Fig. 2, the vehicle detection step based on laser and positioning data further comprises:
S11, data fusion: fusing data of the same or adjacent times from the individual laser scanners;
S12, clustering: clustering the data from each laser sensor according to the distance between adjacent points;
S13, labeling: classifying the clusters into three types: stationary object, moving object, or uncertain;
S14, map generation: generating a map describing the static environment around the trajectory of the data-acquisition vehicle;
S15, detection: finding candidate vehicles in the current classified, fused laser data;
S16, tracking: associating detection results with previous tracking results and updating the tracking state and the body and motion parameters;
S17, validation: validating each tracked object from its motion and shape history.
The above workflow is described in detail below.
S1, vehicle detection based on laser and positioning data:
The present invention discloses a method for detecting and tracking road vehicles based on data from multiple single-line laser scanners and on positioning data. From the range and angle of the laser data and the laser sensor calibration parameters, two-dimensional coordinates relative to the data-acquisition vehicle are obtained that describe the horizontal contour of objects; through shape analysis and the detection and tracking of moving objects, a time series of parameters such as the position and heading of each candidate vehicle relative to the data-acquisition vehicle is extracted. These data provide step S2, visual sample image extraction, with the position and heading of each candidate vehicle relative to the data-acquisition vehicle at every moment. The vehicle detection framework based on laser scanner data and positioning data is shown in Fig. 2; the individual modules are described below.
S11, data fusion: data of the same or adjacent times from the individual laser scanners are fused. To reduce memory use while still preserving the angular information, the fused data record the range readings from the different laser sensors in sequence. Each range reading recovers its beam angle from its position in the sequence, and with the calibration parameters between the lasers it can be converted into two-dimensional coordinates relative to the data-acquisition vehicle. The fused laser data describe the contours of objects as seen from the data-acquisition vehicle.
S12, clustering: the data from each laser sensor are clustered according to the distance between adjacent points. The Euclidean distance between two points is used, with the angular interval also taken into account. If the distance exceeds a given threshold, a new cluster is started. A cluster can be regarded as one observation of an object, which may be moving or stationary; clustering is performed only within the data of the same laser sensor.
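The gap-based clustering of consecutive scan points can be sketched as follows. This simplified Python sketch uses a plain Euclidean threshold; the angular-interval adjustment mentioned above is omitted, and the threshold value is an assumption.

```python
# Cluster consecutive scan points: neighbours closer than `gap` join the
# current cluster, a larger jump starts a new one.
import math

def cluster_scan(points, gap=0.5):
    clusters = []
    for p in points:
        if clusters and math.dist(clusters[-1][-1], p) <= gap:
            clusters[-1].append(p)       # continue the current object
        else:
            clusters.append([p])         # gap too large: new object
    return clusters

pts = [(0.0, 5.0), (0.1, 5.0), (0.2, 5.1),   # object A
       (3.0, 5.0), (3.1, 5.1)]               # object B, after a large jump
print([len(c) for c in cluster_scan(pts)])   # two clusters: [3, 2]
```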
S13, labeling: the clusters are classified into three types: stationary object, moving object, or uncertain. First, using the position recorded by the GPS/IMU of the data-acquisition vehicle, each cluster is projected into the global coordinate system. A cluster is considered stationary if it matches the predicted estimate of the static environment from the previous frame, moving if it matches the predicted estimate of some moving object, and uncertain otherwise. The classification can also be supplemented with prior knowledge such as vehicle size and road geometry. The classified laser data are used in the map generation and moving-object detection modules.
S14, map generation: a map describing the static environment around the trajectory of the data-acquisition vehicle is generated. The map is represented as a grid, where the value of each cell is the probability that the cell is occupied by an object.
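A common way to maintain such per-cell occupancy probabilities is a log-odds update, sketched below. The patent only says each cell holds an occupancy probability; the log-odds scheme, grid size, resolution, and update constants here are standard-practice assumptions, not details from the patent.

```python
# Minimal occupancy grid: each cell stores a log-odds value that rises for
# laser endpoints and falls for cells a beam passed through.
import numpy as np

class OccupancyGrid:
    def __init__(self, size=100, res=0.5, l_occ=0.85, l_free=-0.4):
        self.logodds = np.zeros((size, size))
        self.res, self.l_occ, self.l_free = res, l_occ, l_free

    def cell(self, x, y):
        return int(x / self.res), int(y / self.res)

    def update(self, hit_xy, free_xys):
        i, j = self.cell(*hit_xy)
        self.logodds[i, j] += self.l_occ          # endpoint: more occupied
        for f in free_xys:                        # traversed cells: more free
            fi, fj = self.cell(*f)
            self.logodds[fi, fj] += self.l_free

    def prob(self, x, y):
        i, j = self.cell(x, y)
        return 1.0 / (1.0 + np.exp(-self.logodds[i, j]))

g = OccupancyGrid()
g.update((10.0, 10.0), [(5.0, 5.0)])
print(round(g.prob(10.0, 10.0), 2))   # endpoint cell now likely occupied
```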
S15, detection: candidate vehicles are found in the current classified, fused laser data. Partial observations and overlapping observations are the two difficult cases in detection. To improve the accuracy of the estimated parameters and reduce false detections, it is necessary to merge clustering results. In the present invention, the vehicle model is defined simply by the vehicle's bounding contour box, and algorithms for cluster merging and model estimation are developed. In addition, the predicted estimates from previous detection and tracking results help reduce errors in cluster merging, especially for clusters that belong to the same vehicle but are not contiguous.
S16, tracking: detection results are associated with previous tracking results, and the tracking state and the body and motion parameters are updated. Detection results for which no associated tracking result is found are treated as newly tracked vehicles, while tracked vehicles not associated with any detection result are considered to have left the vehicle's monitored range and are removed from the tracking results.
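The track-management logic above (match, spawn, drop) can be sketched with greedy nearest-neighbour association. The patent does not specify the association method or a gating distance; both are assumptions in this Python sketch.

```python
# Greedy nearest-neighbour data association: detections within a gating
# distance update existing tracks, unmatched detections start new tracks,
# and unmatched tracks are dropped as having left the monitored range.
import math

def associate(tracks, detections, gate=2.0):
    matched, new_tracks = {}, []
    unused = set(range(len(tracks)))
    for d_idx, d in enumerate(detections):
        best = min(unused, key=lambda t: math.dist(tracks[t], d), default=None)
        if best is not None and math.dist(tracks[best], d) <= gate:
            matched[best] = d_idx
            unused.remove(best)
        else:
            new_tracks.append(d_idx)        # no track found: start a new one
    dropped = sorted(unused)                # tracks with no detection
    return matched, new_tracks, dropped

tracks = [(0.0, 0.0), (10.0, 0.0)]
dets = [(0.5, 0.2), (30.0, 30.0)]
print(associate(tracks, dets))   # track 0 matched, det 1 new, track 1 dropped
```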
S17, validation: each tracked object is validated from its motion and shape history. If a tracked object does not move for a period of time, it is considered stationary and merged into the map information. If the motion and shape of a tracked object change irregularly within a short time, the result is discarded. Only tracking results whose motion and shape change normally are treated as candidate vehicles.
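The validation rule can be sketched as a simple check on a track's history: enough travel to count as moving, and no abrupt jump in estimated size. The travel and size-jump thresholds, and the use of length as the shape feature, are illustrative assumptions.

```python
# Validate a track from its position history and estimated-size history.
def validate_track(positions, sizes, min_travel=1.0, max_size_jump=0.5):
    travel = max(abs(p[0] - positions[0][0]) + abs(p[1] - positions[0][1])
                 for p in positions)
    if travel < min_travel:
        return "static"            # no motion: merge into the static map
    jumps = [abs(b - a) for a, b in zip(sizes, sizes[1:])]
    if jumps and max(jumps) > max_size_jump:
        return "rejected"          # irregular shape change: discard
    return "candidate"             # normal motion and shape: keep

print(validate_track([(0, 0), (2, 0), (4, 0)], [4.2, 4.3, 4.25]))
```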
As shown in Fig. 3, the visual image sample extraction step further comprises:
S2, visual sample image extraction:
The present invention discloses a visual sample image extraction method. Using the time series obtained in S1 of parameters such as each candidate vehicle's position and heading relative to the data-acquisition vehicle, and according to the position and heading of the candidate vehicle at each moment and the geometric relationship between the laser sensor and the image acquisition device, the candidate vehicle is projected into the image to produce a region of interest, and a detector is used to refine the region of interest; for each candidate vehicle, its viewing angle relative to the camera is computed from parameters such as its position and heading, image frames with similar viewing angles are removed, and sample images of the candidate vehicle under different viewing angles are extracted automatically. The visual sample image extraction framework is shown in Fig. 3; the individual modules are described below.
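The view-angle deduplication can be sketched as below: the relative viewing angle is taken as the difference between the candidate's heading and its bearing from the camera, and a frame is kept only if its angle is sufficiently far from every already-kept angle. The angle definition and the minimum separation are assumptions for illustration.

```python
# Keep only frames whose relative viewing angle differs enough from the
# angles of frames already kept.
import math

def relative_view_angle(cand_xy, cand_heading_deg):
    bearing = math.degrees(math.atan2(cand_xy[1], cand_xy[0]))
    return (cand_heading_deg - bearing) % 360.0

def dedup_by_angle(frames, min_sep=15.0):
    kept, angles = [], []
    for frame_id, xy, heading in frames:
        a = relative_view_angle(xy, heading)
        sep = lambda b: min(abs(a - b), 360.0 - abs(a - b))  # wrap-around
        if all(sep(b) >= min_sep for b in angles):
            kept.append(frame_id)
            angles.append(a)
    return kept

frames = [(0, (10.0, 0.0), 0.0),     # first view of the candidate: kept
          (1, (10.0, 1.0), 5.0),     # nearly the same view: dropped
          (2, (0.0, 10.0), 0.0)]     # clearly different view: kept
print(dedup_by_angle(frames))
```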
S21、基于激光数据的感兴趣区域提取:根据候选车辆在每一时刻的位置方向,根据激光传感器和图像采集设备之间的几何关系,将该候选车辆投射到图像中,提取包含候选车辆的感兴趣区域;S21. Region of interest extraction based on laser data: According to the position and direction of the candidate vehicle at each moment, according to the geometric relationship between the laser sensor and the image acquisition device, project the candidate vehicle into the image, and extract the sensory information containing the candidate vehicle. area of interest;
S22. Region-of-interest refinement based on image techniques: an image-based detection method is used to refine the region of interest and locate the vehicle within it;
S23. Vehicle sample image extraction and deduplication: based on the refinement results, image frame samples with similar viewing angles are removed, and sample images of the vehicle under different viewing angles are extracted automatically.
The above working process is described in detail below.
S2. Visual image sample extraction:
Using the candidate vehicles obtained from the laser data processing, the procedure for extracting vehicle sample images from the video image data is shown in Figure 3.
S21. Region-of-interest extraction based on laser data: each candidate vehicle obtained from laser data processing carries position, size, and motion information. According to the calibration relationship between the laser scanner and the camera, the candidate vehicle is projected into the image captured at the corresponding time to obtain a region of interest containing it, while the pose of the candidate vehicle within the region of interest is derived from the corresponding motion information. Because processing laser data in a dynamic traffic environment is very difficult, the regions of interest often contain errors and need to be corrected.
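The projection in S21 can be sketched as follows, assuming a laser-to-camera extrinsic calibration (R, t) and a pinhole intrinsic matrix K. Since the laser scans a single horizontal plane, the vehicle height used to lift the 2-D footprint into a 3-D box is an assumed constant; all function and parameter names are illustrative, not the patent's own.

```python
import numpy as np

def laser_roi(center, heading, length, width, R, t, K, height=1.6):
    """Project a laser candidate (a 2-D box on the ground plane) into the
    image and return an axis-aligned ROI (u0, v0, u1, v1).

    center, heading, length, width: candidate state from the laser step
    (laser frame: x forward, y left, z up; angles in radians).
    R, t: laser-to-camera extrinsic calibration; K: 3x3 camera intrinsics.
    """
    c, s = np.cos(heading), np.sin(heading)
    rot = np.array([[c, -s], [s, c]])
    # Footprint corners around the origin, rotated by heading, shifted to center.
    half = np.array([[ length / 2,  width / 2], [ length / 2, -width / 2],
                     [-length / 2, -width / 2], [-length / 2,  width / 2]])
    ground = (half @ rot.T) + np.asarray(center)           # (4, 2)
    corners = []
    for z in (0.0, height):                                # bottom and top faces
        for x, y in ground:
            corners.append([x, y, z])
    pts_cam = (R @ np.array(corners).T) + t.reshape(3, 1)  # into camera frame
    uv = K @ pts_cam                                       # pinhole projection
    uv = uv[:2] / uv[2]
    u0, v0 = uv.min(axis=1)
    u1, v1 = uv.max(axis=1)
    return int(u0), int(v0), int(u1), int(v1)
```

The returned rectangle is the region of interest handed to the image-based refinement step.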
S22. Region-of-interest refinement based on image techniques: in the present invention, a Ladybug panoramic camera system integrating six cameras monitors the surroundings of the vehicle, and the panoramic images it produces exhibit a certain amount of geometric distortion. The geometric distortion in each region of interest must be removed beforehand; projecting the pixels of the region of interest onto its tangent plane in the spherical coordinate system removes the distortion effectively.
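The tangent-plane undistortion can be sketched as an inverse gnomonic projection for an equirectangular panorama: for each pixel of the rectified output patch, compute the panorama coordinate it should sample from. This is a generic sketch under that assumption, not the Ladybug system's own rectification; names are illustrative, and the resulting maps could be fed to e.g. `cv2.remap`.

```python
import numpy as np

def tangent_plane_maps(pano_w, pano_h, yaw, pitch, fov_deg, out_size):
    """For an out_size x out_size patch looking along (yaw, pitch),
    return (map_u, map_v): the equirectangular panorama coordinates
    each output pixel samples from (float32, for use with cv2.remap)."""
    f = (out_size / 2.0) / np.tan(np.radians(fov_deg) / 2.0)
    c = (out_size - 1) / 2.0
    j, i = np.meshgrid(np.arange(out_size), np.arange(out_size))
    # Ray through each output pixel on the z = 1 tangent plane.
    x = (j - c) / f
    y = (i - c) / f
    z = np.ones_like(x)
    # Rotate rays to the viewing direction: pitch about x, then yaw about y.
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    y2 = cp * y - sp * z
    z2 = sp * y + cp * z
    x3 = cy * x + sy * z2
    z3 = -sy * x + cy * z2
    # Back to spherical angles, then to equirectangular pixel coordinates.
    lon = np.arctan2(x3, z3)
    lat = np.arcsin(y2 / np.sqrt(x3**2 + y2**2 + z3**2))
    map_u = (lon / (2 * np.pi) + 0.5) * pano_w
    map_v = (0.5 + lat / np.pi) * pano_h
    return map_u.astype(np.float32), map_v.astype(np.float32)
```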
Since the pose of the vehicle in the region of interest is known, the region can be refined according to the pose angle. A detector based on HOG features is used to refine the region of interest. Classifiers for vehicles at different viewing angles are trained on the USC multi-view vehicle dataset, with the training data categorized manually. For each category, 200 images are selected as positive samples, yielding 4 classifiers. Vehicle detection is then performed within the region of interest, and the detection with the highest score above a given threshold is selected as the sample image.
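The selection logic — score every candidate sub-window with each of the view-specific linear classifiers and keep the single highest score above the threshold — can be sketched as below. The HOG descriptors are assumed to be computed elsewhere (e.g. with `cv2.HOGDescriptor`), and the data layout is an assumption for illustration.

```python
import numpy as np

def refine_roi(windows, features, classifiers, thresh=0.5):
    """Pick the best vehicle window inside a region of interest.

    windows:     list of candidate sub-windows (u0, v0, u1, v1) in the ROI.
    features:    (n_windows, d) array of HOG descriptors, one per window.
    classifiers: list of (w, b) linear SVMs, one per trained view class
                 (the patent trains 4 view-specific classifiers).
    Returns (best_window, best_score, best_class), or None if no score
    exceeds the threshold.
    """
    best = None
    for i, x in enumerate(features):
        for k, (w, b) in enumerate(classifiers):
            score = float(np.dot(w, x) + b)   # linear SVM decision value
            if score > thresh and (best is None or score > best[1]):
                best = (windows[i], score, k)
    return best
```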
S23. Vehicle sample extraction and deduplication: while a candidate vehicle is being tracked, its appearance in the image changes very slowly, and the appearance of the same vehicle usually differs noticeably only after many frames. Moreover, vehicles on the road are often relatively stationary with respect to each other, so a vehicle's appearance may remain unchanged for a long time. Images of vehicles with identical or very similar appearance therefore need to be filtered out. The vehicle's direction of motion relative to the data collection vehicle and its heading are the factors with the greatest influence on a candidate vehicle's appearance in the image; for each vehicle, its direction of motion α and heading β relative to the data collection vehicle can be computed at every moment. These two angles are discretized into a 10°×10° angle grid, and within each grid cell the image of that vehicle with the highest score from the refinement step is extracted as a sample. Furthermore, the pose of the vehicle in the sample image is determined from the difference between the two angles, and the sample images are divided into 8 categories.
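The 10°×10° angle-grid deduplication and the 8-way pose classification can be sketched as below. The text does not specify the exact binning rule for the 8 pose categories, so the uniform 45° partition of the angle difference is an assumption, as are the field names.

```python
def dedup_samples(samples, cell_deg=10.0):
    """Per vehicle and per (alpha, beta) angle-grid cell, keep only the
    sample with the highest refinement score, then attach a coarse pose
    class derived from the angle difference.

    samples: iterable of dicts with keys 'vehicle_id', 'alpha' (motion
    direction relative to the data-collection vehicle, degrees), 'beta'
    (heading, degrees), 'score', 'image'.  Field names are illustrative.
    """
    best = {}
    for s in samples:
        key = (s['vehicle_id'],
               int(s['alpha'] % 360.0 // cell_deg),
               int(s['beta'] % 360.0 // cell_deg))
        if key not in best or s['score'] > best[key]['score']:
            best[key] = s
    out = []
    for s in best.values():
        diff = (s['beta'] - s['alpha']) % 360.0
        # 8 pose categories from the angle difference (45 deg each; the
        # exact partition is an assumption).
        out.append(dict(s, pose_class=int(diff // 45.0)))
    return out
```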
As described above, the embodiments of the present invention have been explained in detail, but it is obvious to those skilled in the art that many modifications are possible as long as they do not substantially depart from the inventive points and effects of the present invention. All such modifications are therefore included within the protection scope of the present invention.
Claims (8)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201210234127.0A CN102737236B (en) | 2012-07-06 | 2012-07-06 | Method for automatically acquiring vehicle training sample based on multi-modal sensor data |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN102737236A CN102737236A (en) | 2012-10-17 |
| CN102737236B true CN102737236B (en) | 2015-06-24 |
Family
ID=46992705
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201210234127.0A Expired - Fee Related CN102737236B (en) | 2012-07-06 | 2012-07-06 | Method for automatically acquiring vehicle training sample based on multi-modal sensor data |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN102737236B (en) |
Families Citing this family (39)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104276102B (en) * | 2013-07-10 | 2016-05-11 | 德尔福电子(苏州)有限公司 | A kind of viewing system caliberating device detecting based on vehicle location |
| CN106569840B (en) * | 2015-10-08 | 2020-10-30 | 上海智瞳通科技有限公司 | A method for automatic acquisition of samples by a machine vision driving assistance system to improve recognition accuracy |
| CN105303837A (en) * | 2015-11-24 | 2016-02-03 | 东南大学 | Method and system for detecting following behavior characteristic parameter of driver |
| CN106291736A (en) * | 2016-08-16 | 2017-01-04 | 张家港长安大学汽车工程研究院 | Pilotless automobile track dynamic disorder object detecting method |
| CN106529417A (en) * | 2016-10-17 | 2017-03-22 | 北海益生源农贸有限责任公司 | Visual and laser data integrated road detection method |
| US10678244B2 (en) | 2017-03-23 | 2020-06-09 | Tesla, Inc. | Data synthesis for autonomous control systems |
| US10097757B1 (en) * | 2017-03-24 | 2018-10-09 | Fotonation Limited | Method for determining bias in an inertial measurement unit of an image acquisition device |
| CN106969923B (en) * | 2017-05-26 | 2023-09-15 | 交通运输部公路科学研究所 | A vehicle channel circular trajectory testing system and method |
| US11893393B2 (en) | 2017-07-24 | 2024-02-06 | Tesla, Inc. | Computational array microprocessor system with hardware arbiter managing memory requests |
| US10671349B2 (en) | 2017-07-24 | 2020-06-02 | Tesla, Inc. | Accelerated mathematical engine |
| US11157441B2 (en) | 2017-07-24 | 2021-10-26 | Tesla, Inc. | Computational array microprocessor system using non-consecutive data formatting |
| US11409692B2 (en) | 2017-07-24 | 2022-08-09 | Tesla, Inc. | Vector computational unit |
| US12307350B2 (en) | 2018-01-04 | 2025-05-20 | Tesla, Inc. | Systems and methods for hardware-based pooling |
| CN108398083B (en) * | 2018-01-29 | 2021-03-16 | 湖南三德科技股份有限公司 | Carriage positioning method and positioning device |
| US11561791B2 (en) | 2018-02-01 | 2023-01-24 | Tesla, Inc. | Vector computational unit receiving data elements in parallel from a last row of a computational array |
| CN108535707B (en) * | 2018-03-30 | 2020-11-03 | 北京润科通用技术有限公司 | Radar performance prediction model establishing method and device |
| US11215999B2 (en) | 2018-06-20 | 2022-01-04 | Tesla, Inc. | Data pipeline and deep learning system for autonomous driving |
| US11361457B2 (en) | 2018-07-20 | 2022-06-14 | Tesla, Inc. | Annotation cross-labeling for autonomous control systems |
| US11636333B2 (en) | 2018-07-26 | 2023-04-25 | Tesla, Inc. | Optimizing neural network structures for embedded systems |
| US11562231B2 (en) | 2018-09-03 | 2023-01-24 | Tesla, Inc. | Neural networks for embedded devices |
| CA3115784A1 (en) | 2018-10-11 | 2020-04-16 | Matthew John COOPER | Systems and methods for training machine models with augmented data |
| CN112805200B (en) * | 2018-10-11 | 2024-10-29 | 宝马股份公司 | Snapshot image of traffic scene |
| US11196678B2 (en) | 2018-10-25 | 2021-12-07 | Tesla, Inc. | QOS manager for system on a chip communications |
| JP6928917B2 (en) * | 2018-11-06 | 2021-09-01 | パナソニックIpマネジメント株式会社 | Mobile management system, mobile, management device, control method, and program |
| US11816585B2 (en) | 2018-12-03 | 2023-11-14 | Tesla, Inc. | Machine learning models operating at different frequencies for autonomous vehicles |
| US11537811B2 (en) | 2018-12-04 | 2022-12-27 | Tesla, Inc. | Enhanced object detection for autonomous vehicles based on field view |
| US11610117B2 (en) | 2018-12-27 | 2023-03-21 | Tesla, Inc. | System and method for adapting a neural network model on a hardware platform |
| US10997461B2 (en) | 2019-02-01 | 2021-05-04 | Tesla, Inc. | Generating ground truth for machine learning from time series elements |
| US11150664B2 (en) | 2019-02-01 | 2021-10-19 | Tesla, Inc. | Predicting three-dimensional features for autonomous driving |
| US11567514B2 (en) | 2019-02-11 | 2023-01-31 | Tesla, Inc. | Autonomous and user controlled vehicle summon to a target |
| US10956755B2 (en) | 2019-02-19 | 2021-03-23 | Tesla, Inc. | Estimating object properties using visual image data |
| CN110853160B (en) * | 2019-09-29 | 2022-01-18 | 广州市凌特电子有限公司 | Multi-dimensional recognition system and method for expressway lane |
| CN111126336B (en) * | 2019-12-31 | 2023-07-21 | 潍柴动力股份有限公司 | Sample collection method, device and equipment |
| US11023730B1 (en) * | 2020-01-02 | 2021-06-01 | International Business Machines Corporation | Fine-grained visual recognition in mobile augmented reality |
| CN114511834B (en) * | 2020-11-17 | 2025-10-31 | 阿里巴巴集团控股有限公司 | Method and device for determining prompt information, electronic equipment and storage medium |
| CN113496213B (en) * | 2021-06-29 | 2024-05-28 | 中汽创智科技有限公司 | Method, device, system and storage medium for determining target perception data |
| EP4388510A1 (en) | 2021-08-19 | 2024-06-26 | Tesla, Inc. | Vision-based system training with simulated content |
| US12462575B2 (en) | 2021-08-19 | 2025-11-04 | Tesla, Inc. | Vision-based machine learning model for autonomous driving with adjustable virtual camera |
| CN119993385B (en) * | 2025-04-16 | 2025-07-11 | 深圳市丞辉威世智能科技有限公司 | A method for optimizing the operation of lower limb hip and knee rehabilitation equipment based on machine vision |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101388146A (en) * | 2008-06-16 | 2009-03-18 | 上海高德威智能交通系统有限公司 | Image acquisition and processing device and method, vehicle monitoring and recording system |
| CN102147971A (en) * | 2011-01-14 | 2011-08-10 | 赵秀江 | Traffic information acquisition system based on video image processing technology |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9520040B2 (en) * | 2008-11-21 | 2016-12-13 | Raytheon Company | System and method for real-time 3-D object tracking and alerting via networked sensors |
- 2012-07-06: CN CN201210234127.0A patent/CN102737236B/en not_active Expired - Fee Related
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101388146A (en) * | 2008-06-16 | 2009-03-18 | 上海高德威智能交通系统有限公司 | Image acquisition and processing device and method, vehicle monitoring and recording system |
| CN102147971A (en) * | 2011-01-14 | 2011-08-10 | 赵秀江 | Traffic information acquisition system based on video image processing technology |
Also Published As
| Publication number | Publication date |
|---|---|
| CN102737236A (en) | 2012-10-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN102737236B (en) | Method for automatically acquiring vehicle training sample based on multi-modal sensor data | |
| Fan et al. | Pothole detection based on disparity transformation and road surface modeling | |
| Dhiman et al. | Pothole detection using computer vision and learning | |
| CN109690623B (en) | Systems and methods for recognizing the pose of a camera in a scene | |
| Jog et al. | Pothole properties measurement through visual 2D recognition and 3D reconstruction | |
| CN106503653B (en) | Region labeling method and device and electronic equipment | |
| US20100295948A1 (en) | Method and device for camera calibration | |
| CN114761997A (en) | Target detection method, terminal device and medium | |
| Zhou et al. | Moving object detection and segmentation in urban environments from a moving platform | |
| JP2007527569A (en) | Imminent collision detection based on stereoscopic vision | |
| JP6021689B2 (en) | Vehicle specification measurement processing apparatus, vehicle specification measurement method, and program | |
| CN108027974A (en) | Image processing apparatus and image processing method | |
| US20220292747A1 (en) | Method and system for performing gtl with advanced sensor data and camera image | |
| Petrovai et al. | A stereovision based approach for detecting and tracking lane and forward obstacles on mobile devices | |
| Kruber et al. | Vehicle position estimation with aerial imagery from unmanned aerial vehicles | |
| CN111833443B (en) | Landmark Position Reconstruction in Autonomous Machine Applications | |
| CN111256651B (en) | A method and device for perimeter vehicle ranging based on monocular vehicle camera | |
| Herghelegiu et al. | Robust ground plane detection and tracking in stereo sequences using camera orientation | |
| KR20220062709A (en) | System for detecting disaster situation by clustering of spatial information based an image of a mobile device and method therefor | |
| Horani et al. | A framework for vision-based lane line detection in adverse weather conditions using vehicle-to-infrastructure (V2I) communication | |
| Philipsen et al. | Day and night-time drive analysis using stereo vision for naturalistic driving studies | |
| Lee et al. | Semi-automatic framework for traffic landmark annotation | |
| US20240257372A1 (en) | Computer program product, information processing apparatus, and information processing method | |
| JP7585147B2 (en) | Arithmetic device, speed calculation method | |
| Wang et al. | A system of automated training sample generation for visual-based car detection |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| C06 | Publication | ||
| PB01 | Publication | ||
| C10 | Entry into substantive examination | ||
| SE01 | Entry into force of request for substantive examination | ||
| C14 | Grant of patent or utility model | ||
| GR01 | Patent grant | ||
| CF01 | Termination of patent right due to non-payment of annual fee | | Granted publication date: 20150624; Termination date: 20180706 |