
CN103425764A - Vehicle matching method based on videos - Google Patents

Vehicle matching method based on videos

Info

Publication number
CN103425764A
CN103425764A · CN2013103371386A · CN201310337138A
Authority
CN
China
Prior art keywords
vehicle
search
window
video
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013103371386A
Other languages
Chinese (zh)
Other versions
CN103425764B (en)
Inventor
章云
陈泓屺
刘治
陈贞丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201310337138.6A priority Critical patent/CN103425764B/en
Publication of CN103425764A publication Critical patent/CN103425764A/en
Application granted granted Critical
Publication of CN103425764B publication Critical patent/CN103425764B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a method for quickly matching moving vehicles in video. First, a vehicle monitoring video is acquired and the moving vehicle in each image frame is segmented using the background frame difference method. Second, the vehicle position in each frame is recorded to build a vehicle position database. Third, a search window is placed in the adjacent image frame, centered on the position of the vehicle to be matched. Then the target vehicle is searched for within the window, and the window size is adjusted adaptively according to the search results until a vehicle is found, whose position is recorded. Finally, the displacement change and speed change of each candidate target are computed, the optimal target is selected, and the match is completed. The invention remedies the lack of an effective vehicle matching method after vehicle detection is completed. The matching features proposed by the method are simple to compute, of low complexity, and suitable for real-time use. The verification module of the invention improves matching accuracy. The method has broad application prospects.

Description

A video-based vehicle matching method

Technical Field

The invention relates to the technical field of traffic video detection and provides a video-based vehicle matching method, mainly used by intelligent transportation systems to extract road traffic information.

Background Art

Video-based vehicle matching refers to identifying the same vehicle at different positions across a series of image frames. Many systems, including intelligent transportation systems, need to extract road traffic flow. Many algorithms already detect moving vehicles successfully, but the detection result only gives the number of vehicles in a single static image; counting traffic flow requires matching each vehicle across frames. Improving vehicle matching methods therefore contributes to traffic intelligence. Vehicle matching also makes it possible to record a vehicle's trajectory: when unsafe behavior such as an illegal lane change occurs, the method can record it and report it to the traffic management department, improving road traffic safety.

Traditional vehicle matching methods are mainly based on the mean-shift method. This approach does not take the actual engineering application into account and does not restrict the search range, so the algorithm is computationally expensive and performs poorly in real time. In addition, the mean-shift search window has a fixed size and cannot be adjusted adaptively, which limits its practicality.

Because of the defects and shortcomings of this traditional matching method, mature traffic flow detection products have been slow to appear. Meanwhile, as intelligent transportation systems mature, video acquisition equipment becomes widespread, and the hardware cost of video-based traffic information acquisition gradually decreases, video-based vehicle matching technology has broad application prospects.

Summary of the Invention

To address the defects or deficiencies of existing vehicle matching techniques, the present invention provides a video-based vehicle matching method that can quickly and accurately match moving vehicles within the video range.

To accomplish the above task, the present invention adopts the following technical solution:

A video-based vehicle matching method, characterized by the following steps:

Step 1: Acquire the vehicle monitoring video and segment the moving vehicle in each image frame using the background frame difference method.

The vehicle monitoring video is obtained from a video acquisition device, and the moving vehicle in each image frame is segmented using the background frame difference method.

Step 2: Record the position of the vehicle in each image frame and build a vehicle position database.

For each frame, record the vehicle position, compute the vehicle's minimum bounding rectangle, find the geometric center of that rectangle, take this point as the vehicle's feature point, and store it in the database.

Step 3: Place a search window in the adjacent image frame, centered on the position of the vehicle to be matched.

Select the vehicle to be tracked. Centered on the geometric center of that vehicle's minimum bounding rectangle, place a rectangular search window on the adjacent frame; this window limits the range of the vehicle search. The size of the rectangular window is 2*T*V*α, where T is the actual time interval between adjacent frames, V is the speed limit of the road, and α is the mapping between real-world distance and the imaging plane, i.e., the number of pixels in the video corresponding to a unit of real distance.

Step 4: Search for the target vehicle within the search window, adaptively adjust the window size according to the search results until a vehicle is found, and record the vehicle position.

Within the range limited by the search window, search for vehicles in the vehicle feature point database. If no vehicle is found, automatically enlarge the window in steps of 0.2*T*V*α until a vehicle is found; the search stops when the search window exceeds the monitored area, or when the search window exceeds T*405*α. Record all vehicle feature points found and store them in a candidate set for screening.

Step 5: Compute the displacement change and speed change of each candidate target, select the optimal target, and complete the matching.

From the candidate set, compute each vehicle's displacement change ΔS and speed change ΔV, and measure the change in motion state by their sum of squares D = ΔS² + ΔV². The candidate with the smallest D is selected as the optimal target, completing the match.

Compared with the prior art, the video-based vehicle matching method of the invention can match all vehicles within the video range. The method has low computational complexity, short matching time, and high accuracy, and has broad application prospects.

Brief Description of the Drawings

Fig. 1 is a flowchart of the method of the present invention.

Fig. 2 is a flowchart of the adaptive window adjustment.

Fig. 3 is a schematic diagram of the installation of the video acquisition device.

The present invention is described in further detail below with reference to the accompanying drawings and embodiments.

Detailed Description of the Embodiments

The principle of the video-based vehicle matching method of the invention is as follows. The captured video is processed into a sequence of grayscale images of a fixed size. Over time, vehicles appear at different positions in the images. The time interval between adjacent images is small, so the displacement of a vehicle within that interval is also small; vehicles can therefore be matched by computing their displacement. Because the elapsed time is short, the direction and magnitude of a vehicle's speed also change little, so speed can likewise serve as a matching feature. During matching it was found that not all vehicles in the image need to be searched: vehicles that are far away can be excluded directly. To optimize the algorithm, the invention introduces a search window that bounds the search range. In practice, vehicles differ in size and speed. If the window is set too large, many distracting targets are found, increasing the computation in the subsequent matching step; if it is set too small, it is hard to guarantee that the vehicle will be found and the search fails. Choosing the window size is therefore a problem to be solved. The invention proposes a method for adaptively adjusting the window size that guarantees the vehicle can be found while excluding most distracting targets.

The method is implemented in the following steps:

Step 1: Segment the moving vehicles

As shown in Fig. 3, a video acquisition device installed above the traffic road is assumed to capture real-time traffic video. After background frame difference processing of the video, all moving vehicles are segmented and binarized, and the image coordinates covered by each vehicle are recorded.
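
The following Python sketch illustrates this segmentation step. It assumes OpenCV 4.x (cv2) and NumPy; the function name, difference threshold, and minimum blob area are illustrative assumptions, not values given in the patent.

```python
# Minimal sketch of background frame difference segmentation (Step 1).
import cv2
import numpy as np

def segment_vehicles(frame_gray, background_gray, diff_thresh=30, min_area=200):
    """Return a binary foreground mask and the pixel coordinates of each vehicle.
    frame_gray and background_gray are uint8 grayscale images of the same size."""
    # Absolute difference between the current frame and the background model.
    diff = cv2.absdiff(frame_gray, background_gray)
    # Binarize: pixels that differ strongly from the background are foreground.
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Extract connected foreground regions (candidate vehicles), OpenCV 4.x API.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vehicles = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:          # discard small noise blobs
            region = np.zeros_like(mask)
            cv2.drawContours(region, [c], -1, 255, thickness=cv2.FILLED)
            ys, xs = np.nonzero(region)              # pixel coordinates of this vehicle
            vehicles.append(list(zip(xs.tolist(), ys.tolist())))
    return mask, vehicles
```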

Step 2: Determine the vehicle position

The main purpose of this step is to determine the geometric center of the vehicle, which first requires its minimum bounding rectangle. Step 1 already produced the binary image of the segmented vehicle, so it is only necessary to determine the minimum bounding rectangle of the connected region in that binary image. First, collect the coordinates of the pixels covered by the vehicle into a set {(x1, y1), (x2, y2), (x3, y3), ..., (xi, yi)}, where (xi, yi) is the coordinate of the i-th pixel. The minimum bounding rectangle is obtained from the maxima and minima of x and y in this set: sort the set to find the minimum x coordinate xmin, the maximum x coordinate xmax, the minimum y coordinate ymin, and the maximum y coordinate ymax. The four corners of the minimum bounding rectangle are then (xmin, ymin), (xmin, ymax), (xmax, ymin), (xmax, ymax). From these corners the geometric center (xmid, ymid) is determined, where xmid = (xmin + xmax)/2 and ymid = (ymin + ymax)/2.
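
A minimal Python sketch of this step follows, assuming the pixel coordinates of one segmented vehicle are already available as a list of (x, y) tuples from Step 1; the function name and the per-frame database layout are illustrative.

```python
def feature_point(pixels):
    """pixels: list of (x, y) tuples covered by one vehicle (from Step 1).
    Returns the geometric center used as the feature point, plus the rectangle."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    x_min, x_max = min(xs), max(xs)      # extremes define the bounding rectangle
    y_min, y_max = min(ys), max(ys)
    # Geometric center of the minimum bounding rectangle.
    x_mid = (x_min + x_max) / 2.0
    y_mid = (y_min + y_max) / 2.0
    return (x_mid, y_mid), ((x_min, y_min), (x_max, y_max))

# Illustrative "database": position_db[frame_index] holds the feature points
# of all vehicles detected in that frame.
position_db = {}
```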

Step 3: Place the search window

Place a search window over the target to be matched in order to search for the nearest candidates around that vehicle. Centered on the geometric center of the minimum bounding rectangle, place a rectangular search window on the adjacent frame. The size of the rectangular window is 2*T*V*α, where T is the actual time interval between adjacent frames, V is the speed limit of the road, and α is the mapping between real-world distance and the imaging plane, i.e., the number of pixels on the imaging plane occupied by 1 m of real distance. In the field installation used here, the image acquisition device runs at 30 frames per second, so T = 1/108000 h; the speed limit of the road is 60 km/h; and α depends on the calibration of the image acquisition device, with α = 12.3 p/m chosen in this invention, where p/m denotes pixels per meter of real distance. After rounding, the length and width of the rectangular window are each 14 pixels.
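
As a check, plugging the values quoted above (30 fps, 60 km/h, 12.3 pixels per meter, all taken from the text) into 2*T*V*α reproduces the 14-pixel window size:

```latex
% Worked calculation of the initial window side 2 T V \alpha.
\[
2\,T\,V\,\alpha
= 2 \times \frac{1}{108000}\,\mathrm{h} \times 60\,\frac{\mathrm{km}}{\mathrm{h}}
  \times 1000\,\frac{\mathrm{m}}{\mathrm{km}} \times 12.3\,\frac{\mathrm{px}}{\mathrm{m}}
\approx 13.67\,\mathrm{px} \approx 14\,\mathrm{px}.
\]
```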

Step 4: Adaptively resize the window until a vehicle is found.

Let the geometric center of the i-th detected vehicle be (xi, yi), and let the vector storing the geometric centers of the vehicles found in the search window be Vector.(xm, ym), where Vector.xm is the stored abscissa of the m-th vehicle and Vector.ym is its ordinate. Centered on the geometric center (x0, y0) of the tracked vehicle, search within a rectangular window of length a and width b. If a moving vehicle is found inside this window, store its geometric center in Vector.(xm, ym); the vehicles stored in Vector.(xm, ym) then undergo the position- and speed-based tracking computation. If the number of vehicles in the search window is zero after the search, the length and width of the window are increased with step sizes δ and ε respectively, i.e., a = a + δ, b = b + ε, until a vehicle is found.
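
A hedged Python sketch of the adaptive search follows. It uses the 0.2*T*V*α growth step and the two stopping limits given in the text; the helper name, the square-window simplification, and the frame-size parameters are assumptions made for illustration.

```python
def adaptive_search(center, candidates, T, V, alpha, frame_w, frame_h):
    """center: (x0, y0) of the tracked vehicle in the previous frame.
    candidates: feature points (x, y) detected in the adjacent frame."""
    x0, y0 = center
    side = 2 * T * V * alpha            # initial window size (Step 3)
    step = 0.2 * T * V * alpha          # growth step when nothing is found
    limit = T * 405 * alpha             # upper limit from the text
    while True:
        half = side / 2.0
        found = [(x, y) for (x, y) in candidates
                 if abs(x - x0) <= half and abs(y - y0) <= half]
        if found:
            return found                # candidate set passed to Step 5
        side += step
        # Stop when the window leaves the monitored area or exceeds the limit.
        if side > limit or side > max(frame_w, frame_h):
            return []
```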

Step 5: Compute the displacement change and speed change of each found target, select the optimal target, and complete the matching.

Compute each candidate vehicle's displacement change ΔS and speed change ΔV, where ΔS is the Euclidean distance between the geometric centers of the two vehicles, and ΔV is the difference between the displacement changes of the same vehicle over successive frame pairs. Then compute the sum of squares D = ΔS² + ΔV². The target with the smallest sum of squares among all candidates is selected, completing the tracking process.
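
The selection of the optimal target could look like the following Python sketch, assuming the tracked vehicle's previous feature point and its previous displacement are known; names are illustrative.

```python
import math

def best_match(prev_center, prev_displacement, candidates):
    """prev_center: feature point of the tracked vehicle in the previous frame.
    prev_displacement: its displacement between the two preceding frames (dS2).
    candidates: feature points found in the search window (Step 4)."""
    best, best_d = None, float("inf")
    for (x, y) in candidates:
        # dS1: displacement between the previous frame and the current candidate.
        ds1 = math.hypot(x - prev_center[0], y - prev_center[1])
        dv = ds1 - prev_displacement     # speed change as difference of displacements
        d = ds1 ** 2 + dv ** 2           # D = dS^2 + dV^2
        if d < best_d:
            best, best_d = (x, y), d
    return best, best_d
```

Because the frame rate is fixed, both ΔS and ΔV are expressed in pixels, so summing their squares gives a consistent measure of the motion-state change.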

Claims (6)

1. A video-based vehicle matching method, characterized by the following steps:
Step 1: Acquire the vehicle monitoring video and segment the moving vehicle in each image frame using the background frame difference method.
The vehicle monitoring video is obtained from a video acquisition device, and the moving vehicle in each image frame is segmented using the background frame difference method.
Step 2: Record the position of the vehicle in each image frame and build a vehicle position database.
First record the vehicle position in each image frame and obtain the vehicle's minimum bounding rectangle; then find the geometric center of the minimum bounding rectangle and take it as the vehicle's feature point; then store the feature points of all vehicles in the database.
Step 3: Centered on the feature point of the vehicle to be matched, place a search window in the adjacent image frame.
Select the vehicle to be tracked. With the geometric center of that vehicle's minimum bounding rectangle as the window center, place a rectangular search window on the adjacent frame; this window defines the range of the vehicle search. The side length of the rectangular window is T*V*α, where T is the time interval between adjacent frames, V is the speed limit of the road, and α is the mapping between real distance in the scene and the imaging plane, i.e., the scale factor relating real distance to distance in the video.
Step 4: Search for the target vehicle within the search window, adaptively adjust the window size according to the search results until a vehicle is found, and record the vehicle position.
Within the range limited by the search window, search for vehicles in the vehicle feature point data set. If no vehicle is found, automatically enlarge the window in steps of 0.2*T*V*α until a vehicle is found; the search stops when the search window exceeds the monitored area, or when the search window exceeds T*405*α. Record all vehicle feature points found and store them in a candidate set for screening.
Step 5: Compute the displacement change and speed change of each found target, select the optimal target, and complete the matching.
Compute the displacement change ΔS and speed change ΔV of each vehicle in the candidate set, measure the change in motion state by the sum of squares D = ΔS² + ΔV², and select the target with the smallest D as the optimal match.
2. The video-based vehicle fast matching method according to claim 1, characterized in that: the vehicle feature used for matching in Step 2 is the geometric center of the vehicle's minimum outer contour.
3. The video-based vehicle fast matching method according to claim 1, characterized in that: the search window in Step 3 is rectangular, and the initial size of the rectangular window is T*V*α, where T is the actual time interval between adjacent frames, V is the speed limit of the road, and α is the mapping between real distance in the scene and the imaging plane, i.e., the scale factor relating real distance to distance in the video.
4. The video-based vehicle fast matching method according to claim 1, characterized in that: the size of the search window in Step 4 is adjusted adaptively; when no vehicle is found, the window size is increased in steps of 0.2*T*V*α, and the growth of the window has two upper limits: (1) the window exceeds the monitored area; (2) the window exceeds T*405*α.
5. The video-based vehicle fast matching method according to claim 1, characterized in that: the speed change in Step 5 is computed as the difference between the displacement changes of the vehicle over the two preceding frame pairs: ΔV = ΔS1 − ΔS2, where ΔS1 is the displacement change of the geometric center of the vehicle's minimum bounding rectangle between the previous frame and the current frame, and ΔS2 is the displacement change computed at the previous frame.
6. The video-based vehicle fast matching method according to claim 1, characterized in that: the index for measuring the change in the vehicle's motion state in Step 5 is the sum of squares of the displacement change and speed change of the geometric center of the vehicle's minimum bounding rectangle, i.e., D = ΔS² + ΔV².
CN201310337138.6A 2013-07-30 2013-07-30 Vehicle matching method based on videos Expired - Fee Related CN103425764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310337138.6A CN103425764B (en) 2013-07-30 2013-07-30 Vehicle matching method based on videos

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310337138.6A CN103425764B (en) 2013-07-30 2013-07-30 Vehicle matching method based on videos

Publications (2)

Publication Number Publication Date
CN103425764A true CN103425764A (en) 2013-12-04
CN103425764B CN103425764B (en) 2017-04-12

Family

ID=49650503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310337138.6A Expired - Fee Related CN103425764B (en) 2013-07-30 2013-07-30 Vehicle matching method based on videos

Country Status (1)

Country Link
CN (1) CN103425764B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839259A (en) * 2014-02-13 2014-06-04 西安交通大学 Optimal matching block searching method and device for image
CN106022307A (en) * 2016-06-08 2016-10-12 中国科学院自动化研究所 Remote sensing image vessel detection method based on vessel rotation rectangular space
CN108108680A (en) * 2017-12-13 2018-06-01 长安大学 A kind of front vehicle identification and distance measuring method based on binocular vision
WO2018205120A1 (en) * 2017-05-09 2018-11-15 深圳市速腾聚创科技有限公司 Target tracking method, smart device and storage medium
CN109255803A (en) * 2018-08-24 2019-01-22 长安大学 A kind of displacement calculation method for the moving target soundd out based on displacement
CN111108342A (en) * 2016-12-30 2020-05-05 迪普迈普有限公司 Visual odometry and pairwise alignment for HD map creation
CN111354191A (en) * 2020-03-09 2020-06-30 深圳大学 Lane driving condition determining method, device and equipment and storage medium
CN111523385A (en) * 2020-03-20 2020-08-11 北京航空航天大学合肥创新研究院 Stationary vehicle detection method and system based on frame difference method
CN112836631A (en) * 2021-02-01 2021-05-25 南京云计趟信息技术有限公司 Vehicle axle number determining method and device, electronic equipment and storage medium
CN116884236A (en) * 2023-06-26 2023-10-13 中关村科学城城市大脑股份有限公司 Traffic flow collection device and traffic flow collection method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100013935A1 (en) * 2006-06-14 2010-01-21 Honeywell International Inc. Multiple target tracking system incorporating merge, split and reacquisition hypotheses
CN102222346B (en) * 2011-05-23 2013-03-13 北京云加速信息技术有限公司 Vehicle detecting and tracking method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈泓屺等: "基于二次帧差背景提取的车辆追踪方法", 《广东工业大学学报》 *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103839259B (en) * 2014-02-13 2016-11-23 西安交通大学 A kind of image searches blocks and optimal matching blocks method and device
CN103839259A (en) * 2014-02-13 2014-06-04 西安交通大学 Optimal matching block searching method and device for image
CN106022307A (en) * 2016-06-08 2016-10-12 中国科学院自动化研究所 Remote sensing image vessel detection method based on vessel rotation rectangular space
CN106022307B (en) * 2016-06-08 2019-09-27 中国科学院自动化研究所 A Ship Detection Method in Remote Sensing Image Based on the Rectangular Space of Ship Rotation
CN111108342A (en) * 2016-12-30 2020-05-05 迪普迈普有限公司 Visual odometry and pairwise alignment for HD map creation
US12223593B2 (en) 2016-12-30 2025-02-11 Nvidia Corporation Detection of misalignment hotspots for high definition maps for navigating autonomous vehicles
CN111108342B (en) * 2016-12-30 2023-08-15 辉达公司 Visual range method and pair alignment for high definition map creation
WO2018205120A1 (en) * 2017-05-09 2018-11-15 深圳市速腾聚创科技有限公司 Target tracking method, smart device and storage medium
CN108108680A (en) * 2017-12-13 2018-06-01 长安大学 A kind of front vehicle identification and distance measuring method based on binocular vision
CN109255803B (en) * 2018-08-24 2022-04-12 长安大学 Displacement calculation method of moving target based on displacement heuristic
CN109255803A (en) * 2018-08-24 2019-01-22 长安大学 A kind of displacement calculation method for the moving target soundd out based on displacement
CN111354191A (en) * 2020-03-09 2020-06-30 深圳大学 Lane driving condition determining method, device and equipment and storage medium
CN111354191B (en) * 2020-03-09 2022-05-20 深圳大学 Lane driving condition determining method, device and equipment and storage medium
CN111523385A (en) * 2020-03-20 2020-08-11 北京航空航天大学合肥创新研究院 Stationary vehicle detection method and system based on frame difference method
CN112836631A (en) * 2021-02-01 2021-05-25 南京云计趟信息技术有限公司 Vehicle axle number determining method and device, electronic equipment and storage medium
CN116884236A (en) * 2023-06-26 2023-10-13 中关村科学城城市大脑股份有限公司 Traffic flow collection device and traffic flow collection method
CN116884236B (en) * 2023-06-26 2024-04-16 中关村科学城城市大脑股份有限公司 Traffic flow collection device and traffic flow collection method

Also Published As

Publication number Publication date
CN103425764B (en) 2017-04-12

Similar Documents

Publication Publication Date Title
CN103425764B (en) Vehicle matching method based on videos
CN110210389B (en) A multi-target recognition and tracking method for road traffic scenes
CN104200657B (en) A kind of traffic flow parameter acquisition method based on video and sensor
Huang et al. Lane detection based on inverse perspective transformation and Kalman filter
CN112069969B (en) Expressway monitoring video cross-mirror vehicle tracking method and system
CN101950426B (en) A vehicle relay tracking method in a multi-camera scene
CN102609934B (en) A Multi-target Segmentation and Tracking Method Based on Depth Image
CN107463890B (en) A kind of Foregut fermenters and tracking based on monocular forward sight camera
CN103226834B (en) A kind of image motion target signature point method for fast searching
CN107577996A (en) A method and system for identifying vehicle driving path deviation
CN117949942B (en) Target tracking method and system based on fusion of radar data and video data
CN104615986B (en) The method that pedestrian detection is carried out to the video image of scene changes using multi-detector
CN102393901B (en) Traffic flow information perception method based on hybrid characteristic and system thereof
CN102073852B (en) Multiple vehicle segmentation method based on optimum threshold values and random labeling method for multiple vehicles
CN104599502A (en) Method for traffic flow statistics based on video monitoring
CN106204484B (en) A Traffic Target Tracking Method Based on Optical Flow and Local Invariant Features
CN113506318A (en) A 3D object perception method in vehicle edge scene
CN108416798B (en) A method for vehicle distance estimation based on optical flow
CN109670462A (en) Continue tracking across panorama based on the aircraft of location information
CN107944354B (en) Vehicle detection method based on deep learning
CN108009494A (en) A kind of intersection wireless vehicle tracking based on unmanned plane
CN106447697A (en) Specific moving target fast tracking method based on moving platform
EP2813973B1 (en) Method and system for processing video image
CN106780564A (en) A kind of anti-interference contour tracing method based on Model Prior
WO2023108931A1 (en) Vehicle model determining method based on video-radar fusion perception

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170412

Termination date: 20190730

CF01 Termination of patent right due to non-payment of annual fee