
CN102663357A - Color characteristic-based detection algorithm for stall at parking lot - Google Patents


Info

Publication number
CN102663357A
CN102663357A (application CN2012100865534A / CN201210086553)
Authority
CN
China
Prior art keywords
image
parking stall
parking
parking space
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100865534A
Other languages
Chinese (zh)
Inventor
蒋大林
王立霞
董珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN2012100865534A priority Critical patent/CN102663357A/en
Publication of CN102663357A publication Critical patent/CN102663357A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a parking space detection algorithm based on color features. A CCD camera is installed in a large parking lot and captures parking space images in real time; a computer system reads the captured image data and preprocesses it by cropping out the space under test and smoothing the color image, then extracts color features with a background Gaussian mixture model. To decide whether the object in a space is actually a car, salient features (variance, edges, and corners) are extracted as the feature space of each image; their statistics are fed into a classifier trained on boundary samples to judge the occupancy of the space. The method is widely applicable and highly general, suits both indoor and outdoor parking environments, and offers easy installation, low cost, good real-time performance, and high detection accuracy.

Description

Parking space detection algorithm based on color features

Technical field

The invention relates to the fields of pattern recognition, statistical learning theory, and image processing, and designs and implements a general method for real-time monitoring and detection of parking space occupancy in indoor and outdoor parking lots.

Background

In recent years, with rapid economic and social development, the number of motor vehicles in Chinese cities has increased quickly while parking lot construction has lagged behind, making parking increasingly difficult. Research on parking space detection can make effective use of limited parking resources, raise the utilization rate of parking spaces, and meet parking lot requirements for efficiency, safety, and management; this actively promotes the research and development of intelligent transportation in China and the rational, efficient use of parking lots.

At present, parking space detection methods fall mainly into two groups: methods based on physical sensors and methods based on video surveillance, computer vision, and image processing. Physical-sensor methods rely on buried induction coils, ultrasonic sensors, or geomagnetic detection. They are inexpensive and insensitive to weather, but installation is troublesome: the road surface must be excavated and is damaged in the process, and because the pavement is stressed by seasons and vehicle loads, the coils are easily damaged and hard to maintain. Detection based on video surveillance, computer vision, and image processing has many advantages. First, the cameras are easy to install, replacing one does not disrupt traffic, camera positions are easy to adjust or move, and no roadwork is needed. Second, video image processing achieves strong real-time performance and high detection accuracy.

Summary of the invention

The purpose of the present invention is to propose a parking space detection method combining color features with pattern recognition. Color features are introduced into a parking lot detection system for the first time, with the aim of achieving a higher detection accuracy.

The present invention is realized by the following technical means: analyze the color distribution of parking space images, gather statistics over empty-space images, and build a background color Gaussian model that judges the class of each pixel. This alone only identifies a pixel as non-background; it cannot tell whether the object occupying the space is actually a car. Features characteristic of cars, and clearly distinct from empty spaces, are therefore also extracted (corners, edges, and variance), and an SVM classifier is trained on a large set of collected parking lot images to detect occupancy. The concrete steps of the invention are as follows:

(1) A CCD camera captures parking space video. Each camera covers 1-4 parking spaces, and its relative position and shooting angle remain fixed.

(2) Select a background image containing no car and set the frame coordinates of the parking space under test, so that image data containing only that single space can be cropped out. Denote the resulting background image of the space by I0.

(3) For each image to be tested, crop out the region of the specific space as in step (2) and apply the following preprocessing:

(a) RGB to HSI model conversion:

H = θ if B ≤ G, otherwise H = 360° − θ

θ = arccos{ (1/2)·[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^(1/2) }

S = 1 − 3·min(R, G, B)/(R + G + B)

I = (R + G + B)/3

where R, G, B are the red, green, and blue components of the RGB color model, θ is the hue reference angle, and H, S, I are the hue, saturation, and intensity components of the HSI color model.
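The conversion above can be sketched as a small helper (a hypothetical illustration, not code from the patent; the function name and the 0..1 component range are assumptions):

```python
import math

def rgb_to_hsi(r, g, b):
    """RGB (each component in 0..1) to HSI, following step (3a)."""
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # Clamp the acos argument to guard against floating-point drift.
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den)))) if den > 0 else 0.0
    h = theta if b <= g else 360.0 - theta            # hue in degrees
    total = r + g + b
    s = 1.0 - 3.0 * min(r, g, b) / total if total > 0 else 0.0
    i = total / 3.0                                   # intensity
    return h, s, i
```

Pure blue maps to a hue of 240 degrees at full saturation, while a neutral gray has saturation 0, as expected from the formulas.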

(b) Since the color information of the image is needed later, only the intensity component of the HSI space is filtered, preserving the color information. Image enhancement is applied to the I component.

(c) Conversion from the HSI color model back to the RGB model. The smoothed image is converted to RGB space for feature extraction. For the hue sector 240° ≤ H ≤ 360°, with H′ = H − 240°, the formulas are:

G = I(1 − S)

B = I[1 + S·cos H′ / cos(60° − H′)]

R = 3I − (G + B)

(the other two hue sectors are analogous), where R, G, B are the red, green, and blue components of the RGB color model, and H, S, I are the hue, saturation, and intensity components of the HSI color model.

(4) Feature extraction: analyze the image and extract features that clearly distinguish an empty space from an occupied one: color features, edge features, corner features, and variance. The methods are as follows:

(a) Parking space color feature extraction. The color information of cars is very rich, and a car may appear in different colors under different illumination; characterizing the colors of all cars would be difficult and incomplete. The color of an empty space, by contrast, is comparatively uniform: it is normally plain concrete, and even under varying illumination the observed color stays very close. In YCbCr space, the Cb and Cr components reflect the color information of the image well and are largely independent of illumination changes, so measuring the empty-space color with these components overcomes interference from illumination intensity; moreover, the RGB-to-YCbCr conversion is linear and cheap to compute. A color model is trained from empty-space samples, yielding a Gaussian mixture model in YCbCr space; CbCr values cluster well. The specific steps are as follows:

① Color space conversion (RGB to YCbCr):

Y = 0.299·R + 0.587·G + 0.114·B

Cb = −0.147·R − 0.289·G + 0.436·B

Cr = 0.615·R − 0.515·G − 0.100·B

where Y is the luminance, Cb and Cr are the chrominance components, and R, G, B are the red, green, and blue components of RGB space.
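This linear transform can be sketched directly from the coefficients given above (an illustrative helper; the 0..1 component range is an assumption):

```python
def rgb_to_ycbcr(r, g, b):
    """RGB -> YCbCr with the coefficients of step (4a-1)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luminance
    cb = -0.147 * r - 0.289 * g + 0.436 * b  # blue-difference chrominance
    cr = 0.615 * r - 0.515 * g - 0.100 * b   # red-difference chrominance
    return y, cb, cr
```

Note that achromatic pixels (R = G = B) land at Cb = Cr = 0, which is what makes the chrominance pair insensitive to illumination level, as the text argues.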

② Build a multidimensional Gaussian mixture model. Gather statistics of the modes that empty-space pixels may exhibit; each mode is modeled by a single Gaussian, and the m single-Gaussian modes, each given its own weight, form the mixture model:

p(x) = Σ_{j=1..m} a_j · N_j(x; u_j, Σ_j)

N_j(x; u_j, Σ_j) = [1 / ((2π)^(d/2) |Σ_j|^(1/2))] · exp[ −(1/2)(x − u_j)^T Σ_j^(−1) (x − u_j) ]

where a_j is the weight of mode j in the mixture, N_j(x; u_j, Σ_j) is the probability density of the j-th single Gaussian, u_j = (u_Cb, u_Cr) is the mean of the Cb and Cr pixel values for the j-th Gaussian, x = (Cb, Cr) is a sample, d = 2 is the dimension, p(x) is the class probability density of the sample, and Σ_j is the covariance matrix of the pixel in mode j.

These parameters are trained on empty-space pixels of the parking lot, sampling the distribution of empty-space pixels under as many lighting, occlusion, and shadow conditions as possible; m is taken between 3 and 5. The resulting mixture model adapts well to the environment and, exploiting the uniformity of the empty-space color, detects the foreground reliably.
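The mixture density can be sketched as follows. This is a simplified illustration under an assumed diagonal covariance per mode (the patent trains full 2x2 covariance matrices on empty-space pixels); the function name and argument layout are hypothetical:

```python
import math

def gmm_pdf(x, weights, means, variances):
    """Density of a (Cb, Cr) sample under a Gaussian mixture, step (4a-2).

    Each mode j contributes a_j * N_j(x; u_j, Sigma_j); with a diagonal
    Sigma_j = diag(v1, v2), the 2-D normal has normalizer 2*pi*sqrt(v1*v2).
    """
    total = 0.0
    for w, mu, var in zip(weights, means, variances):
        quad = sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mu, var))
        norm = 2.0 * math.pi * math.sqrt(var[0] * var[1])
        total += w * math.exp(-0.5 * quad) / norm
    return total
```

At the mean of a single standard component the density is 1/(2π), the 2-D normal's peak value.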

③ Pixel decision. For each input pixel x, check whether it matches the mixture model: the criterion compares the current pixel value against the mean of each background Gaussian. If

|x − u_j| > D·|Σ_j|

where D is an empirical value (typically 2.5), u_j is the mean of the j-th single Gaussian, and |Σ_j| is the determinant of its covariance matrix, then the pixel is considered a foreground pixel.
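A one-dimensional sketch of this decision rule (illustrative only: the names, the scalar pixel value, and treating the pixel as foreground when it fails to match every mode are assumptions layered on the text):

```python
def is_foreground(x, means, dets, D=2.5):
    """Step (4a-3): pixel value x is foreground if |x - u_j| > D * |Sigma_j|
    for every background mode j; D is the empirical factor, typically 2.5."""
    return all(abs(x - u) > D * det for u, det in zip(means, dets))
```

With one background mode of mean 100 and determinant 2, only values farther than 5 gray levels from the mean are flagged as foreground.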

④ Count the number S of non-background pixels and determine the aspect ratio LW of the minimum bounding rectangle.

(b) Improved Canny edge detection. Edge detection must, first, suppress noise effectively and, second, locate edges as precisely as possible; it should also withstand shadows and illumination changes.

① Canny edge detection. The basic procedure: first smooth the image with a Gaussian filter; then compute the gradient magnitude and direction from finite differences of the first-order partial derivatives; then apply non-maximum suppression to the gradient magnitude; finally detect and link edges with a double-threshold algorithm.

② Improved algorithm: first cluster the image colors with the mean-shift algorithm, then run Canny detection. After clustering, regions of similar color are merged and spurious edge information is filtered out, yielding accurate image edges. The gradient magnitude and direction, non-maximum suppression, and double-threshold edge linking then proceed as above.

③ Edge point density calculation:

d_E = Σ_{(i,j)} G_E(i,j) / S

where d_E is the edge point density, G_E(i,j) is 1 for edge pixels of the binary stall-region image and 0 otherwise, and S is the area of the stall.
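The formula amounts to counting 1-valued pixels in the binary edge map of the cropped stall and dividing by the stall area; a minimal sketch (the function name and nested-list edge map are illustrative assumptions):

```python
def edge_density(edge_map):
    """d_E = (number of edge pixels with value 1) / S, step (4b-3)."""
    area = sum(len(row) for row in edge_map)         # stall area S in pixels
    edges = sum(v for row in edge_map for v in row)  # edge pixels are 1, rest 0
    return edges / area if area else 0.0
```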

(c) Corner feature detection. Cars have rich corner information, whereas the corner features of an empty space are weak. A corner is defined as a point where the brightness of the two-dimensional image changes sharply, or a point of maximum value on an image edge curve. The Harris algorithm is chosen because it is simple to compute, extracts corner features that are uniform and reasonable, extracts feature points quantitatively, and is a stable operator.

① Harris principle: for each point of the grayscale image, compute the first-order derivatives in the horizontal and vertical directions and their product; feature points are the pixels whose interest value is a local maximum.

② Algorithm steps. Extracting corners with the Harris method proceeds as follows:

I. Compute the horizontal and vertical gradients of each image pixel and their product, giving the four elements of M:

M = [ I_x²      I_x·I_y ]
    [ I_x·I_y   I_y²    ]

where I_x is the horizontal gradient of the pixel, I_y the vertical gradient, I_x² = I_x × I_x, and I_y² = I_y × I_y.

II. Gaussian-filter the image to obtain a new M. The discrete two-dimensional zero-mean Gaussian is:

Gauss = exp(−(x² + y²)/(2σ²))

III. Compute the interest value R of each corresponding pixel of the original image, with coefficient k generally 0.04:

R = I_x²·I_y² − (I_x·I_y)² − k·(I_x² + I_y²)²

IV. Select local extrema: in the Harris method, feature points are the pixels whose interest value is a local maximum.

V. Set a threshold and select a certain number of corners.

③ Corner count. Count the corners in the stall: a stall containing a vehicle yields comparatively many corners, an empty stall comparatively few.
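Steps II and III combine into a per-pixel Harris response; a sketch, assuming the products Ix², Iy², and Ix·Iy have already been Gaussian-windowed into Sxx, Syy, Sxy (hypothetical names):

```python
def harris_response(sxx, syy, sxy, k=0.04):
    """R = det(M) - k * trace(M)^2 for M = [[Sxx, Sxy], [Sxy, Syy]],
    equivalently Ix^2*Iy^2 - (Ix*Iy)^2 - k*(Ix^2 + Iy^2)^2 after windowing."""
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace
```

Gradients strong in both directions give a large positive R (corner), a single strong direction gives a negative R (edge), and a flat patch gives R near zero.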

(d) Stall variance parameter. Take the absolute difference between the stall image under test I and the chosen car-free background image I0, Gs = |I − I0|, to obtain the stall-region difference image Gs, which contains only the information of the single stall. The variance of the stall is computed as:

σ0 = Σ_{(i,j)∈Gs} (Gs(i,j) − Ḡs)² / n

where σ0 is the variance of the stall region, Ḡs is the mean of the difference image Gs, and n is the total number of pixels in Gs.
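A minimal sketch of the σ0 computation over a flattened difference image (the flat pixel list and function name are assumptions for illustration):

```python
def stall_variance(diff_pixels):
    """Variance of the difference image Gs = |I - I0| over one stall,
    step (4d): mean squared deviation from the mean of Gs."""
    n = len(diff_pixels)
    mean = sum(diff_pixels) / n
    return sum((v - mean) ** 2 for v in diff_pixels) / n
```

An empty stall, where I barely differs from I0, yields a variance near zero.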

(5) SVM classifier design. 500 stall images are selected as training samples to train the classifier. The SVM is trained on the five discriminative pattern features extracted above: the region area S determined by the background Gaussian mixture model, the aspect ratio LW of the minimum bounding rectangle, the edge point density d_E, the corner count count, and the variance σ0.

① Principle of the SVM classifier. The support vector machine is grounded in the VC-dimension theory of statistical learning and the structural risk minimization principle: given limited sample information, it seeks the best trade-off between model complexity (accuracy on the particular training samples) and learning ability (error-free recognition of arbitrary samples), in order to obtain the best generalization. A preselected nonlinear transform maps the input vector into a high-dimensional feature space, in which an optimal separating hyperplane is constructed; finding the optimal hyperplane is equivalent to maximizing the margin. Using the Lagrange multiplier method with the KKT (Karush-Kuhn-Tucker) conditions, the decision function is obtained as follows.

In the linearly separable case: f(x) = sgn[ Σ_{i=1..l} α_i·y_i·⟨x_i, x⟩ + b ]

In the non-separable case, after introducing a kernel function: f(x) = sgn[ Σ_{i=1..l} α_i*·y_i·K(x_i, x) + b* ]

A sample is written (x_i, y_i), where x_i = (S, d_E, count, σ0) and y_i = 0 or 1.
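The kernelized decision function can be sketched as below. All values are illustrative (no trained model is given in the text), the labels are coded +1/-1 rather than the text's 0/1, and gamma parameterizes the RBF kernel K(u, v) = exp(-gamma * ||u - v||²):

```python
import math

def svm_decide(x, support_vectors, alphas, labels, b, gamma=0.5):
    """f(x) = sgn(sum_i alpha_i * y_i * K(x_i, x) + b), RBF kernel."""
    def rbf(u, v):
        return math.exp(-gamma * sum((ui - vi) ** 2 for ui, vi in zip(u, v)))
    score = sum(a * y * rbf(sv, x)
                for sv, a, y in zip(support_vectors, alphas, labels)) + b
    return 1 if score >= 0 else -1
```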

② Parameter selection: the penalty factor C and the kernel parameter σ (or q) are chosen by cross-validation, which tests the error rate of non-training samples at a fixed parameter value and then iteratively corrects the parameters; it is a special case of leave-one-out (LOO) error estimation. The training samples are divided into n parts; any n−1 parts serve as the training set and the remaining part as the test set, with value ranges set for C and σ. The parameter pair (C, σ) with the highest average recognition accuracy is selected; if several pairs tie, the pair with the fewest average support vectors is taken. This is because the computational complexity of the SVM is O(L·D), where L is the number of support vectors and D the feature dimension: the smaller L, the less computation. The kernel chosen here is the radial basis function (RBF) kernel.

③ Sample classification and selection of boundary samples.

I. Sample classification. Samples are crucial to classifier training. They can be divided into good samples (easily distinguished from other classes), boundary samples (lying close to another class), and poor samples (easily confused). The goal of this work is to classify the hard-to-distinguish samples correctly, so the quality of classifier training matters greatly. Good samples make the trained pattern-class regions more compact, with larger gaps between classes; poor samples tend to enlarge the trained regions until neighboring regions overlap, increasing classification error. Training on boundary samples makes each class region as large as possible while keeping adjacent regions from overlapping, giving the best classification performance.

II. Selection of boundary samples. The RBF-SVM classifier itself is used to select them: first choose parameters by cross-validation, then train the RBF-SVM with those parameters; the resulting support vector set is the desired set of boundary samples.

III. Finally, train the final SVM classifier model on the boundary samples.

(6) Target recognition: compute the feature parameters of the target stall by steps (1)-(4) above and substitute them into the classifier trained in step (5) to obtain the occupancy of the stall directly.

Compared with the prior art, the present invention has the following clear advantages and beneficial effects:

First, on the basis of thorough study of the specific environment of outdoor parking lots, the invention proposes five classes of feature parameters that fully reflect whether a stall is occupied. The background detection algorithm based on a color Gaussian mixture model, and the improved edge detection obtained by mean-shift color clustering before Canny, are the innovations of this work; each feature has its own particular strength and avoids interference from illumination, weather, and water marks on the stall. They also supply accurate, effective feature parameters to the statistical pattern recognition classifier (SVM). Second, to construct an accurate SVM classifier, the invention classifies the training samples, first uses an SVM to find the boundary samples, and finally trains the final classifier on those boundary samples; this makes each pattern-class region as large as possible without overlap between adjacent regions, maximizing classification performance. The invention effectively improves the accuracy of stall detection; experiments show the method guarantees recognition accuracy while also improving detection speed.

Brief description of the drawings

Figure 1: flow chart for computing the four feature parameters of the stall region;

Figure 2: building the Gaussian mixture model based on color features;

Figure 3: SVM concept model diagram;

Figure 4: schematic of the optimal separating hyperplane;

Figure 5: flow chart for designing and training the SVM classifier;

Figure 6: flow chart of the parking space detection method.

Detailed description of the embodiments

In the present invention a CCD camera collects stall images. Cameras are mounted at a height of generally 2-5 meters, each covering an effective scene of 1-4 stalls, with relative position and shooting angle fixed. In this example one CCD camera is used, and the captured image contains 4 stalls, as shown in Figure 6. Stall No. 1, the stall with the largest area in the image, is taken as the example. The following steps are carried out on the computer; the implementation flow is shown in Figure 3:

Step 1: select an image of stall No. 1 without a car as the background image, choosing an image of the stall with little interference; read it, smooth and denoise it, and convert it to grayscale;

Step 2: determine the frame coordinates of stall No. 1 in this background image. The four vertex coordinates of the quadrilateral stall are (352, 458), (550, 675), (490, 715), (320, 512); according to these four coordinates, crop the image data containing only the No. 1 background stall and denote it I0;

Step 3: select 500 stall images as the training sample library, 250 of which show stall No. 1 occupied by a car and the remaining 250 show it empty.

Step 4: read each stall image in the training sample library, crop out the region of stall No. 1 as in Step 2, and preprocess it as follows:

First convert from RGB color space to the HSI model using the conversion formulas and extract the H, I, and S components. Perform image enhancement in the I space while preserving the color information of the image.

Finally convert back to RGB space, in which feature extraction is convenient.

Step 5: extract and compute the four feature parameter values of the stall No. 1 image; the extraction flow is shown in Figure 1. The specific process is as follows:

(1) Color feature extraction. The background-color Gaussian mixture model has already been determined; it captures the various appearances that empty-stall pixels may take. The model has strong anti-interference ability, distinguishes background from foreground well, and is robust. The specific steps are:

I提取彩色图像的R、G、B分量。I extract the R, G, B components of the color image.

II转换到YCbCr空间下。提取Cb、Cr分量。II is converted to the YCbCr space. Extract Cb, Cr components.

III将每个车位的像素点带入到混合高斯模型中,设定阈值,进行判断。III brings the pixel points of each parking space into the mixed Gaussian model, sets the threshold, and makes judgments.

IV统计车位区域内非背景像素点的个数S及得到的最小矩形区域的长宽比LW。这些非背景区域很可能是车的像素点,根据车的面积比较大及长宽比比例为1~2.5之间这些特点,将车的其他干扰物进行区分。IV Count the number S of non-background pixels in the parking area and the aspect ratio LW of the smallest rectangular area obtained. These non-background areas are likely to be the pixels of the car. According to the characteristics of the relatively large area of the car and the aspect ratio between 1 and 2.5, other distracting objects of the car are distinguished.
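Steps I-IV can be sketched roughly as below (pure NumPy; the per-dimension 2.5-standard-deviation match rule, the component parameters and all names are illustrative assumptions — the patent only says a threshold is set):

```python
import numpy as np

def is_foreground(pixel, means, stds, match_sigma=2.5):
    """Return True if `pixel` (a feature vector, e.g. (Cb, Cr)) matches
    none of the background Gaussian components.

    means: (K, D) component means; stds: (K, D) component std deviations.
    A pixel matches a component if it lies within match_sigma standard
    deviations of the mean in every dimension.
    """
    dist = np.abs(pixel - means) / stds            # (K, D) normalized distances
    matched = np.all(dist < match_sigma, axis=1)   # per-component match
    return not np.any(matched)

def color_features(feat_img, means, stds):
    """Count non-background pixels S and compute the aspect ratio LW of the
    minimal axis-aligned rectangle enclosing them. feat_img: (H, W, D)."""
    h, w, _ = feat_img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            mask[i, j] = is_foreground(feat_img[i, j], means, stds)

    s = int(mask.sum())
    if s == 0:
        return 0, 0.0
    ys, xs = np.nonzero(mask)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1
    lw = max(height, width) / min(height, width)   # aspect ratio >= 1
    return s, lw
```

Following the heuristic of step IV, a detection would then be accepted as a car only when S is large and LW lies roughly between 1 and 2.5.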

(2) Edge-point density feature. The improved Canny edge-detection algorithm proceeds as follows:

A. First apply the mean-shift algorithm to cluster the image by color, smoothing away spurious edge information; then convert to grayscale and run Canny edge detection.

B. Use the classical derivative operators to obtain the partial derivatives of the image intensity in the two directions, and from them the gradient magnitude and direction:

|G| = √(Gx² + Gy²),  θ = arctan(Gy / Gx)

C. Non-maximum suppression: traverse the image; if a pixel's gradient magnitude is not the maximum among itself and its two neighbors along the gradient direction, set that pixel to 0, i.e. it is not an edge.

D. Detect and link edges with a double threshold. A high threshold T1 is obtained from the cumulative histogram, and a low threshold is derived from it (typically T2 = 0.4·T1). Any pixel above the high threshold is an edge; any pixel below the low threshold is not. A pixel between the two thresholds is an edge only if one of its neighbors is an edge pixel above the high threshold.

E. Compute the edge-point density:

dE = Σ(i,j) GE(i,j) / S

where dE is the edge-point density, GE(i,j) = 1 marks an edge pixel in the binary parking-space image, and S is the area of the space.
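A simplified sketch of steps B, D and E (Sobel gradients, double threshold and density; the mean-shift pre-clustering of step A and full non-maximum suppression are omitted for brevity, and the 0.9 histogram quantile used for T1 is an assumed value, not from the patent):

```python
import numpy as np

def edge_density(gray, t_high_quantile=0.9, low_ratio=0.4):
    """Simplified edge-point density: Sobel gradient magnitude, a high
    threshold T1 taken from the cumulative histogram of magnitudes, and a
    low threshold at 0.4 * T1 (step D). Pixels above T1 are edges; pixels
    between the thresholds count as edges only if an 8-neighbor is above
    T1. Returns edge pixels / area."""
    g = gray.astype(float)
    # Sobel partial derivatives, computed on interior pixels only.
    gx = ((g[1:-1, 2:] - g[1:-1, :-2]) * 2
          + (g[:-2, 2:] - g[:-2, :-2]) + (g[2:, 2:] - g[2:, :-2]))
    gy = ((g[2:, 1:-1] - g[:-2, 1:-1]) * 2
          + (g[2:, :-2] - g[:-2, :-2]) + (g[2:, 2:] - g[:-2, 2:]))
    mag = np.sqrt(gx ** 2 + gy ** 2)

    t1 = np.quantile(mag, t_high_quantile)   # high threshold from histogram
    if t1 <= 0:                              # uniform image: no edges
        return 0.0
    t2 = low_ratio * t1                      # low threshold = 0.4 * T1
    strong = mag >= t1
    weak = (mag >= t2) & ~strong

    # Link weak pixels that touch a strong pixel (one pass, 8-connectivity).
    padded = np.pad(strong, 1)
    neighbor_strong = np.zeros_like(strong)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neighbor_strong |= padded[1 + dy: padded.shape[0] - 1 + dy,
                                      1 + dx: padded.shape[1] - 1 + dx]
    edges = strong | (weak & neighbor_strong)
    return edges.sum() / gray.size           # d_E = edge points / area
```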

(3) Corner feature detection. A car produces rich corner information, whereas an empty space shows few corners. A corner is defined as a point where the brightness of a two-dimensional image changes sharply, or a point of maximal value on an edge contour of the image. The Harris algorithm is chosen here because it is simple to compute, extracts evenly and reasonably distributed corner features, allows a fixed number of feature points to be selected, and uses a stable operator.

① Principle of the Harris algorithm: for every point of the grayscale image, compute the first derivatives in the horizontal and vertical directions and their product; feature points are the pixels whose interest value is a local maximum.

② Algorithm steps. Extracting corners from an image with the Harris method proceeds as follows:

I. Compute the horizontal and vertical gradients of each pixel and their product, giving the four elements of the matrix M:

M = [ Ix²  IxIy ; IxIy  Iy² ],  where Ix² = Ix × Ix and Iy² = Iy × Iy,

Ix being the horizontal gradient of the pixel and Iy the vertical gradient.

II. Apply Gaussian filtering to the image to obtain a smoothed M.

III. Compute the interest value R of every pixel of the original image:

R = { Ix²·Iy² − (IxIy)² } − k (Ix² + Iy²)²

IV. Select local extrema: in the Harris method, feature points are the pixels whose interest value is a local maximum.

V. Set a threshold and select a certain number of corners.

③ Corner count. Count the corners in the parking space: when a vehicle is present the count is comparatively large, while an empty space yields few corners.
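The Harris steps I-V and the corner count of ③ can be sketched as below (pure NumPy; a box window stands in for the Gaussian filter of step II, the local-maximum test of step IV is simplified to thresholding, and the relative threshold 0.1 is an assumed value):

```python
import numpy as np

def harris_corner_count(gray, k=0.04, win=2, rel_thresh=0.1):
    """Count Harris corners in a grayscale image.

    Steps follow the text: image gradients, windowed sums of their
    products (the smoothed M), the response R = det(M) - k * trace(M)^2,
    and thresholding at a fraction of the maximum response."""
    g = gray.astype(float)
    iy, ix = np.gradient(g)
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def window_sum(a):
        # Sum over a (2*win+1)^2 neighborhood via an integral image.
        p = np.pad(a, win + 1)[:-1, :-1].cumsum(0).cumsum(1)
        n = 2 * win + 1
        return p[n:, n:] - p[n:, :-n] - p[:-n, n:] + p[:-n, :-n]

    sxx, syy, sxy = window_sum(ixx), window_sum(iyy), window_sum(ixy)
    r = (sxx * syy - sxy ** 2) - k * (sxx + syy) ** 2

    if r.max() <= 0:          # flat image: no corners
        return 0
    return int((r > rel_thresh * r.max()).sum())
```

On a space with a car the returned count should be clearly larger than on an empty space, which is the statistic fed to the classifier.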

(4) Parking-space variance. Let Gs = |I − I0|, where I is the current image and I0 the previously selected empty-background image; this gives the difference image Gs of the space No. 1 region. The variance of the space is then computed as

σ = Σ(i,j)∈Gs ( Gs(i,j) − Ḡs )² / n

where σ is the variance of the space region, Ḡs is the mean value of the difference image Gs, and n is the total number of pixels in Gs.
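The variance feature of step (4), reading the formula as the usual population variance of the difference image, can be sketched as:

```python
import numpy as np

def parking_space_variance(current, background):
    """Variance feature: difference image G_s = |I - I0|, then the
    variance of G_s over the n pixels of the space region."""
    gs = np.abs(current.astype(float) - background.astype(float))
    mean = gs.mean()                             # G-bar_s
    return ((gs - mean) ** 2).sum() / gs.size    # sigma = sum(G_s - mean)^2 / n
```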

Step 6: SVM classifier training. A sample is written (xi, yi), with xi = (S, LW, dE, count, σ) and yi = 0 or 1.

1. Parameter selection by cross-validation: choosing the penalty factor C and the kernel parameter σ. The method used is cross-validation: the error rate of the 500 training samples is evaluated at fixed parameter values and the parameters are then refined iteratively; this is a special case of leave-one-out (LOO) error estimation. The basic procedure is to split the training samples into 5 parts, use any 4 of them as the training set and the remaining part as the test set, and set value ranges for C and σ. The pair (C, σ) with the highest average recognition accuracy is selected; if several pairs tie for the highest accuracy, the pair with the smallest average number of support vectors is taken. This is because the computational complexity of a support vector machine is O(L·D), where L is the number of support vectors and D the dimension of the feature vector: the smaller L, the less computation is required. For the parking-lot samples the kernel chosen is the radial basis function (RBF) kernel, with C = 3 and σ = 0.001.

2. Selection of boundary samples. The RBF-SVM classifier itself is used to select the boundary samples: the parameters are first chosen by cross-validation, the RBF-SVM is then trained with those parameters, and the resulting support vector set is the desired set of boundary samples.

3. Finally, the boundary samples are used to train the final SVM classifier model.
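The 5-fold grid search of point 1 can be sketched with scikit-learn (an assumption — the patent names no library; note that SVC's gamma corresponds to 1/(2σ²) for an RBF width σ, and the grids below are illustrative):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_parking_svm(x, y, c_grid=(0.1, 1, 3, 10),
                      sigma_grid=(0.001, 0.01, 0.1, 1.0)):
    """5-fold cross-validated grid search over (C, sigma) for an RBF SVM.

    x: (n_samples, n_features) feature vectors (S, LW, d_E, count, sigma);
    y: 0/1 occupancy labels. Returns the fitted best classifier; its
    support vectors play the role of the boundary samples."""
    gamma_grid = [1.0 / (2.0 * s ** 2) for s in sigma_grid]  # sigma -> gamma
    search = GridSearchCV(
        SVC(kernel="rbf"),
        param_grid={"C": list(c_grid), "gamma": gamma_grid},
        cv=5,                      # 5-fold split, as in the text
    )
    search.fit(x, y)
    return search.best_estimator_
```

The support vectors of the returned model (`support_` on the fitted SVC) are exactly the boundary samples described in point 2.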

Step 7: Read in a parking-space image to be tested, process it according to Steps 1 to 5 above, compute the five feature parameter values of the space, and feed them in order into the SVM classifier of Step 6 to obtain the occupancy of the space directly.

Step 8: Output the recognition result for space No. 1 of the test image, with 1 meaning the space is occupied by a car and 0 meaning it is empty. To verify the accuracy and generality of the method, 500 parking-space images captured on site were used to measure the false-positive rate, the miss rate and the overall error rate; the results show that the method detects parking spaces well.

Analysis of simulation results

Table 1 gives the experimental statistics for the four parking spaces shown in Figure 6 (in that figure, 1 to 4 label spaces No. 1 to No. 4, and the black region is the background outside the spaces). The detection accuracy of the method on parking-space images is measured by the following three rates:

1. False-positive rate = (number of frames in which an empty space is judged occupied) / (total number of empty-space frames);

2. Miss rate = (number of frames in which an occupied space is judged empty) / (total number of occupied-space frames);

3. Error rate = (frames with an empty space judged occupied + frames with an occupied space judged empty) / (total number of frames);
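The three rates can be computed from per-frame labels as follows (plain Python; 1 = occupied, 0 = empty; function and key names are illustrative):

```python
def detection_rates(y_true, y_pred):
    """Compute the three evaluation rates from 0/1 ground-truth labels
    and predictions (1 = occupied, 0 = empty)."""
    pairs = list(zip(y_true, y_pred))
    n = len(pairs)
    empty = sum(1 for t, _ in pairs if t == 0)
    occupied = n - empty
    false_pos = sum(1 for t, p in pairs if t == 0 and p == 1)  # empty judged occupied
    missed = sum(1 for t, p in pairs if t == 1 and p == 0)     # occupied judged empty
    return {
        "false_positive_rate": false_pos / empty if empty else 0.0,
        "miss_rate": missed / occupied if occupied else 0.0,
        "error_rate": (false_pos + missed) / n if n else 0.0,
    }
```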

Table 1. Parking-space image test results

Space     False-positive rate   Miss rate   Error rate
No. 1     2.11%                 0.11%       1.80%
No. 2     3.51%                 0%          1.37%
No. 3     5.60%                 0%          5.58%
No. 4     13.09%                1.20%       10.09%

Claims (6)

1. A parking-space detection algorithm based on color features, characterized in that it comprises an image-processing module, a feature-extraction module and a statistical-classifier design module, and comprises the following steps:
1.1. A CCD camera is used to capture the parking-space video data; the camera's field of view covers several parking spaces, and its relative position and shooting angle remain unchanged;
1.2. A background image containing no car is selected, read in, smoothed and denoised, and converted to a grayscale image;
1.3. The border coordinates of the space to be monitored are set in the car-free background image, and the image region containing only that single space is cropped; the resulting background image of the space is denoted I0;
1.4. For every test image, the region of the space to be monitored is cropped in the same way as in step 1.3 and preprocessed: the color information of the space region is read, the color space is converted from RGB to HSI, and image filtering is applied;
1.5. Feature-extraction stage: the color features of the space are applied to the parking-space detection system; a mixture-of-Gaussians model of the background color is built; the area of the non-empty region and the aspect ratio of its minimal rectangle are obtained; the edge-point density, corner and variance features are then extracted; mathematical statistics yield the five space feature parameters: area S, aspect ratio LW, edge-point density dE, corner count count, and space variance σ;
1.6. Several test-space images are selected as training samples; the samples must be chosen reasonably and several images used for training, of which 50% are empty-space images and 50% occupied-space images; an SVM classifier is trained using the theory of statistical pattern recognition;
1.7. For the target space to be tested, the space feature parameters are computed according to steps 1.1-1.5 and substituted into the classifier determined in step 1.6, which performs target recognition directly.
2. The parking-space detection algorithm based on color features according to claim 1, characterized in that: the "several" of step 1.6 is 500 images.
3. The parking-space detection algorithm based on color features according to claim 1, characterized in that: the field of view covers 1-4 parking spaces.
4. The parking-space detection algorithm based on color features according to claim 1, characterized in that: for the edge-point density dE, color clustering is performed by mean shift, merging similar colors into one class, and Canny edge detection is then carried out in grayscale space to obtain more accurate edge information:
dE = Σ(i,j) GE(i,j) / S
where dE is the edge-point density, GE(i,j) = 1 marks an edge pixel in the binary parking-space image, and S is the area of the space.
5. The parking-space detection algorithm based on color features according to claim 1, characterized in that: the corner count count is obtained by counting the corners in the space region with the Harris corner-detection algorithm.
6. The parking-space detection algorithm based on color features according to claim 1, characterized in that: the space variance σ is computed by differencing the test image I with the selected car-free background image I0 and evaluating
σ = Σ(i,j)∈Gs ( Gs(i,j) − Ḡs )² / n
where σ is the variance of the space region, Ḡs is the mean of the difference image Gs of the space region, and n is the number of pixels in Gs.
CN2012100865534A 2012-03-28 2012-03-28 Color characteristic-based detection algorithm for stall at parking lot Pending CN102663357A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012100865534A CN102663357A (en) 2012-03-28 2012-03-28 Color characteristic-based detection algorithm for stall at parking lot


Publications (1)

Publication Number Publication Date
CN102663357A true CN102663357A (en) 2012-09-12

Family

ID=46772841


Country Status (1)

Country Link
CN (1) CN102663357A (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103544501A (en) * 2013-10-28 2014-01-29 哈尔滨商业大学 Indoor and outdoor scene classification method based on Fourier transformation
CN104574354A (en) * 2013-09-26 2015-04-29 张国飙 Parking space monitoring method based on edge detection
CN104899898A (en) * 2015-05-28 2015-09-09 华南理工大学 Multidimensional information probability model based road surface detection method
CN104916162A (en) * 2015-05-28 2015-09-16 惠州华阳通用电子有限公司 Parking stall detection method and system
CN105844959A (en) * 2016-06-13 2016-08-10 北京精英智通科技股份有限公司 Method for determining entering of vehicles to parking spaces, device, method for determining exiting of vehicles from parking spaces, and device
CN105894823A (en) * 2016-06-03 2016-08-24 北京精英智通科技股份有限公司 Parking detection method, device and system
CN106504583A (en) * 2017-01-05 2017-03-15 陕西理工学院 Cloud monitoring method, cloud server and cloud monitoring system for parking space status in parking lot
CN106845483A (en) * 2017-02-10 2017-06-13 杭州当虹科技有限公司 A kind of video high definition printed words detection method
CN107025802A (en) * 2017-05-08 2017-08-08 普宙飞行器科技(深圳)有限公司 A kind of method and unmanned plane that parking stall is found based on unmanned plane
CN107392890A (en) * 2017-06-20 2017-11-24 华南理工大学 A kind of FPC copper line surfaces oxidation defect detection method and its detecting system
CN107431762A (en) * 2015-04-14 2017-12-01 索尼公司 Image processing equipment, image processing method and image processing system
CN107886080A (en) * 2017-11-23 2018-04-06 同济大学 One kind is parked position detecting method
CN108242178A (en) * 2018-02-26 2018-07-03 北京车和家信息技术有限公司 A kind of method for detecting parking stalls, device and electronic equipment
CN108875911A (en) * 2018-05-25 2018-11-23 同济大学 One kind is parked position detecting method
CN109034211A (en) * 2018-07-04 2018-12-18 广州市捷众智能科技有限公司 A kind of parking space state detection method based on machine learning
CN109063632A (en) * 2018-07-27 2018-12-21 重庆大学 A kind of parking position Feature Selection method based on binocular vision
CN109359659A (en) * 2018-12-26 2019-02-19 哈尔滨理工大学 A classification method of automobile insurance sheets based on color features
CN110097064A (en) * 2019-05-14 2019-08-06 驭势科技(北京)有限公司 One kind building drawing method and device
CN110322680A (en) * 2018-03-29 2019-10-11 纵目科技(上海)股份有限公司 A kind of bicycle position detecting method, system, terminal and storage medium based on specified point
CN112598922A (en) * 2020-12-07 2021-04-02 安徽江淮汽车集团股份有限公司 Parking space detection method, device, equipment and storage medium
CN112614375A (en) * 2020-12-18 2021-04-06 中标慧安信息技术股份有限公司 Parking guidance method and system based on vehicle driving state
CN118675150A (en) * 2024-08-05 2024-09-20 比亚迪股份有限公司 Parking space detection method and device, computer equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101059909A (en) * 2006-04-21 2007-10-24 浙江工业大学 All-round computer vision-based electronic parking guidance system
CN101196996A (en) * 2007-12-29 2008-06-11 北京中星微电子有限公司 Image detection method and device
CN101436252A (en) * 2008-12-22 2009-05-20 北京中星微电子有限公司 Method and system for recognizing vehicle body color in vehicle video image
CN101807352A (en) * 2010-03-12 2010-08-18 北京工业大学 Method for detecting parking stalls on basis of fuzzy pattern recognition
US20110116717A1 (en) * 2009-11-17 2011-05-19 Mando Corporation Method and system for recognizing parking lot
CN102110376A (en) * 2011-02-18 2011-06-29 汤一平 Roadside parking space detection device based on computer vision
CN102289948A (en) * 2011-09-02 2011-12-21 浙江大学 Multi-characteristic fusion multi-vehicle video tracking method under highway scene



Similar Documents

Publication Publication Date Title
CN102663357A (en) Color characteristic-based detection algorithm for stall at parking lot
CN101807352B (en) Method for detecting parking stalls on basis of fuzzy pattern recognition
CN101493980B (en) Rapid video flame detection method based on multi-characteristic fusion
CN103247059B (en) A kind of remote sensing images region of interest detection method based on integer wavelet and visual signature
WO2017190574A1 (en) Fast pedestrian detection method based on aggregation channel features
CN104966085B (en) A kind of remote sensing images region of interest area detecting method based on the fusion of more notable features
CN101398894B (en) Automobile license plate automatic recognition method and implementing device thereof
CN105205489B (en) Detection method of license plate based on color and vein analyzer and machine learning
CN105005989B (en) A kind of vehicle target dividing method under weak contrast
CN108108761A (en) A kind of rapid transit signal lamp detection method based on depth characteristic study
CN105825203A (en) Ground arrowhead sign detection and identification method based on dotted pair matching and geometric structure matching
CN103218832B (en) Based on the vision significance algorithm of global color contrast and spatial distribution in image
CN101127076A (en) Human Eye State Detection Method Based on Cascade Classification and Hough Circle Transformation
CN103440491A (en) A real-time detection method of dense human flow based on color features
CN102799859A (en) Method for identifying traffic sign
CN107590492A (en) A kind of vehicle-logo location and recognition methods based on convolutional neural networks
CN103366373B (en) Multi-time-phase remote-sensing image change detection method based on fuzzy compatible chart
CN103020992A (en) Video image significance detection method based on dynamic color association
CN103955949A (en) Moving target detection method based on Mean-shift algorithm
CN107832762A (en) A kind of License Plate based on multi-feature fusion and recognition methods
CN110321855A (en) A kind of greasy weather detection prior-warning device
CN105893960A (en) Road traffic sign detecting method based on phase symmetry
CN104657724A (en) Method for detecting pedestrians in traffic videos
CN107123130A (en) Kernel correlation filtering target tracking method based on superpixel and hybrid hash
CN102004925A (en) Method for training object classification model and identification method using object classification model

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20120912