
CN104732227B - A kind of Location Method of Vehicle License Plate based on definition and luminance evaluation - Google Patents


Info

Publication number
CN104732227B
CN104732227B (application CN201510129964.0A)
Authority
CN
China
Prior art keywords
image
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510129964.0A
Other languages
Chinese (zh)
Other versions
CN104732227A (en)
Inventor
张永东
谭利
许跃生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN201510129964.0A priority Critical patent/CN104732227B/en
Publication of CN104732227A publication Critical patent/CN104732227A/en
Application granted granted Critical
Publication of CN104732227B publication Critical patent/CN104732227B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a fast license plate localization method based on sharpness and brightness evaluation, comprising the following steps: (1) performing a noise- and sharpness-based clarity evaluation on the input image; (2) applying gradient sharpening if the clarity is insufficient, or Gaussian blurring if the clarity is excessive; (3) converting the RGB image to a grayscale image; (4) evaluating the brightness of the grayscale image; (5) applying illumination normalization if the brightness is abnormal; (6) extracting the vertical edges in the image with the Scharr operator; (7) performing local adaptive thresholding; (8) filtering noise and merging the vertical edges into connected regions through morphological processing; (9) labeling the connected regions and screening them according to the characteristics of Chinese license plates to obtain the license plate region.

Description

A Fast License Plate Localization Method Based on Sharpness and Brightness Evaluation

Technical Field

The invention belongs to the field of image processing, and in particular relates to a fast license plate localization method based on sharpness and brightness evaluation.

Background Art

In intelligent transportation systems, license plate recognition based on digital images has become an indispensable component and is widely used in surveillance and alarm, traffic-violation enforcement, access control, and highway toll collection. License plate localization, i.e. locating the position of the plate in an image or video, is the first step of license plate recognition; its accuracy directly determines the recognition result, and it is also the most time-consuming step. License plate localization therefore plays an important role in intelligent transportation systems.

Existing license plate localization methods fall into three categories:

1. Edge-based methods. These are the most widely used and exploit the dense vertical edges of the plate for localization. For simple scenes such as access control and toll stations they achieve good detection rates in real time, but in complex scenes such as expressways, where the plate is blurred and the illumination is uncontrolled, the localization accuracy is low.

2. Color-based methods. These use the known plate colors to assist the edge information. They can substantially improve accuracy in scenes with a single illumination condition, but they are counterproductive under complex illumination such as at night, and they also demand a relatively high image resolution, so they suit only a small number of real-world license plate recognition systems.

3. Machine-learning-based methods. License plate samples are collected from different scenes and used to train a global model covering the situations present in the samples, which is then generalized to unseen images. The results are good when the sharpness or brightness of a new image is close to that of the training samples, but when they differ substantially the accuracy is low and real-time performance is hard to guarantee.

In summary, edge-based license plate localization is widely used in real-time automatic license plate recognition systems because of its low computational cost, high processing speed, and small memory footprint, but it has two main shortcomings: 1. it cannot adapt to the specific characteristics of the input data; 2. in complex scenes its accuracy is low and its real-time performance is poor.

Summary of the Invention

The main purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art and to provide a fast license plate localization method based on sharpness and brightness evaluation. The method effectively solves the problem that license plates are difficult or impossible to locate accurately under varying sharpness and complex illumination, thereby improving the localization accuracy.

To achieve the above purpose, the present invention adopts the following technical solution:

A fast license plate localization method based on sharpness and brightness evaluation comprises the following steps (an overall pipeline sketch follows the list):

(1) performing a noise- and sharpness-based clarity evaluation on the input image;

(2) applying gradient sharpening to images with insufficient clarity and Gaussian blurring to images with excessive clarity;

(3) converting the image processed in step (2) from RGB to grayscale;

(4) evaluating the brightness of the grayscale image, i.e. checking for both excessive and insufficient brightness;

(5) applying illumination normalization if the brightness is abnormal;

(6) extracting the vertical edges in the image with the Scharr operator;

(7) performing local adaptive thresholding;

(8) filtering noise and merging the vertical edges into connected regions through morphological processing;

(9) labeling the connected regions and screening them according to the characteristics of Chinese license plates to obtain the license plate region.
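A minimal Python/OpenCV sketch of the overall pipeline is given below for orientation. The helper functions named here are hypothetical placeholders sketched step by step in the embodiment section later in this document, not code from the patent; for brevity the sketch converts to grayscale up front, whereas the patent adjusts clarity on the RGB image before converting in step (3).

```python
import cv2

def locate_license_plate(bgr_image):
    """Nine-step pipeline; the helpers are sketched under the matching embodiment steps below."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)   # step (3), pulled forward for simplicity
    quality = clarity_score(gray)                        # step (1): noise + sharpness evaluation
    gray = adjust_clarity(gray, quality)                 # step (2): sharpen or blur
    if brightness_abnormal(gray):                        # step (4): brightness evaluation
        gray = normalize_illumination(gray)              # step (5): illumination normalization
    edges = vertical_edges(gray)                         # step (6): Scharr, x-direction
    binary = local_adaptive_threshold(edges)             # step (7): block-wise OTSU
    blobs = morphological_filter(binary)                 # step (8): open, then close
    return locate_plate_regions(blobs)                   # step (9): aspect/area/angle screening
```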

Preferably, in step (1), the clarity evaluation consists of a noise evaluation and a sharpness evaluation. After the noise evaluation value and the sharpness evaluation value of the image are obtained, the clarity evaluation value is computed as their weighted combination:

Quality = w·Noise + (1-w)·Sharpness

where w is a weight value between 0 and 1.

The noise evaluation first applies OTSU thresholding to the image to obtain a binary image, then computes the proportion of connected regions whose area is below a given threshold, and finally obtains the noise evaluation value through a linear transformation:

Noise = a·( Σ_{Area_i=0..T_area} Comp_i / Σ_{Area_i=0..∞} Comp_i ) + b

where Comp_i is the i-th region, Area_i is its area, T_area is the threshold for deciding whether a region is too small, and a and b are the parameters of the linear transformation.

The sharpness evaluation uses the point sharpness algorithm: for every pixel, the eight neighbouring pixels are subtracted from it, the weighted sum of the eight differences is computed, and the values obtained for all pixels are summed and divided by the total number of pixels:

Sharpness = ( Σ_{i=1..m×n} Σ_{k=1..8} |df/dx| ) / (m×n)

where k indexes the eight neighbourhood pixels, m and n are the height and width of the image, df is the grey-level difference, and dx is the distance increment between the pixels.

Preferably, step (2) is specifically:

(2-1) Two thresholds T_quality1 and T_quality2 are given, where T_quality1 is the threshold for deciding whether the image clarity is insufficient and T_quality2 is the threshold for deciding whether it is excessive;

(2-2) When Quality < T_quality1, gradient sharpening is applied: the image gradient information Grad is obtained with the Laplace operator and added to the original image:

f(x,y) = f(x,y) + Grad(x,y)

where f(x,y) is the original image and Grad(x,y) is the gradient information;

(2-3) When Quality > T_quality2, Gaussian blurring is applied: a normally distributed blur template is computed with the Gaussian function and convolved with the original image. The Gaussian function and the Gaussian blur are:

Gauss(a,b) = (1/(2πσ²))·exp( −((a−m/2)² + (b−n/2)²) / (2σ²) )

f(x,y) = f(x,y) * Gauss(a,b)

where σ is the standard deviation of the normal distribution, m and n are the template height and width, a and b are the template element coordinates, and f(x,y) is the original image.

Preferably, in step (3), the RGB-to-grayscale conversion formula is:

Gray(x,y) = 0.299R(x,y) + 0.587G(x,y) + 0.114B(x,y);

or Gray(x,y) = (38R(x,y) + 75G(x,y) + 15B(x,y)) >> 7;

where R, G and B are the values of the red, green and blue channels of pixel (x,y) and Gray is the corresponding grey value.

Preferably, in step (4), the brightness of the grayscale image is evaluated as follows:

compute the sum of the proportions of pixels whose value is below the threshold T_bright1 or above the threshold T_bright2 over all pixels; if this sum exceeds a given threshold T_bright3, the brightness is considered abnormal.

Preferably, in step (5), the illumination normalization is performed as follows:

(5-1) First enhance the image brightness with Gamma correction:

g(x,y) = c[f(x,y)]^γ

where f(x,y) is the input image, g(x,y) is the output image, c is an arbitrary value used to adjust the image contrast, and γ, between 0 and 1, adjusts the image brightness;

(5-2) Then apply difference-of-Gaussian filtering to remove the high-frequency component of the illumination, i.e. the non-uniform part:

g(x,y) = f(x,y)*Gauss(σ_2) − f(x,y)*Gauss(σ_1)

where f(x,y) is the input image, g(x,y) is the output image, and Gauss(σ_n) is a Gaussian function with variance σ_n (n = 1, 2);

(5-3) Finally apply contrast equalization to enhance the contrast of the image illumination, removing the non-uniform illumination while preserving the useful details of the image as far as possible and enhancing the image contrast:

g_1(x,y) = f(x,y) / ( mean(|f(x',y')|^α) )^(1/α)

g_2(x,y) = g_1(x,y) / ( mean( min(τ, |g_1(x',y')|)^α ) )^(1/α)

g_3(x,y) = tanh( g_2(x,y) / τ )

where f(x,y) is the input image, g_n(x,y) (n = 1, 2, 3) are the output images, α is a compressive exponent used to reduce the influence of large pixel values, and τ truncates large pixel values in the image after the first normalization step.

Preferably, in step (6), the vertical edges are extracted with the Scharr operator as follows. The Scharr templates are:

Grad_x =
  [ -3    0   +3 ]
  [ -10   0  +10 ]
  [ -3    0   +3 ]

Grad_y =
  [ -3  -10   -3 ]
  [  0    0    0 ]
  [ +3  +10   +3 ]

The image is convolved with Grad_x to obtain its vertical edges:

f(x,y) = f(x,y) * Grad_x

where f(x,y) is the original image.

Preferably, in step (7), the local adaptive thresholding is performed as follows:

(7-1) Divide the image into m×n equally sized small regions;

(7-2) Apply OTSU thresholding within each small region and record the two-dimensional matrix formed by all the thresholds as Mat_T0;

(7-3) Apply a 3×3 Gaussian filter to Mat_T0 to obtain Mat_T1, and then binarize each region of the image according to Mat_T1.

Preferably, step (8) is specifically:

(8-1) Filter noise from the image with a morphological opening, i.e. construct an m1×n1 template and apply erosion followed by dilation;

(8-2) Merge the vertical edges in the image into connected regions with a morphological closing, i.e. construct an m2×n2 template and apply dilation followed by erosion;

Here erosion erodes the boundary of an object: for every pixel (x,y) in the image, the template is centred on (x,y), all the other pixels covered by the template are traversed, and the value of pixel (x,y) is set to the minimum value among them. Dilation is the opposite of erosion and expands the contour of an object: for every pixel (x,y) in the image, the template is centred on (x,y), all the other pixels covered by the template are traversed, and the values of all pixels within the template are set to the maximum value among them.

Preferably, step (9) is specifically:

(9-1) Label the connected regions and compute the minimum bounding rectangle of each;

(9-2) According to the aspect ratio ASPECT and area AREA of Chinese license plates, and given a deviation rate e1 for the aspect ratio and e2 for the area, test the aspect ratio aspect and area area of the minimum bounding rectangle of each connected region:

|aspect − ASPECT| ≤ e1·ASPECT

|area − AREA| ≤ e2·AREA

Connected regions satisfying both conditions proceed to step (9-3); otherwise they are non-plate regions;

(9-3) In general the tilt angle of a license plate in the image is small, so given a tilt-angle threshold T_angle, test the tilt angle angle of each connected region:

|angle − 0| ≤ T_angle

A connected region satisfying this condition is considered a license plate region; otherwise it is a non-plate region.

Compared with the prior art, the present invention has the following advantages and beneficial effects:

1. The present invention uses the Scharr operator instead of the Sobel operator for edge detection, which detects the vertical edges of the license plate more accurately while maintaining the same high efficiency.

2. The present invention introduces a clarity evaluation module that adaptively adjusts input images of different clarity, which benefits the subsequent processing and considerably improves the localization accuracy under different clarity levels.

3. The present invention introduces a brightness evaluation module that applies illumination normalization to input images with abnormal brightness, which benefits the subsequent processing and considerably improves the localization accuracy in complex illumination environments.

Brief Description of the Drawings

Fig. 1 is the overall flowchart of the fast license plate localization method based on sharpness and brightness evaluation of the present invention.

Detailed Description of the Embodiments

The present invention is described in further detail below in conjunction with the embodiment and the accompanying drawing, but the embodiments of the present invention are not limited thereto.

Embodiment

As shown in Fig. 1, this embodiment provides a fast license plate localization method based on sharpness and brightness evaluation, which comprises the following steps:

1. Perform a noise- and sharpness-based clarity evaluation on the input image:

① The noise evaluation first applies OTSU thresholding to the image to obtain a binary image, then computes the proportion of connected regions whose area is below a given threshold, and finally obtains the noise evaluation value through a linear transformation:

Noise = a·( Σ_{Area_i=0..T_area} Comp_i / Σ_{Area_i=0..∞} Comp_i ) + b

where Comp_i is the i-th region, Area_i is its area, T_area is the threshold for deciding whether a region is too small, and a and b are the parameters of the linear transformation.

② The sharpness evaluation uses the point sharpness algorithm: for every pixel, the eight neighbouring pixels are subtracted from it and the weighted sum of the eight differences is computed (the weight depends on the distance: the farther the neighbour, the smaller the weight, so the differences in the 45° and 135° directions are multiplied by a correspondingly smaller weight). The values obtained for all pixels are then summed and divided by the total number of pixels:

Sharpness = ( Σ_{i=1..m×n} Σ_{k=1..8} |df/dx| ) / (m×n)

where k indexes the eight neighbourhood pixels, m and n are the height and width of the image, df is the grey-level difference, and dx is the distance increment between the pixels.

③ The clarity evaluation consists of the noise evaluation and the sharpness evaluation. After the noise evaluation value and the sharpness evaluation value of the image are obtained, the clarity evaluation value is computed as their weighted combination:

Quality = w·Noise + (1-w)·Sharpness

where w is a weight value between 0 and 1.
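A minimal Python/OpenCV sketch of this clarity evaluation follows. The diagonal weighting, the area threshold t_area, the linear-transformation parameters a and b, and the weight w are illustrative assumptions, not values fixed by the patent.

```python
import cv2
import numpy as np

def noise_score(gray, t_area=20, a=1.0, b=0.0):
    """Noise evaluation: share of small connected components in the OTSU binary image."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]                 # skip the background label
    if len(areas) == 0:
        return b
    return a * np.count_nonzero(areas < t_area) / len(areas) + b

def sharpness_score(gray):
    """Point sharpness: mean of the distance-weighted absolute differences to the 8 neighbours."""
    g = gray.astype(np.float32)
    total = np.zeros_like(g)
    # axis-aligned neighbours at distance 1, diagonal neighbours at distance sqrt(2)
    shifts = [(-1, 0, 1.0), (1, 0, 1.0), (0, -1, 1.0), (0, 1, 1.0),
              (-1, -1, np.sqrt(2)), (-1, 1, np.sqrt(2)),
              (1, -1, np.sqrt(2)), (1, 1, np.sqrt(2))]
    for dy, dx, dist in shifts:
        total += np.abs(np.roll(g, (dy, dx), axis=(0, 1)) - g) / dist   # |df/dx|
    return float(total.mean())                          # divide by m*n

def clarity_score(gray, w=0.5):
    """Quality = w*Noise + (1-w)*Sharpness."""
    return w * noise_score(gray) + (1 - w) * sharpness_score(gray)
```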

2. Apply gradient sharpening to images with insufficient clarity and Gaussian blurring to images with excessive clarity:

① Two thresholds T_quality1 and T_quality2 are given, where T_quality1 is the threshold for deciding whether the image clarity is insufficient and T_quality2 is the threshold for deciding whether it is excessive.

② When Quality < T_quality1, gradient sharpening is applied: the image gradient information Grad is obtained with the Laplace operator and added to the original image:

f(x,y) = f(x,y) + Grad(x,y)

where f(x,y) is the original image and Grad(x,y) is the gradient information.

③ When Quality > T_quality2, Gaussian blurring is applied: a normally distributed blur template is computed with the Gaussian function and convolved with the original image. The Gaussian function and the Gaussian blur are:

Gauss(a,b) = (1/(2πσ²))·exp( −((a−m/2)² + (b−n/2)²) / (2σ²) )

f(x,y) = f(x,y) * Gauss(a,b)

where σ is the standard deviation of the normal distribution, m and n are the template height and width, a and b are the template element coordinates, and f(x,y) is the original image.
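A minimal sketch of this clarity adjustment, assuming illustrative thresholds and kernel parameters (the patent does not fix their values). It follows the patent's f + Grad formulation; depending on the sign convention of the Laplacian kernel, the gradient term may need to be negated in practice.

```python
import cv2
import numpy as np

def adjust_clarity(gray, quality, t_quality1=10.0, t_quality2=30.0):
    """Sharpen blurry images with the Laplacian, blur overly sharp ones with a Gaussian."""
    if quality < t_quality1:
        grad = cv2.Laplacian(gray, cv2.CV_16S, ksize=3)      # gradient information Grad
        sharpened = gray.astype(np.int16) + grad             # f(x,y) + Grad(x,y)
        return np.clip(sharpened, 0, 255).astype(np.uint8)
    if quality > t_quality2:
        return cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)    # f(x,y) * Gauss(a,b)
    return gray
```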

3. Convert the RGB image to a grayscale image using:

Gray(x,y) = 0.299R(x,y) + 0.587G(x,y) + 0.114B(x,y)

or Gray(x,y) = (38R(x,y) + 75G(x,y) + 15B(x,y)) >> 7

where R, G and B are the values of the red, green and blue channels of pixel (x,y) and Gray is the corresponding grey value.
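A short sketch of the integer-only variant of the conversion (OpenCV's cv2.cvtColor implements the floating-point weights directly); the input is assumed to be in R, G, B channel order.

```python
import numpy as np

def rgb_to_gray_shift(rgb):
    """Gray = (38*R + 75*G + 15*B) >> 7, the fixed-point form of the 0.299/0.587/0.114 weights."""
    r = rgb[..., 0].astype(np.uint16)
    g = rgb[..., 1].astype(np.uint16)
    b = rgb[..., 2].astype(np.uint16)
    return ((38 * r + 75 * g + 15 * b) >> 7).astype(np.uint8)
```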

4. Evaluate the brightness of the grayscale image:

Both excessive and insufficient brightness are checked: compute the sum of the proportions of pixels whose value is below the threshold T_bright1 or above the threshold T_bright2 over all pixels; if this sum exceeds a given threshold T_bright3, the brightness is considered abnormal.
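A minimal sketch of the brightness check; the three threshold values are illustrative assumptions.

```python
import numpy as np

def brightness_abnormal(gray, t_bright1=40, t_bright2=215, t_bright3=0.5):
    """Brightness is abnormal if too many pixels are very dark or very bright."""
    ratio = (np.count_nonzero(gray < t_bright1) +
             np.count_nonzero(gray > t_bright2)) / gray.size
    return ratio > t_bright3
```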

5. If the brightness is abnormal, apply illumination normalization:

① First enhance the image brightness with Gamma correction:

g(x,y) = c[f(x,y)]^γ

where f(x,y) is the input image, g(x,y) is the output image, c is an arbitrary value used to adjust the image contrast, and γ, between 0 and 1, adjusts the image brightness;

② Then apply difference-of-Gaussian (DoG) filtering to remove the high-frequency component of the illumination, i.e. the non-uniform part:

g(x,y) = f(x,y)*Gauss(σ_2) − f(x,y)*Gauss(σ_1)

where f(x,y) is the input image, g(x,y) is the output image, and Gauss(σ_n) is a Gaussian function with variance σ_n (n = 1, 2);

③ Finally apply contrast equalization to enhance the contrast of the image illumination, removing the non-uniform illumination while preserving the useful details of the image as far as possible and enhancing the image contrast:

g_1(x,y) = f(x,y) / ( mean(|f(x',y')|^α) )^(1/α)

g_2(x,y) = g_1(x,y) / ( mean( min(τ, |g_1(x',y')|)^α ) )^(1/α)

g_3(x,y) = tanh( g_2(x,y) / τ )

where f(x,y) is the input image, g_n(x,y) (n = 1, 2, 3) are the output images, α is a compressive exponent used to reduce the influence of large pixel values, and τ truncates large pixel values in the image after the first normalization step;
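A minimal sketch of this three-stage normalization. The values of γ, σ1, σ2, α and τ are illustrative assumptions, c is taken as 1, and the final min-max rescaling back to 8 bits is an implementation choice rather than part of the patent's formulas.

```python
import cv2
import numpy as np

def normalize_illumination(gray, gamma=0.5, sigma1=1.0, sigma2=2.0, alpha=0.1, tau=10.0):
    """Gamma correction, then DoG filtering, then two-stage contrast equalization."""
    f = gray.astype(np.float32) / 255.0
    f = np.power(f, gamma)                                     # g = c * f^gamma with c = 1
    blur1 = cv2.GaussianBlur(f, (0, 0), sigma1)
    blur2 = cv2.GaussianBlur(f, (0, 0), sigma2)
    g = blur2 - blur1                                          # f*Gauss(sigma2) - f*Gauss(sigma1)
    g1 = g / np.power(np.mean(np.abs(g) ** alpha), 1.0 / alpha)
    g2 = g1 / np.power(np.mean(np.minimum(tau, np.abs(g1)) ** alpha), 1.0 / alpha)
    g3 = np.tanh(g2 / tau)                                     # tanh(g2 / tau), per the formula above
    return cv2.normalize(g3, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```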

6. Extract the vertical edges in the image with the Scharr operator:

The Sobel operator is commonly used to extract edge information, but its 3×3 convolution template is often not very accurate. An improved version, the Scharr operator, runs as fast as the Sobel operator while giving more accurate results. The Scharr templates are:

Grad_x =
  [ -3    0   +3 ]
  [ -10   0  +10 ]
  [ -3    0   +3 ]

Grad_y =
  [ -3  -10   -3 ]
  [  0    0    0 ]
  [ +3  +10   +3 ]

Since horizontal edges are generally unhelpful, and even harmful, for license plate localization, only Grad_x is used to convolve the image and obtain its vertical edges:

f(x,y) = f(x,y) * Grad_x

where f(x,y) is the original image.
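A short sketch of the vertical-edge extraction. cv2.Scharr with an x-derivative applies the Grad_x template above; the explicit-kernel variant is shown for comparison.

```python
import cv2
import numpy as np

def vertical_edges(gray):
    """Apply the Scharr x-derivative (Grad_x) to keep only vertical edges."""
    edges = cv2.Scharr(gray, cv2.CV_16S, 1, 0)     # dx=1, dy=0
    return cv2.convertScaleAbs(edges)

def vertical_edges_explicit(gray):
    """Same operation with the Grad_x kernel written out."""
    grad_x = np.array([[-3, 0, 3],
                       [-10, 0, 10],
                       [-3, 0, 3]], dtype=np.float32)
    edges = cv2.filter2D(gray.astype(np.float32), -1, grad_x)
    return cv2.convertScaleAbs(edges)
```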

7. Perform local adaptive thresholding:

① Divide the image into m×n equally sized small regions;

② Apply OTSU thresholding within each small region and record the two-dimensional matrix formed by all the thresholds as Mat_T0;

③ Apply a 3×3 Gaussian filter to Mat_T0 to obtain Mat_T1, and then binarize each region of the image according to Mat_T1;
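A minimal sketch of this block-wise OTSU thresholding; the block grid size is an illustrative assumption.

```python
import cv2
import numpy as np

def local_adaptive_threshold(gray, blocks_y=8, blocks_x=8):
    """Per-block OTSU thresholds, smoothed with a 3x3 Gaussian, then applied block by block."""
    h, w = gray.shape
    ys = np.linspace(0, h, blocks_y + 1, dtype=int)
    xs = np.linspace(0, w, blocks_x + 1, dtype=int)

    mat_t0 = np.zeros((blocks_y, blocks_x), dtype=np.float32)
    for i in range(blocks_y):
        for j in range(blocks_x):
            block = gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            t, _ = cv2.threshold(block, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            mat_t0[i, j] = t                            # Mat_T0: one OTSU threshold per block

    mat_t1 = cv2.GaussianBlur(mat_t0, (3, 3), 0)        # Mat_T1: smoothed threshold matrix

    binary = np.zeros_like(gray)
    for i in range(blocks_y):
        for j in range(blocks_x):
            block = gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            binary[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = (block > mat_t1[i, j]) * 255
    return binary
```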

8. Filter noise and merge the vertical edges into connected regions through morphological processing:

① Filter noise from the image with a morphological opening, i.e. construct an m1×n1 template and apply erosion followed by dilation;

② Merge the vertical edges in the image into connected regions with a morphological closing, i.e. construct an m2×n2 template and apply dilation followed by erosion;

Here erosion erodes the boundary of an object: for every pixel (x,y) in the image, the template is centred on (x,y), all the other pixels covered by the template are traversed, and the value of pixel (x,y) is set to the minimum value among them. Dilation is the opposite of erosion and expands the contour of an object: for every pixel (x,y) in the image, the template is centred on (x,y), all the other pixels covered by the template are traversed, and the values of all pixels within the template are set to the maximum value among them.
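A short sketch of the open/close pair. The template sizes m1×n1 and m2×n2 are illustrative assumptions; a wide, flat closing kernel is what fuses the dense vertical strokes of a plate into one blob.

```python
import cv2

def morphological_filter(binary, open_size=(3, 3), close_size=(17, 5)):
    """Opening (erode then dilate) removes speckle noise; closing (dilate then erode) fuses vertical edges."""
    open_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, open_size)
    close_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, close_size)
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, open_kernel)
    return cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, close_kernel)
```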

9. Label the connected regions and screen them according to the characteristics of Chinese license plates to obtain the license plate region:

① Label the connected regions and compute the minimum bounding rectangle of each;

② According to the aspect ratio ASPECT and area AREA of Chinese license plates, and given a deviation rate e1 for the aspect ratio and e2 for the area, test the aspect ratio aspect and area area of the minimum bounding rectangle of each connected region:

|aspect − ASPECT| ≤ e1·ASPECT

|area − AREA| ≤ e2·AREA

Connected regions satisfying both conditions proceed to step ③; otherwise they are non-plate regions.

③ In general the tilt angle of a license plate in the image is small, so given a tilt-angle threshold T_angle, test the tilt angle angle of each connected region:

|angle − 0| ≤ T_angle

A connected region satisfying this condition is considered a license plate region; otherwise it is a non-plate region.
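A minimal sketch of this final screening step. The nominal 440:140 aspect ratio of a standard Chinese plate is used as ASPECT; the reference area, the deviation rates and the angle threshold are illustrative assumptions, and the tilt is measured relative to the nearest image axis to stay robust to OpenCV's minAreaRect angle conventions.

```python
import cv2

def locate_plate_regions(binary, aspect_ref=440.0 / 140.0, area_ref=4000.0,
                         e1=0.3, e2=0.6, t_angle=15.0):
    """Keep minimum bounding rectangles whose aspect ratio, area and tilt match a Chinese plate."""
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x return signature
    candidates = []
    for contour in contours:
        (cx, cy), (w, h), angle = cv2.minAreaRect(contour)
        if min(w, h) <= 0:
            continue
        aspect = max(w, h) / min(w, h)
        area = w * h
        tilt = min(angle % 90.0, 90.0 - angle % 90.0)         # tilt relative to the image axes
        if abs(aspect - aspect_ref) > e1 * aspect_ref:
            continue                                          # |aspect - ASPECT| <= e1*ASPECT
        if abs(area - area_ref) > e2 * area_ref:
            continue                                          # |area - AREA| <= e2*AREA
        if tilt > t_angle:
            continue                                          # |angle - 0| <= T_angle
        candidates.append(((cx, cy), (w, h), angle))
    return candidates
```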

The above embodiment is a preferred implementation of the present invention, but the implementation of the present invention is not limited to it. Any other change, modification, substitution, combination or simplification made without departing from the spirit and principle of the present invention shall be an equivalent replacement and falls within the protection scope of the present invention.

Claims (9)

1. A fast license plate localization method based on sharpness and brightness evaluation, characterized in that it comprises the following steps:

(1) performing a noise- and sharpness-based clarity evaluation on the input image; the clarity evaluation consists of a noise evaluation and a sharpness evaluation, and after the noise evaluation value and the sharpness evaluation value of the image are obtained, the clarity evaluation value is computed as their weighted combination:

Quality = w·Noise + (1-w)·Sharpness

where w is a weight value between 0 and 1;

the noise evaluation first applies OTSU thresholding to the image to obtain a binary image, then computes the proportion of connected regions whose area is below a given threshold, and finally obtains the noise evaluation value through a linear transformation:

Noise = L_1·( Σ_{Area_i=0..T_area} Comp_i / Σ_{Area_i=0..∞} Comp_i ) + L_2

where Comp_i is the i-th region, Area_i is its area, T_area is the threshold for deciding whether a region is too small, and L_1 and L_2 are the parameters of the linear transformation;

the sharpness evaluation uses the point sharpness algorithm: for every pixel, the eight neighbouring pixels are subtracted from it, the weighted sum of the eight differences is computed, and the values obtained for all pixels are summed and divided by the total number of pixels:

Sharpness = ( Σ_{i=1..m×n} Σ_{k=1..8} |df/dx| ) / (m×n)

where k indexes the eight neighbourhood pixels, m and n are the height and width of the image, df is the grey-level difference, and dx is the distance increment between the pixels;

(2) applying gradient sharpening to images with insufficient clarity and Gaussian blurring to images with excessive clarity;

(3) converting the image processed in step (2) from RGB to grayscale;

(4) evaluating the brightness of the grayscale image, i.e. checking for both excessive and insufficient brightness;

(5) applying illumination normalization if the brightness is abnormal;

(6) extracting the vertical edges in the image with the Scharr operator;

(7) performing local adaptive thresholding;

(8) filtering noise and merging the vertical edges into connected regions through morphological processing;

(9) labeling the connected regions and screening them according to the characteristics of Chinese license plates to obtain the license plate region.

2. The fast license plate localization method based on sharpness and brightness evaluation according to claim 1, characterized in that step (2) is specifically:

(2-1) two thresholds T_quality1 and T_quality2 are given, where T_quality1 is the threshold for deciding whether the image clarity is insufficient and T_quality2 is the threshold for deciding whether it is excessive;

(2-2) when Quality < T_quality1, gradient sharpening is applied: the image gradient information Grad is obtained with the Laplace operator and added to the original image:

f1(x,y) = f(x,y) + Grad(x,y)

where f(x,y) is the original image, f1(x,y) is the image obtained after sharpening, and Grad(x,y) is the gradient information;

(2-3) when Quality > T_quality2, Gaussian blurring is applied: a normally distributed blur template is computed with the Gaussian function and convolved with the original image:

Gauss(a,b) = (1/(2πσ²))·exp( −((a−m/2)² + (b−n/2)²) / (2σ²) )

f2(x,y) = f(x,y) * Gauss(a,b)

where σ is the standard deviation of the normal distribution, m and n are the template height and width, a and b are the template element coordinates, f(x,y) is the original image, and f2(x,y) is the image obtained after Gaussian blurring.

3. The fast license plate localization method based on sharpness and brightness evaluation according to claim 1, characterized in that in step (3) the RGB-to-grayscale conversion formula is:

Gray(x,y) = 0.299R(x,y) + 0.587G(x,y) + 0.114B(x,y);

or Gray(x,y) = (38R(x,y) + 75G(x,y) + 15B(x,y)) >> 7;

where R, G and B are the values of the red, green and blue channels of pixel (x,y) and Gray is the corresponding grey value.

4. The fast license plate localization method based on sharpness and brightness evaluation according to claim 1, characterized in that in step (4) the brightness of the grayscale image is evaluated as follows: compute the sum of the proportions of pixels whose value is below the threshold T_bright1 or above the threshold T_bright2 over all pixels; if this sum exceeds a given threshold T_bright3, the brightness is considered abnormal.

5. The fast license plate localization method based on sharpness and brightness evaluation according to claim 1, characterized in that in step (5) the illumination normalization is performed as follows:

(5-1) first enhance the image brightness with Gamma correction:

g(x,y) = c[f(x,y)]^γ

where f(x,y) is the input image, g(x,y) is the output image, c is an arbitrary value used to adjust the image contrast, and γ, between 0 and 1, adjusts the image brightness;

(5-2) then apply difference-of-Gaussian filtering to remove the high-frequency component of the illumination, i.e. the non-uniform part:

g(x,y) = f(x,y)*Gauss(σ_2) − f(x,y)*Gauss(σ_1)

where f(x,y) is the input image, g(x,y) is the output image, and Gauss(σ_n) is a Gaussian function with variance σ_n (n = 1, 2);

(5-3) finally apply contrast equalization to enhance the contrast of the image illumination, removing the non-uniform illumination while preserving the useful details of the image as far as possible and enhancing the image contrast:

g_1(x,y) = f(x,y) / ( mean(|f(x',y')|^α) )^(1/α)

g_2(x,y) = g_1(x,y) / ( mean( min(τ, |g_1(x',y')|)^α ) )^(1/α)

g_3(x,y) = tanh( g_2(x,y) / τ )

where f(x,y) is the input image, g_n(x,y) (n = 1, 2, 3) are the output images, α is a compressive exponent used to reduce the influence of large pixel values, and τ truncates large pixel values in the image after the first normalization step.

6. The fast license plate localization method based on sharpness and brightness evaluation according to claim 1, characterized in that in step (6) the vertical edges are extracted with the Scharr operator as follows; the Scharr templates are:

Grad_x =
  [ -3    0   +3 ]
  [ -10   0  +10 ]
  [ -3    0   +3 ]

Grad_y =
  [ -3  -10   -3 ]
  [  0    0    0 ]
  [ +3  +10   +3 ]

and the image is convolved with Grad_x to obtain its vertical edges:

f3(x,y) = f(x,y) * Grad_x

where f(x,y) is the original image and f3(x,y) is the image obtained after vertical-edge extraction.

7. The fast license plate localization method based on sharpness and brightness evaluation according to claim 1, characterized in that in step (7) the local adaptive thresholding is performed as follows:

(7-1) divide the image into m×n equally sized small regions;

(7-2) apply OTSU thresholding within each small region and record the two-dimensional matrix formed by all the thresholds as Mat_T0;

(7-3) apply a 3×3 Gaussian filter to Mat_T0 to obtain Mat_T1, and then binarize each region of the image according to Mat_T1.

8. The fast license plate localization method based on sharpness and brightness evaluation according to claim 1, characterized in that step (8) is specifically:

(8-1) filter noise from the image with a morphological opening, i.e. construct an m1×n1 template and apply erosion followed by dilation;

(8-2) merge the vertical edges in the image into connected regions with a morphological closing, i.e. construct an m2×n2 template and apply dilation followed by erosion;

where erosion erodes the boundary of an object: for every pixel (x,y) in the image, the template is centred on (x,y), all the other pixels covered by the template are traversed, and the value of pixel (x,y) is set to the minimum value among them; dilation is the opposite of erosion and expands the contour of an object: for every pixel (x,y) in the image, the template is centred on (x,y), all the other pixels covered by the template are traversed, and the values of all pixels within the template are set to the maximum value among them.

9. The fast license plate localization method based on sharpness and brightness evaluation according to claim 1, characterized in that step (9) is specifically:

(9-1) label the connected regions and compute the minimum bounding rectangle of each;

(9-2) according to the aspect ratio ASPECT and area AREA of Chinese license plates, and given a deviation rate e1 for the aspect ratio and e2 for the area, test the aspect ratio aspect and area area of the minimum bounding rectangle of each connected region:

|aspect − ASPECT| ≤ e1·ASPECT ①

|area − AREA| ≤ e2·AREA ②

connected regions satisfying both ① and ② proceed to step (9-3); otherwise they are non-plate regions;

(9-3) given a tilt-angle threshold T_angle, test the tilt angle angle of each connected region:

|angle − 0| ≤ T_angle ③

a connected region satisfying ③ is considered a license plate region; otherwise it is a non-plate region.
CN201510129964.0A 2015-03-23 2015-03-23 A kind of Location Method of Vehicle License Plate based on definition and luminance evaluation Active CN104732227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510129964.0A CN104732227B (en) 2015-03-23 2015-03-23 A kind of Location Method of Vehicle License Plate based on definition and luminance evaluation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510129964.0A CN104732227B (en) 2015-03-23 2015-03-23 A kind of Location Method of Vehicle License Plate based on definition and luminance evaluation

Publications (2)

Publication Number Publication Date
CN104732227A CN104732227A (en) 2015-06-24
CN104732227B true CN104732227B (en) 2017-12-26

Family

ID=53456102

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510129964.0A Active CN104732227B (en) 2015-03-23 2015-03-23 A kind of Location Method of Vehicle License Plate based on definition and luminance evaluation

Country Status (1)

Country Link
CN (1) CN104732227B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11587327B2 (en) 2015-10-01 2023-02-21 Intellivision Technologies Corp Methods and systems for accurately recognizing vehicle license plates
US10706330B2 (en) * 2015-10-01 2020-07-07 Intellivision Technologies Corp Methods and systems for accurately recognizing vehicle license plates
CN105898174A (en) * 2015-12-04 2016-08-24 乐视网信息技术(北京)股份有限公司 Video resolution improving method and device
JP2018091807A (en) * 2016-12-07 2018-06-14 オルボテック リミテッド Defect pass / fail judgment method and apparatus
CN107240092B (en) * 2017-05-05 2020-02-14 浙江大华技术股份有限公司 Image ambiguity detection method and device
CN109584198B (en) * 2017-09-26 2022-12-23 浙江宇视科技有限公司 Method and device for evaluating quality of face image and computer readable storage medium
CN108460766B (en) * 2018-04-12 2022-02-25 四川和生视界医药技术开发有限公司 Retina image definition evaluation method and evaluation device
CN108550158B (en) * 2018-04-16 2021-12-17 Tcl华星光电技术有限公司 Image edge processing method, electronic device and computer readable storage medium
CN108710852B (en) * 2018-05-21 2021-08-03 山东大学 A particle size distribution image recognition method and system for limited shooting depth
CN109915924B (en) * 2018-05-31 2020-11-24 徐州云创物业服务有限公司 Safety protection type electric heater
CN109509229B (en) * 2018-11-12 2020-12-15 凌云光技术集团有限责任公司 Template reconstruction device and method based on two-dimensional linear transformation
CN109394268B (en) * 2018-12-07 2021-05-11 刘志红 Polyp harm degree mapping platform
US11120536B2 (en) * 2018-12-12 2021-09-14 Samsung Electronics Co., Ltd Apparatus and method for determining image sharpness
CN110111261B (en) * 2019-03-28 2021-05-28 瑞芯微电子股份有限公司 Adaptive balance processing method for image, electronic device and computer readable storage medium
CN110400261A (en) * 2019-04-04 2019-11-01 桑尼环保(江苏)有限公司 Adaptive environment clears up platform
CN111027363B8 (en) * 2019-04-28 2021-03-19 丽水新贝蕾科技有限公司 Auxiliary signal parameter analysis system
CN110246227B (en) * 2019-05-21 2023-12-29 佛山科学技术学院 Virtual-real fusion simulation experiment image data collection method and system
CN110544229B (en) * 2019-07-11 2022-11-18 华南理工大学 A method for image focus evaluation and focus adjustment when the camera is in a state of non-uniform speed
CN111353994B (en) * 2020-03-30 2023-06-30 南京工程学院 Image non-reference brightness quality detection method for target detection
CN111611863B (en) * 2020-04-22 2022-05-20 浙江大华技术股份有限公司 License plate image quality evaluation method and device and computer equipment
CN112258503B (en) * 2020-11-13 2023-11-14 中国科学院深圳先进技术研究院 Ultrasonic image imaging quality evaluation method, device and computer readable storage medium
CN112863010B (en) * 2020-12-29 2022-08-05 宁波友好智能安防科技有限公司 Video image processing system of anti-theft lock
CN114764775A (en) * 2021-01-12 2022-07-19 深圳市普渡科技有限公司 Infrared image quality evaluation method, device and storage medium
CN113486895B (en) * 2021-07-16 2022-04-19 浙江华是科技股份有限公司 Ship board identification method and device and computer readable storage medium
CN117173416B (en) * 2023-11-01 2024-01-05 山西阳光三极科技股份有限公司 Railway freight train number image definition processing method based on image processing

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093229A (en) * 2013-01-21 2013-05-08 信帧电子技术(北京)有限公司 Positioning method and device of vehicle logo

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8331621B1 (en) * 2001-10-17 2012-12-11 United Toll Systems, Inc. Vehicle image capture system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093229A (en) * 2013-01-21 2013-05-08 信帧电子技术(北京)有限公司 Positioning method and device of vehicle logo

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research and Implementation of a General License Plate Location System under Complex Illumination Conditions; Liu Pan; China Master's Theses Full-text Database, Information Science and Technology; 2013-12-15 (No. S2); pp. 6, 11-12, 14-15, 17-19, 31-32, 54, 59, 64-66 *

Also Published As

Publication number Publication date
CN104732227A (en) 2015-06-24

Similar Documents

Publication Publication Date Title
CN104732227B (en) A kind of Location Method of Vehicle License Plate based on definition and luminance evaluation
US11380104B2 (en) Method and device for detecting illegal parking, and electronic device
CN103198315B (en) Based on the Character Segmentation of License Plate of character outline and template matches
CN103324930B (en) A license plate character segmentation method based on gray histogram binarization
TWI409718B (en) Method of locating license plate of moving vehicle
CN103136528B (en) A kind of licence plate recognition method based on dual edge detection
CN107301405A (en) Method for traffic sign detection under natural scene
CN108765491A (en) A kind of SAR image Ship Target Detection method
CN103824081B (en) Method for detecting rapid robustness traffic signs on outdoor bad illumination condition
CN112036254A (en) Moving vehicle foreground detection method based on video image
CN110726725A (en) Transmission line hardware corrosion detection method and device
CN101398894A (en) Automobile license plate automatic recognition method and implementing device thereof
CN105354530A (en) Vehicle body color identification method and apparatus
CN104715252A (en) License plate character segmentation method with combination of dynamic template and pixel points
CN103324935B (en) Vehicle is carried out the method and system of location and region segmentation by a kind of image
CN107832762A (en) A kind of License Plate based on multi-feature fusion and recognition methods
CN106096610A (en) A kind of file and picture binary coding method based on support vector machine
CN105608455A (en) License plate tilt correction method and apparatus
CN104809461A (en) License plate recognition method and system combining sequence image super-resolution reconstruction
CN106097368A (en) A kind of recognition methods in veneer crack
CN104766344A (en) Vehicle detecting method based on moving edge extractor
CN109993134A (en) A vehicle detection method at road intersection based on HOG and SVM classifier
CN102521587B (en) License plate location method
CN103295238B (en) Video real-time location method based on ROI motion detection on Android platform
CN103927523B (en) A Foggy Level Detection Method Based on Longitudinal Gray Scale Feature

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant