CN114463570A - Vehicle detection method based on clustering algorithm - Google Patents
- Publication number
- CN114463570A (application CN202111542446.3A)
- Authority
- CN
- China
- Prior art keywords
- pixel
- color
- image
- distance
- color space
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
Abstract
Description
Technical Field
The invention belongs to the technical field of target recognition, and in particular relates to a vehicle detection method based on a clustering algorithm.
Background
With the improvement of living standards, the number of vehicles keeps growing, and vehicles differ widely in type, model, and frame structure. Target detection and recognition technologies built on virtual reality and image processing, such as face recognition, pedestrian feature detection, and vehicle detection, are therefore increasingly applied in the traffic field as intelligent transportation and sensing technology develop, for example in vehicle collision prediction and warning or lane-departure detection. In intelligent transportation, video capture combined with visual analysis and tracking can detect pedestrian congestion and vehicle flow, enabling convenient and efficient traffic management and reducing traffic accidents.
In vehicle detection technology, moving targets are detected from background images, texture, and color to improve detection accuracy, and algorithms such as high-speed hybrid modeling, classifiers, and decision trees are used for multi-target detection, tracking, and identification. To separate the background from video and images, commonly used techniques are based on background subtraction, prior knowledge, optical flow, and machine learning; among these, machine learning has become the current research frontier, and target detection methods and techniques keep multiplying.
A clustering algorithm discriminates the dissimilarity and similarity of the data in the generated clusters, realizing a similarity measure over the objects to the greatest possible extent. Existing clustering algorithms suffer from low detection accuracy, a large amount of computation, and low efficiency.
Summary of the Invention
In view of this, the present invention proposes a vehicle detection method based on a clustering algorithm. In the color quantization algorithm, continuous iteration makes the quantized values more accurate, which improves the effect of subsequent detection. The vehicle detection step does not require a predefined number of clusters; it needs only one low-sensitivity parameter, achieving single-variable control and reducing the complexity of the algorithm. In addition, the computational load of the whole algorithm is reduced in several ways, improving its overall efficiency.
Specifically, the vehicle detection method based on a clustering algorithm disclosed by the present invention includes the following steps:
preprocessing the image, including filtering and denoising the image and resizing the image;
performing color quantization on the image to reduce the number of distinct pixel colors;
converting the image from the RGB color space to the LAB color space, extracting the color feature vector of each image pixel in the LAB color space, and assembling the color feature vectors into a feature value matrix;
calculating the local density of each pixel and its distance to higher-density points, selecting as cluster centers the pixels whose local density exceeds a threshold and whose distance to higher-density points exceeds a threshold, and assigning the remaining pixels to those cluster centers to cluster the pixels;
generating a vehicle segmentation image from the clustering result.
Further, the filtering and denoising method is median filtering, which removes impulse noise and salt-and-pepper noise while retaining the edge details of the image; the image resizing reduces the number of similarity-measure computations and improves the running speed.
Further, the color quantization comprises the following steps:
S1: randomly select K RGB components from the image, Mk = [R′k, G′k, B′k], where k is the index over the K components and R′k, G′k, B′k are the R, G, B components of the k-th selected point;
S2: calculate the color distance dkj between each pixel and each of the K selected RGB components, where k indexes the K RGB components, j indexes the pixels, and dkj is the color distance between the j-th pixel and the k-th RGB component;
S3: classify the pixels by color distance, as follows: compare each pixel's K computed color distances to the K RGB components, and assign the pixel to the category whose RGB component has the smallest color distance;
S4: according to the classification result, compute the average color of all pixels in each category, and replace the value of Mk with this average;
S5: compute the color distance between the pixels of each category and the new K category pixel values, and check whether the classification of any pixel has changed; if no pixel in any category changes class, proceed to step S6, otherwise return to step S2;
S6: using the K category pixel values and the per-category pixel memberships obtained in steps S1 to S5, replace the RGB components of each pixel with the RGB components of its category pixel value, completing the color quantization.
Further, the conversion of the image from the RGB color space to the LAB color space proceeds as follows:
apply a first Gamma correction to the RGB components to obtain the Gamma-corrected color components Rg, Gg, Bg;
convert the Gamma-corrected color components to the XYZ color space to obtain the color components X, Y, Z;
apply a second Gamma correction to X, Y, Z to obtain the corrected XYZ components Xl, Yl, Zl;
convert Xl, Yl, Zl to the color components l, a, b of the LAB color space;
extract the color information of each image pixel in the LAB color space and generate the pixel's color feature vector.
Further, the first Gamma correction applies a nonlinear correction function f to each original color component x, where x is one of R, G, B, giving Rg = f(R), Gg = f(G), Bg = f(B);
the second Gamma correction applies a nonlinear correction function g to each component y, where y is one of the XYZ color space components X, Y, Z, giving Xl = g(X), Yl = g(Y), Zl = g(Z);
the conversion of the Gamma-corrected color components to the XYZ color space is:
[X, Y, Z] = M * [Rg, Gg, Bg]
where X, Y, Z are the XYZ color space components and M is the RGB-to-XYZ conversion matrix;
the conversion of Xl, Yl, Zl to the LAB color components l, a, b is:
l = 116·Yl − 16
a = 500·(Xl − Yl)
b = 200·(Yl − Zl)
Further, the vehicle detection is as follows:
calculate the feature value dij for each pair of color feature vectors and assemble all feature values into the feature value matrix D, where i and j index pixels, dij is the feature value between pixel i and pixel j, and l, a, b are the pixel's three parameters in the LAB color space;
sort the upper triangle of the feature value matrix D in ascending order, and compute the cutoff distance dc as its median;
compute the local density ρi of each pixel as:
ρi = Σj χ(dij − dc)
where i is the index of the current pixel, j ranges over all other pixels, dij is the feature value between pixel i and pixel j, and dc is the cutoff distance;
χ(x) equals 1 when x < 0 and 0 otherwise;
compute the distance to higher-density points as:
δi = min(dij)
where dij is the feature value between pixel i and pixel j, i is the index of the current pixel, and j ranges over the other pixels with ρi < ρj;
screen the local densities and the distances to higher-density points, and select the pixels with both a large local density and a large distance to higher-density points as cluster centers;
taking each selected cluster-center pixel as a center, assign to it, per the local density formula, every pixel whose distance to that center is smaller than the cutoff distance dc, completing the pixel clustering;
according to the aspect ratio k of the actual vehicle in the image, screen the clusters by their aspect ratio after pixel clustering, and remove the clusters that deviate substantially from k.
Further, generating the vehicle segmentation image according to the clustering result comprises: for each cluster, taking the coordinates of its outermost pixels as a rectangle according to the vehicle detection result, thereby generating the vehicle segmentation image.
The beneficial effects of the present invention are as follows:
By reducing the image size and quantizing colors, the invention greatly reduces the amount of computation and improves efficiency while preserving detection accuracy.
In the proposed color quantization algorithm, continuous iteration makes the quantized color values more accurate, improving the effect of subsequent detection.
The proposed vehicle detection algorithm does not require a predefined number of clusters and needs only one low-sensitivity parameter; compared with other density-based methods, it is computationally more efficient.
Brief Description of the Drawings
Fig. 1 is a flowchart of the vehicle detection method of the present invention;
Fig. 2 is a flowchart of the color quantization of the present invention;
Fig. 3 is a flowchart of the cluster screening of the present invention.
Detailed Description
The present invention is further described below with reference to the accompanying drawings, without limiting the invention in any way; any transformation or substitution based on the teachings of the present invention falls within its protection scope.
The technical solution adopted by the present invention comprises the following steps.
The invention uses a clustering algorithm to segment the vehicles in an image and obtain a vehicle target detection image. Fig. 1 is the flowchart of the clustering-based vehicle detection; each step is described below.
1. Image preprocessing: the preprocessing in the present invention consists of filtering and denoising the image and resizing it.
The image is filtered with a median filter, whose advantage is that it removes impulse noise and salt-and-pepper noise while retaining the edge details of the image. The image is resized to reduce the number of similarity-measure computations and improve the running speed; the specific scale is chosen according to the actual hardware, generally reducing the original image by half.
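The preprocessing step above can be sketched as follows. This is an illustrative pure-Python sketch, not the patent's implementation; production code would normally call a library such as OpenCV, and the function names here are invented for illustration.

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2D list of pixel values.

    Removes isolated impulse / salt-and-pepper pixels while
    preserving edges better than a mean filter would."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]          # median of the 9 values
    return out

def downsample_half(img):
    """Keep every second pixel in each direction (half-size resize)."""
    return [row[::2] for row in img[::2]]

# A flat image corrupted by one salt (255) and one pepper (0) pixel.
noisy = [[10] * 5 for _ in range(5)]
noisy[2][2] = 255                          # salt
noisy[1][3] = 0                            # pepper
clean = median_filter_3x3(noisy)
small = downsample_half(clean)
```

After filtering, both corrupted pixels revert to the surrounding value, and the half-size resize cuts the pixel count by roughly four, which is where the similarity-measure savings come from.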
2. Color quantization: step 1 resizes the image to reduce the number of similarity-measure computations and speed up the run, but for an RGB image the computational load is still large, and lowering the resolution further would lose a great deal of information from the original image.
To solve this problem, the present invention proposes a color quantization method: by quantizing the image's colors, the number of distinct pixel colors is reduced while the color information of the original image is preserved as much as possible, improving efficiency. Fig. 2 is the color quantization flowchart; the specific steps are as follows.
2.1 K-value selection: randomly select K RGB components from the image. The general rule is to pick points spread across the image so that they represent its main colors as well as possible.
Here K is the number of color quantization categories; it grows with the color complexity of the image scene, and usually K ≥ 3. The components are written Mk = [R′k, G′k, B′k], where k indexes the K components and R′k, G′k, B′k are the R, G, B components of the k-th selected point.
2.2 Color distance calculation: compute the distance between each pixel and each of the K RGB components selected in step 2.1 according to formula (1):
dkj = √((Rj − R′k)² + (Gj − G′k)² + (Bj − B′k)²)  (1)
where k indexes the K RGB components, j indexes the pixels, and dkj is the distance between the j-th pixel and the k-th RGB component.
2.3 Pixel classification: classify the pixels using the color distances computed in step 2.2, as follows:
compare each pixel's K computed color distances to the K RGB components, and assign the pixel to the category whose RGB component has the smallest color distance.
2.4 Category pixel value calculation: recompute the color values of the K RGB components selected in step 2.1 to obtain new category pixel values.
Specifically, using the classification result of step 2.3, compute the average color of all pixels in each category according to formula (2) and replace Mk of step 2.1 with this average:
R′k = (1/n) Σi Ri,  G′k = (1/n) Σi Gi,  B′k = (1/n) Σi Bi  (2)
where k indexes the K RGB components, R′k, G′k, B′k are the components of the k-th category value, n is the number of pixels in that category, i indexes the pixels in the category, and Ri, Gi, Bi are the RGB components of its i-th pixel.
2.5 Pixel category check: this is the iterative step of the color quantization; its purpose is to make the pixel classification more accurate and the category pixel values more reasonable.
Specifically, compute the color distance between the pixels of each category and the new K category pixel values calculated in step 2.4, and check whether the classification of any pixel has changed; if no pixel in any category changes class, proceed to the next step, otherwise return to step 2.2.
2.6 Color replacement: this step performs the quantization itself: using the K category pixel values and per-category pixel memberships obtained in steps 2.1 to 2.5, replace the RGB components of each pixel with the RGB components of its category pixel value, completing the color quantization.
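Steps 2.1 through 2.6 amount to a K-means-style iteration on RGB values. The following is a minimal pure-Python sketch under that reading; the random initialization, data layout, and function names are assumptions for illustration, not the patent's exact procedure.

```python
import math
import random

def color_distance(p, q):
    """Euclidean distance between two RGB triples, as in formula (1)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def quantize_colors(pixels, k, seed=0):
    """Iteratively refine k representative colors (steps 2.1 to 2.6)."""
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)                 # step 2.1: random pick
    labels = [0] * len(pixels)
    while True:
        # steps 2.2-2.3: assign each pixel to its nearest center
        new_labels = [min(range(k),
                          key=lambda c: color_distance(p, centers[c]))
                      for p in pixels]
        # step 2.4: replace each center with its class mean, formula (2)
        for c in range(k):
            members = [p for p, l in zip(pixels, new_labels) if l == c]
            if members:
                centers[c] = tuple(sum(ch) / len(members)
                                   for ch in zip(*members))
        if new_labels == labels:                    # step 2.5: no change
            break
        labels = new_labels
    # step 2.6: replace every pixel by its category color
    return [centers[l] for l in labels], centers

pixels = [(255.0, 0.0, 0.0)] * 5 + [(0.0, 0.0, 255.0)] * 5
quantized, centers = quantize_colors(pixels, 2)
```

On this toy input the ten pixels collapse to at most two representative colors, with identical input pixels always mapped to the same category value.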
3. Color space conversion: the present invention converts the RGB color space to the LAB color space as follows.
3.1 Gamma correction: apply the Gamma correction of formula (3) to R, G, B to obtain the corrected Rg = f(R), Gg = f(G), Bg = f(B), where R, G, B are the original color components and Rg, Gg, Bg are the Gamma-corrected color components.
3.2 XYZ color space conversion: convert to the XYZ color space according to formula (4):
[X, Y, Z] = M * [Rg, Gg, Bg]  (4)
where Rg, Gg, Bg are the Gamma-corrected color components, X, Y, Z are the XYZ color space components, and M is the RGB-to-XYZ conversion matrix.
3.3 XYZ linear normalization: apply the Gamma correction of formula (5) to X, Y, Z to obtain the corrected Xl = g(X), Yl = g(Y), Zl = g(Z), where X, Y, Z are the XYZ color space components and Xl, Yl, Zl are their linearly normalized values.
3.4 LAB color space conversion: convert to the LAB color space according to formula (6), where l, a, b are the LAB color components and Xl, Yl, Zl are the normalized XYZ values:
l = 116·Yl − 16,  a = 500·(Xl − Yl),  b = 200·(Yl − Zl)  (6)
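A sketch of steps 3.1 to 3.4 follows. The patent text does not reproduce the matrix M or the exact Gamma curves, so the standard sRGB/D65 constants are assumed throughout; the patent's actual values may differ.

```python
def _gamma_srgb(c):
    """Step 3.1, formula (3): standard sRGB inverse Gamma (assumed)."""
    c /= 255.0
    return ((c + 0.055) / 1.055) ** 2.4 if c > 0.04045 else c / 12.92

def _f_lab(t):
    """Step 3.3, formula (5): standard CIELAB correction curve (assumed)."""
    return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

def rgb_to_lab(r, g, b):
    rg, gg, bg = _gamma_srgb(r), _gamma_srgb(g), _gamma_srgb(b)
    # Step 3.2, formula (4): [X, Y, Z] = M * [Rg, Gg, Bg],
    # with M taken as the standard sRGB -> XYZ matrix (assumed).
    x = 0.4124 * rg + 0.3576 * gg + 0.1805 * bg
    y = 0.2126 * rg + 0.7152 * gg + 0.0722 * bg
    z = 0.0193 * rg + 0.1192 * gg + 0.9505 * bg
    # Normalize by the D65 white point, then apply the second correction.
    xl = _f_lab(x / 0.95047)
    yl = _f_lab(y / 1.0)
    zl = _f_lab(z / 1.08883)
    # Step 3.4, formula (6).
    l = 116.0 * yl - 16.0
    a = 500.0 * (xl - yl)
    bb = 200.0 * (yl - zl)
    return l, a, bb

white = rgb_to_lab(255, 255, 255)
black = rgb_to_lab(0, 0, 0)
```

With these constants, pure white maps to roughly L = 100 with a and b near zero, and pure black maps to the origin, which is a quick sanity check on the pipeline.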
4. Feature vector extraction: extract the color information of each image pixel in the LAB color space and generate the pixel's color feature vector.
The feature vector of each pixel is Li = [l, a, b], where l, a, b are the pixel's three parameters in the LAB color space and i is the index of the image pixel.
5. Vehicle detection:
5.1 Feature value calculation: compute the feature value of each pair of feature vectors obtained in step 4 according to formula (7), and place all feature values in the feature value matrix D:
dij = √((li − lj)² + (ai − aj)² + (bi − bj)²)  (7)
where i and j index pixels, dij is the feature value between pixel i and pixel j, and l, a, b are the pixel's three parameters in the LAB color space.
The feature value matrix D = [dij] collects all of these pairwise feature values.
5.2 Cutoff distance: sort the upper triangle of the feature value matrix D computed in step 5.1 in ascending order, and compute the cutoff distance dc as its median.
5.3 Local density: the local density ρi represents the number of similar pixels around pixel i; the larger it is, the more similar pixels surround pixel i, i.e., the more pixels lie within the cutoff distance dc of pixel i.
Specifically, compute the local density of each pixel according to formula (8):
ρi = Σj χ(dij − dc)  (8)
where i is the index of the current pixel, j ranges over all other pixels, dij is the feature value between pixel i and pixel j, and dc is the cutoff distance; χ(x) equals 1 when x < 0 and 0 otherwise.
5.4 Distance to higher-density points: the distance δi is the minimum distance from pixel i to any pixel with a larger local density than pixel i.
Specifically, compute δi according to formula (9):
δi = min(dij)  (9)
where dij is the feature value between pixel i and pixel j, i is the index of the current pixel, and j ranges over the other pixels with ρi < ρj.
5.5 Cluster center determination: determine the cluster centers among the image pixels, i.e., preliminarily locate the centers of the vehicles in the image. Specifically, screen the local densities from step 5.3 and the distances from step 5.4, and select the pixels with both a large local density and a large distance to higher-density points as cluster centers.
5.6 Pixel clustering: cluster the pixels around the cluster centers determined in step 5.5. Specifically, taking each cluster-center pixel as a center, assign to it, per formula (8), every pixel whose distance to that center is smaller than the cutoff distance dc, completing the pixel clustering. The clustering result of this step is the preliminary vehicle detection result.
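Steps 5.1 through 5.6 can be sketched as a small density-peaks routine over LAB triples. The thresholds used to pick cluster centers below are a simplification introduced for illustration; the patent only requires keeping points with both large ρ and large δ.

```python
import math
from statistics import median

def density_peak_clusters(points, rho_min=1, delta_min=None):
    """Sketch of steps 5.1 to 5.6 on a list of LAB triples."""
    n = len(points)
    # Step 5.1: pairwise feature values, formula (7).
    d = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    # Step 5.2: cutoff distance = median of the ascending upper triangle.
    upper = sorted(d[i][j] for i in range(n) for j in range(i + 1, n))
    dc = median(upper)
    # Step 5.3: local density rho_i = #{j != i : d_ij < dc}, formula (8).
    rho = [sum(1 for j in range(n) if j != i and d[i][j] < dc)
           for i in range(n)]
    # Step 5.4: delta_i = min distance to a higher-density point, formula (9)
    # (max distance when no denser point exists, an assumed convention).
    delta = []
    for i in range(n):
        higher = [d[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(d[i]))
    if delta_min is None:
        delta_min = dc
    # Step 5.5: centers have both large rho and large delta.
    centers = [i for i in range(n)
               if rho[i] >= rho_min and delta[i] > delta_min]
    # Step 5.6: assign every point within dc of a center to that center.
    labels = [-1] * n
    for c in centers:
        for i in range(n):
            if d[c][i] < dc:
                labels[i] = c
    return centers, labels

pts = [(0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (10.0, 0.0, 0.0)]
centers, labels = density_peak_clusters(pts)
```

On this tiny input the two nearby points form one cluster around a single center and the outlier stays unassigned (label −1), mirroring how isolated background pixels fall outside every vehicle cluster.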
5.7聚类筛选:该步骤是对步骤5.5与步骤5.6中初步检测的车辆中心以及车辆的检测结果进行筛选。5.7 Cluster screening: This step is to screen the vehicle centers and vehicle detection results initially detected in steps 5.5 and 5.6.
The specific screening method is: based on the aspect ratio k of actual vehicles in the image, the aspect ratio of each cluster obtained by the pixel clustering of step 5.6 is checked, and clusters whose aspect ratio deviates significantly from k are removed.
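The screening of step 5.7 can be sketched as a bounding-box aspect-ratio check. The relative-error tolerance `tol` is an assumed parameter; the patent only says clusters with a "large error" relative to k are removed.

```python
def aspect_ratio_ok(cluster_pixels, k, tol=0.3):
    """Step 5.7: keep a cluster only if its bounding-box aspect ratio
    (width / height) is close to the expected vehicle ratio k.
    `cluster_pixels` is an iterable of (row, col); `tol` is an assumed
    relative-error tolerance."""
    rows = [p[0] for p in cluster_pixels]
    cols = [p[1] for p in cluster_pixels]
    height = max(rows) - min(rows) + 1
    width = max(cols) - min(cols) + 1
    ratio = width / height
    return abs(ratio - k) / k <= tol
```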
6. Generate the vehicle segmentation image: this step segments the vehicle from the image, or frames it within the image. The specific method is: according to the vehicle detection result of step 5, the coordinates of the outermost pixels of each cluster are taken to form a rectangle, completing the segmentation of the vehicle and generating the vehicle segmentation image.
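The rectangle construction of step 6 amounts to taking the extremes of each cluster's pixel coordinates. A minimal sketch (the (row_min, col_min, row_max, col_max) return convention is an assumption for illustration):

```python
def bounding_rect(cluster_pixels):
    """Step 6: the output rectangle spans the outermost pixels of a
    cluster.  Returns (row_min, col_min, row_max, col_max)."""
    rows = [p[0] for p in cluster_pixels]
    cols = [p[1] for p in cluster_pixels]
    return min(rows), min(cols), max(rows), max(cols)
```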
The beneficial effects of the present invention are as follows:
By reducing the image size and quantizing colors, the present invention greatly reduces the amount of computation and improves efficiency while maintaining detection accuracy.
In the color quantization algorithm proposed by the present invention, the quantized colors become more accurate through continued iteration of the algorithm, thereby improving the effect of the subsequent detection.
The vehicle detection algorithm proposed by the present invention does not require a predefined number of clusters; it needs only a single low-sensitivity parameter. Compared with other density-based methods, it is computationally more efficient.
As used herein, the word "preferred" means serving as an example, instance, or illustration. Any aspect or design described herein as "preferred" is not necessarily to be construed as advantageous over other aspects or designs. Rather, use of the word "preferred" is intended to present concepts in a concrete manner. The term "or" as used in this application is intended to mean an inclusive "or" rather than an exclusive "or": that is, unless specified otherwise or clear from context, "X employs A or B" means any of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied in any of the foregoing instances.
Furthermore, although the present disclosure has been shown and described with respect to one or more implementations, equivalent alterations and modifications will occur to those skilled in the art based on a reading and understanding of this specification and the drawings. The present disclosure includes all such modifications and alterations, and is limited only by the scope of the appended claims. In particular, with regard to the various functions performed by the above-described components (e.g., elements), the terms used to describe such components are intended to correspond, unless otherwise indicated, to any component that performs the specified function of the described component (i.e., that is functionally equivalent), even though not structurally equivalent to the disclosed structure that performs the function in the exemplary implementations of the disclosure illustrated herein. In addition, while a particular feature of the disclosure may have been disclosed with respect to only one of several implementations, such a feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms "includes," "having," "contains," or variants thereof are used in the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising."
The functional units in the embodiments of the present invention may be integrated into one processing module, or each unit may exist physically on its own, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disk, or the like. Each of the above-described apparatuses or systems may execute the storage method in the corresponding method embodiment.
In summary, the above embodiment is one implementation of the present invention, but the implementations of the present invention are not limited to it; any other change, modification, substitution, combination, or simplification that does not depart from the spirit and principles of the present invention shall be an equivalent replacement and is included within the protection scope of the present invention.
Claims (7)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111542446.3A CN114463570A (en) | 2021-12-14 | 2021-12-14 | Vehicle detection method based on clustering algorithm |
PCT/CN2022/081197 WO2023108933A1 (en) | 2021-12-14 | 2022-03-16 | Vehicle detection method based on clustering algorithm |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114463570A true CN114463570A (en) | 2022-05-10 |
Family
ID=81406676
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111542446.3A Pending CN114463570A (en) | 2021-12-14 | 2021-12-14 | Vehicle detection method based on clustering algorithm |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114463570A (en) |
WO (1) | WO2023108933A1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116597188B (en) * | 2023-07-17 | 2023-09-05 | 山东北国发展集团有限公司 | Vision-based solid waste resource utilization method and system |
CN116858991B (en) * | 2023-09-04 | 2023-12-01 | 济宁华晟服装股份有限公司 | Cotton desizing treatment monitoring method |
CN117173175B (en) * | 2023-11-02 | 2024-02-09 | 湖南格尔智慧科技有限公司 | Image similarity detection method based on super pixels |
CN117689768B (en) * | 2023-11-22 | 2024-05-07 | 武汉纺织大学 | Natural scene driven garment template coloring method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020114512A1 (en) * | 2001-02-20 | 2002-08-22 | Ravishankar Rao | Color clustering and segmentation using sigma filtering |
CN106778829A (en) * | 2016-11-28 | 2017-05-31 | 常熟理工学院 | A kind of image detecting method of the hepar damnification classification of Active Learning |
CN108629783A (en) * | 2018-05-02 | 2018-10-09 | 山东师范大学 | Image partition method, system and medium based on the search of characteristics of image density peaks |
CN108764145A (en) * | 2018-04-25 | 2018-11-06 | 哈尔滨工程大学 | One kind is towards Dragon Wet Soil remote sensing images density peaks clustering method |
CN110866896A (en) * | 2019-10-29 | 2020-03-06 | 中国地质大学(武汉) | Image saliency object detection method based on k-means and level set superpixel segmentation |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5173898B2 (en) * | 2009-03-11 | 2013-04-03 | キヤノン株式会社 | Image processing method, image processing apparatus, and program |
CN104899899A (en) * | 2015-06-12 | 2015-09-09 | 天津大学 | Color quantification method based on density peak value |
CN107729812B (en) * | 2017-09-18 | 2021-06-25 | 南京邮电大学 | A method suitable for vehicle color recognition in surveillance scenes |
CN107766878B (en) * | 2017-09-28 | 2020-12-04 | 北京华航无线电测量研究所 | Hazardous article detection method based on Lab color space K-means clustering |
CN109035254A (en) * | 2018-09-11 | 2018-12-18 | 中国水产科学研究院渔业机械仪器研究所 | Based on the movement fish body shadow removal and image partition method for improving K-means cluster |
2021
- 2021-12-14 CN CN202111542446.3A patent/CN114463570A/en active Pending

2022
- 2022-03-16 WO PCT/CN2022/081197 patent/WO2023108933A1/en active Application Filing
Non-Patent Citations (2)
Title |
---|
Li Jingbei, Liu Yu, Xiao Huaxin, Lai Shiming: "Intelligent Image Colorization with Deep Neural Networks" (深度神经网络智能图像着色技术), National University of Defense Technology Press, 31 December 2020, pages 28-29 *
Xiong Yun, Zhu Yangyong, Chen Zhiyuan (eds.): "Big Data Mining" (大数据挖掘), Shanghai Scientific &amp; Technical Publishers, 30 April 2016, pages 124-125 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117274405A (en) * | 2023-11-22 | 2023-12-22 | 深圳市蓝方光电有限公司 | LED lamp working color detection method based on machine vision |
CN117274405B (en) * | 2023-11-22 | 2024-02-02 | 深圳市蓝方光电有限公司 | LED lamp working color detection method based on machine vision |
Also Published As
Publication number | Publication date |
---|---|
WO2023108933A1 (en) | 2023-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114463570A (en) | Vehicle detection method based on clustering algorithm | |
CN110264468B (en) | Point cloud data labeling, segmentation model determination, target detection methods and related equipment | |
CN110363182B (en) | Lane detection method based on deep learning | |
CN109684922B (en) | A multi-model recognition method for finished dishes based on convolutional neural network | |
CN107563372B (en) | License plate positioning method based on deep learning SSD frame | |
CN111986125B (en) | Method for multi-target task instance segmentation | |
CN103020985B (en) | A kind of video image conspicuousness detection method based on field-quantity analysis | |
CN112836713A (en) | Identification and Tracking Method of Mesoscale Convective System Based on Image Anchorless Frame Detection | |
CN108009509A (en) | Vehicle target detection method | |
CN107886067B (en) | A pedestrian detection method based on multi-feature fusion based on HIKSVM classifier | |
CN111046787A (en) | A Pedestrian Detection Method Based on Improved YOLO v3 Model | |
CN110309781A (en) | Remote sensing recognition method for house damage based on multi-scale spectral texture adaptive fusion | |
CN109063619A (en) | A kind of traffic lights detection method and system based on adaptive background suppression filter and combinations of directions histogram of gradients | |
CN109255326A (en) | A kind of traffic scene smog intelligent detecting method based on multidimensional information Fusion Features | |
CN114332921A (en) | Pedestrian detection method based on improved clustering algorithm for Faster R-CNN network | |
JP7072765B2 (en) | Image processing device, image recognition device, image processing program, and image recognition program | |
CN111724566A (en) | Pedestrian fall detection method and device based on smart light pole video surveillance system | |
CN111881833B (en) | Vehicle detection method, device, equipment and storage medium | |
EP4323952A1 (en) | Semantically accurate super-resolution generative adversarial networks | |
JP5464739B2 (en) | Image area dividing apparatus, image area dividing method, and image area dividing program | |
CN111461002A (en) | A sample processing method for thermal imaging pedestrian detection | |
CN106548195A (en) | A kind of object detection method based on modified model HOG ULBP feature operators | |
CN110414386B (en) | Lane line detection method based on improved SCNN (traffic channel network) | |
Schulz et al. | Object-class segmentation using deep convolutional neural networks | |
CN103295186B (en) | Image descriptor generates method and system, image detecting method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||