
CN116485719A - Self-adaptive canny method for crack detection - Google Patents

Self-adaptive canny method for crack detection

Info

Publication number
CN116485719A
CN116485719A CN202310241045.7A CN202310241045A CN116485719A CN 116485719 A CN116485719 A CN 116485719A CN 202310241045 A CN202310241045 A CN 202310241045A CN 116485719 A CN116485719 A CN 116485719A
Authority
CN
China
Prior art keywords
image
pixel
area
value
threshold
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310241045.7A
Other languages
Chinese (zh)
Inventor
姚正安
连祥凯
潘嵘
李嘉
郭梓濠
董喆
刘念祖
兰丽菊
李锐晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202310241045.7A priority Critical patent/CN116485719A/en
Publication of CN116485719A publication Critical patent/CN116485719A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/187Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an adaptive Canny method for crack detection, which addresses the problems of excessive noise points, numerous dirty blocks, and unstable detection results encountered in tunnel-wall crack detection. The algorithm first improves the brightness of dark regions by applying illumination compensation to the grayscale image. To remove the influence of isolated bright-spot noise, the image is then median filtered, which suppresses impulse noise while preserving the edge features of cracks. A threshold is found by maximizing the between-class variance of the gradient magnitudes, and edge detection is performed with the Canny operator using thresholds derived from it. Finally, noise and dirty blocks in the image are automatically detected and removed by combining a morphological closing operation with connectivity analysis, yielding a cleaner crack detection result. The invention can be widely applied to the detection of tunnel cracks and other defects.

Description

An Adaptive Canny Method for Crack Detection

Technical Field

The present invention relates to the field of crack image recognition, and more particularly to an adaptive Canny method for crack detection.

Background Art

Tunnel cracks are one of the important factors affecting the safety of the tunnel interior. Crack formation is influenced by the deformation and spalling of adjacent rock. Construction methods and techniques can also cause cracks in the tunnel, for example the excavation method, the mix proportions of the concrete raw materials, and the vibration and compaction method. After construction is completed, uneven shrinkage of the concrete, eccentric loading, frost heave, and other factors are also important causes of cracks on the tunnel surface. Once the width of a crack exceeds a certain limit, it has a large impact on the tunnel and seriously affects the safety of subway operation. It is therefore very important to regularly check whether cracks exist in the tunnel and to measure their width. In addition, the appearance and growth of tunnel cracks is a gradual process: at first the cracks are few and scattered, but over time smaller cracks gradually develop, so while detecting cracks one should also pay close attention to changes in crack width and shape.

The identification and detection of tunnel cracks and other defects has been studied for quite some time at home and abroad. Contact-based inspection methods were initially the norm, but they are inconvenient and have large measurement errors. Research abroad started earlier and is mostly based on lidar, ultrasound, impact elastic waves, image processing, and similar techniques.

With the rapid development of image processing technology, its speed, accuracy, and non-destructive nature have made it popular among researchers; it is widely used in industrial production and in the detection of defects such as subway tunnel cracks. Thanks to continuous breakthroughs in computer vision, state-of-the-art traditional image processing combined with deep learning can be used to quickly acquire tunnel crack images and locate cracks on the tunnel surface, providing an accurate and fast way to identify cracks on subway tunnel surfaces. Image processing methods overcome the subjectivity and low efficiency of traditional manual inspection and greatly improve the flexibility of detection. However, two-dimensional crack image acquisition only captures two-dimensional crack information, so during recognition crack-like features such as brick joints may be indistinguishable from real cracks, and subtle changes in cracks at key locations may not be monitored.

In recent years many methods and techniques have been applied at home and abroad to detect tunnel crack defects with image processing. For example, Ahmed Mahgoub Ahmed Talab et al. proposed a concrete structure crack detection algorithm based on image edges. The method first converts the image to grayscale and then uses the Sobel operator to detect crack edges. A suitable threshold classifies the pixels into foreground and background, extracting the suspected crack region from the original image. The Sobel operator is then applied again to filter out residual noise, and the Otsu algorithm is used for optimal segmentation. The main idea is to repeatedly apply filters to screen the cracked regions. The method handles high-quality images fairly well and can filter out part of the noise, but its detection results are still unstable.

Summary of the Invention

The present invention provides an adaptive Canny method for crack detection, which solves the problems of excessive noise points, numerous dirty blocks, and unstable detection results encountered in the detection of cracks in tunnel walls.

To solve the above technical problems, the technical solution of the present invention is as follows:

An adaptive Canny method for crack detection comprises the following steps:

S1: acquire an image containing a crack;

S2: perform illumination compensation according to the size of the image to obtain an image with uniform brightness;

S3: apply median filtering to the image with uniform brightness to obtain a noise-reduced image;

S4: compute the gradient magnitude of the noise-reduced image, and automatically generate the optimal segmentation threshold th by maximizing the between-class variance of the gradient magnitudes;

S5: derive the lowest and highest thresholds of the Canny edge detection algorithm from the optimal segmentation threshold th, and perform edge detection with these thresholds to obtain a first image;

S6: compute the connected components of the first image with an eight-connected-domain search, remove the connected components whose area is smaller than a preset minimum, and obtain a second image;

S7: apply a closing operation to the second image to join the broken cracks and fill the holes in dirty blocks, obtaining a third image;

S8: compute the connected components of the third image with an eight-connected-domain search; if a component's length-by-width (bounding-box) area is greater than a first preset proportion of the original image area and its actual area is greater than a second preset proportion of the original image area, remove that component to obtain the final crack image.

Preferably, step S2 specifically comprises:

S2.1: convert the image to a grayscale image. Let f(x, y) denote the gray value at pixel (x, y) after conversion, and let R, G, B denote the red, green, and blue components of the original true-color image; then:

f(x,y) = 0.299R + 0.587G + 0.114B

S2.2: perform illumination compensation on the grayscale image, compensating the brightness of regions with different brightness levels:

compute the average gray level of the grayscale image I, and record its width w and height h;

take m = min(w, h)/20 and partition the grayscale image I into blocks of size m × m; compute the mean of each block to obtain the block brightness matrix D;

subtract the average gray level of I from each element of D to obtain the block brightness-difference matrix E;

interpolate E with bicubic interpolation to obtain a brightness distribution matrix R of the same size as the grayscale image I;

obtain the image with uniform brightness as result = I - R.

Preferably, step S3 specifically comprises:

S3.1: determine the convolution kernel size;

S3.2: align the center of the kernel with the first pixel at the top left of the image, sort the gray values of the pixels covered by the kernel, take the median of the sorted sequence, and assign this median to the pixel at the kernel center;

S3.3: repeat step S3.2 for every pixel of the image to obtain the noise-reduced image.

Preferably, step S4 automatically generates the optimal segmentation threshold th by maximizing the between-class variance of the gradient magnitudes, specifically:

S4.1: let the optimal segmentation threshold th divide the image gradient magnitudes into two classes C1 and C2, where the gradient magnitudes in C1 are smaller than th and those in C2 are greater than th; let the means of the two classes be m1 and m2, let the global mean of the gradient magnitudes be mG, and let the probabilities of a gradient magnitude being assigned to C1 and C2 be p1 and p2 respectively; then:

p1*m1 + p2*m2 = mG

p1 + p2 = 1

S4.2: from the definition of variance, the between-class variance is:

σ² = p1(m1 - mG)² + p2(m2 - mG)²

S4.3: normalize the gradient magnitude matrix to the range [0, 255], then iterate over the 256 possible values to find the value that maximizes the between-class variance σ²; this value is the desired threshold th.

Preferably, step S5 specifically comprises:

S5.1: compute the image gradient;

S5.2: apply non-maximum suppression to the gradient magnitudes;

S5.3: perform double-threshold screening: pixels whose gray-level change is greater than the highest threshold are set as strong edge pixels, and pixels below the lowest threshold are removed; pixels between the lowest and highest thresholds are set as weak edge pixels, and a weak edge pixel is kept if there is a strong edge pixel in its neighborhood and discarded otherwise.

Preferably, the lowest threshold is 0.66th and the highest threshold is 0.9th.

Preferably, step S6 specifically comprises:

S6.1: scan every pixel of the first image, group together pixels that have the same pixel value and are connected within an 8-neighborhood, and finally obtain the connected components of all pixels in the first image;

S6.2: sort all connected components by area from smallest to largest, remove those whose area is smaller than the preset minimum, and obtain the second image.

Preferably, the preset minimum in step S6.2 is the 90th percentile of the areas of all connected components.

Preferably, step S7 specifically comprises:

S7.1: dilate the second image with a convolution kernel B:

define a kernel B, which may have any shape and size and has a separately defined reference point (anchor point);

convolve kernel B with the second image and compute the maximum pixel value within the region covered by B;

assign this maximum value to the pixel specified by the reference point;

S7.2: erode with a convolution kernel C:

define a kernel C, which may have any shape and size and has a separately defined reference point (anchor point);

convolve kernel C with the dilated second image and compute the minimum pixel value within the region covered by C;

assign this minimum value to the pixel specified by the reference point to obtain the third image.

Preferably, step S8 specifically comprises:

S8.1: scan every pixel of the third image, group together pixels that have the same pixel value and are connected within an 8-neighborhood, finally obtain all connected components of the image, and compute the area, length, and width of each component;

S8.2: compute the length-by-width (bounding-box) area of each connected component;

S8.3: if a component's length-by-width area is greater than a first preset proportion of the original image area and its actual area is greater than a second preset proportion of the original image area, remove that component to obtain the final crack image.

Compared with the prior art, the beneficial effects of the technical solution of the present invention are:

The present invention uses the gradient magnitudes of the target image to realize an adaptive Canny edge detection operator and adaptively removes block noise based on connectivity properties. It can be widely applied to tunnel crack detection in various dark environments; compared with commonly used image-based crack detection methods, its results are more stable and it detects a larger portion of each crack.

Brief Description of the Drawings

Fig. 1 is a schematic flowchart of the method of the present invention.

Fig. 2 is an image of a crack in a tunnel arch captured by a real digital camera, provided by the embodiment.

Fig. 3 shows the annotation of Fig. 2 made with labelme.

Fig. 4 shows the result of detecting Fig. 2 with the OTSU-based algorithm.

Fig. 5 shows the result of applying illumination compensation to Fig. 2.

Fig. 6 shows the result of applying median filtering to Fig. 5.

Fig. 7 shows the result of edge detection on Fig. 6 with the adaptive Canny edge detection operator.

Fig. 8 shows the result of removing noise from Fig. 7.

Fig. 9 shows the result of joining the cracks in Fig. 8 and filling the dirty blocks.

Fig. 10 shows the result of removing the surface dirt from Fig. 9.

Fig. 11 is a schematic diagram of the crack annotation of Fig. 2.

Fig. 12 is a schematic diagram of the cracks detected by the OTSU-based algorithm.

Fig. 13 is a schematic diagram of the cracks detected by the method of the present invention.

Detailed Description of the Embodiments

The drawings are for illustration only and shall not be construed as limiting this patent;

To better illustrate this embodiment, some parts of the drawings are omitted, enlarged, or reduced and do not represent the dimensions of the actual product;

Those skilled in the art will understand that some well-known structures and their descriptions may be omitted from the drawings.

The technical solution of the present invention is further described below with reference to the drawings and embodiments.

Embodiment 1

An adaptive Canny method for crack detection comprises the following steps:

S1: acquire an image containing a crack;

S2: perform illumination compensation according to the size of the image to obtain an image with uniform brightness;

S3: apply median filtering to the image with uniform brightness to obtain a noise-reduced image;

S4: compute the gradient magnitude of the noise-reduced image, and automatically generate the optimal segmentation threshold th by maximizing the between-class variance of the gradient magnitudes;

S5: derive the lowest and highest thresholds of the Canny edge detection algorithm from the optimal segmentation threshold th, and perform edge detection with these thresholds to obtain a first image;

S6: compute the connected components of the first image with an eight-connected-domain search, remove the connected components whose area is smaller than a preset minimum, and obtain a second image;

S7: apply a closing operation to the second image to join the broken cracks and fill the holes in dirty blocks, obtaining a third image;

S8: compute the connected components of the third image with an eight-connected-domain search; if a component's length-by-width (bounding-box) area is greater than a first preset proportion of the original image area and its actual area is greater than a second preset proportion of the original image area, remove that component to obtain the final crack image.

Embodiment 2

On the basis of Embodiment 1, this embodiment further discloses the following:

Step S2 is intended to obtain an image whose brightness range is more concentrated; specifically:

S2.1: convert the image to a grayscale image. Converting the original image to grayscale both speeds up subsequent processing and unifies the handling of cracks of different colors. Some of the subsequent processing methods also require a grayscale image, so the crack image is first converted to grayscale. Let f(x, y) denote the gray value at pixel (x, y) after conversion, and let R, G, B denote the red, green, and blue components of the original true-color image; then:

f(x,y) = 0.299R + 0.587G + 0.114B

where 0.299, 0.587, and 0.114 are the grayscale weights shown by theory and experiment to be the most reasonable.
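As a minimal sketch (assuming NumPy and an H×W×3 array in R, G, B channel order; OpenCV's imread returns B, G, R, so the channels would need reordering), the conversion can be written as:

```python
import numpy as np

def to_gray(rgb):
    """Weighted grayscale conversion f = 0.299R + 0.587G + 0.114B.
    Assumes `rgb` is an HxWx3 array in R,G,B channel order."""
    r = rgb[..., 0].astype(np.float64)
    g = rgb[..., 1].astype(np.float64)
    b = rgb[..., 2].astype(np.float64)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(gray, 0, 255).astype(np.uint8)
```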

S2.2: perform illumination compensation on the grayscale image, compensating the brightness of regions with different brightness levels so that the brightness range of the whole image is more concentrated, which facilitates information extraction. The main steps are:

compute the average gray level of the grayscale image I, and record its width w and height h;

take m = min(w, h)/20 and partition the grayscale image I into blocks of size m × m; compute the mean of each block to obtain the block brightness matrix D;

subtract the average gray level of I from each element of D to obtain the block brightness-difference matrix E;

interpolate E with bicubic interpolation to obtain a brightness distribution matrix R of the same size as the grayscale image I;

obtain the image with uniform brightness as result = I - R.
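A minimal sketch of step S2.2, assuming OpenCV and NumPy; cv2.resize with INTER_CUBIC stands in for the bicubic interpolation of matrix E described above:

```python
import cv2
import numpy as np

def illumination_compensation(gray):
    """Block-wise illumination compensation (step S2.2) for an HxW uint8 image."""
    h, w = gray.shape
    mean_gray = gray.mean()
    m = max(1, min(w, h) // 20)                    # block size
    rows, cols = int(np.ceil(h / m)), int(np.ceil(w / m))
    D = np.zeros((rows, cols), np.float32)         # block brightness matrix
    for i in range(rows):
        for j in range(cols):
            D[i, j] = gray[i*m:(i+1)*m, j*m:(j+1)*m].mean()
    E = D - mean_gray                              # brightness difference matrix
    # Bicubic interpolation of E up to the full image size gives the
    # brightness distribution matrix R.
    R = cv2.resize(E, (w, h), interpolation=cv2.INTER_CUBIC)
    return np.clip(gray.astype(np.float32) - R, 0, 255).astype(np.uint8)
```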

Step S3 removes noise with median filtering: it removes some of the isolated bright-spot noise in the image while preserving crack edge details, which facilitates subsequent processing. The basic principle is a convolution-style pass over the image: for each pixel, the pixels in its neighborhood (whose extent depends on the kernel size) are sorted by gray value and the median of the sequence is assigned to that pixel. Pixels whose gray values differ sharply from their surroundings are thus replaced by values close to the neighboring pixels, eliminating isolated noise points. Specifically:

S3.1: determine the convolution kernel size; a 5×5 kernel is used in this embodiment;

S3.2: align the center of the kernel with the first pixel at the top left of the image, sort the gray values of the pixels covered by the kernel, take the median of the sorted sequence, and assign this median to the pixel at the kernel center;

S3.3: repeat step S3.2 for every pixel of the image to obtain the noise-reduced image.
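With OpenCV this reduces to a single call; a sketch using the 5×5 kernel of this embodiment:

```python
import cv2

def median_denoise(img, ksize=5):
    """Step S3: replace each pixel by the median gray value under a ksize x ksize window."""
    return cv2.medianBlur(img, ksize)
```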

Step S4 automatically generates the optimal segmentation threshold th by maximizing the between-class variance of the gradient magnitudes. Image thresholds usually have to be tuned manually, which hinders large-scale use of a method; here a threshold is solved for by maximizing the between-class variance of the gradient magnitudes, making the process adaptive. Specifically:

S4.1: let the optimal segmentation threshold th divide the image gradient magnitudes into two classes C1 and C2, where the gradient magnitudes in C1 are smaller than th and those in C2 are greater than th; let the means of the two classes be m1 and m2, let the global mean of the gradient magnitudes be mG, and let the probabilities of a gradient magnitude being assigned to C1 and C2 be p1 and p2 respectively; then:

p1*m1 + p2*m2 = mG

p1 + p2 = 1

S4.2: from the definition of variance, the between-class variance is:

σ² = p1(m1 - mG)² + p2(m2 - mG)²

S4.3: normalize the gradient magnitude matrix to the range [0, 255], then iterate over the 256 possible values to find the value that maximizes the between-class variance σ²; this value is the desired threshold th.
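A sketch of step S4, assuming the gradient magnitudes are computed with the Sobel operator (the choice of gradient operator here is an assumption); the loop maximizes σ² over the 256 normalized levels:

```python
import cv2
import numpy as np

def adaptive_gradient_threshold(gray):
    """Step S4: search the gradient-magnitude histogram for the threshold th
    that maximizes sigma^2 = p1*(m1-mG)^2 + p2*(m2-mG)^2."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)
    # Normalize the gradient magnitudes to [0, 255].
    mag = (255 * (mag - mag.min()) / (mag.ptp() + 1e-12)).astype(np.uint8)

    hist = np.bincount(mag.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                    # probability of each level
    mG = np.sum(np.arange(256) * p)          # global mean

    best_th, best_var = 0, -1.0
    for th in range(256):
        p1 = p[:th + 1].sum()
        p2 = 1.0 - p1
        if p1 == 0 or p2 == 0:
            continue
        m1 = np.sum(np.arange(th + 1) * p[:th + 1]) / p1
        m2 = np.sum(np.arange(th + 1, 256) * p[th + 1:]) / p2
        var = p1 * (m1 - mG) ** 2 + p2 * (m2 - mG) ** 2
        if var > best_var:
            best_var, best_th = var, th
    return best_th
```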

Step S5 specifically comprises:

S5.1: compute the image gradient. Image edges are where the gray value changes markedly, and the size of the change can be measured by the image gradient magnitude, so the Sobel operator is used to compute the gradient magnitude and direction;

S5.2: apply non-maximum suppression to the gradient magnitudes. After the global gradient has been computed, the parts where the gradient changes markedly still need to be identified. To determine the edges, local gradient maxima must be kept; this is done by setting non-maximum points to zero, which yields thinned edges and a set of all candidate edge points;

S5.3: perform double-threshold screening: pixels whose gray-level change is greater than the highest threshold are set as strong edge pixels, and pixels below the lowest threshold are removed; pixels between the lowest and highest thresholds are set as weak edge pixels, and a weak edge pixel is kept if there is a strong edge pixel in its neighborhood and discarded otherwise. If only the strong edge contours were kept, some edges might not close, so they are supplemented with points lying between the low and high thresholds to make the edges as closed as possible.

The lowest threshold is 0.66th and the highest threshold is 0.9th.
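Since cv2.Canny internally performs the gradient computation, non-maximum suppression, and double-threshold hysteresis of steps S5.1 to S5.3, a minimal sketch of step S5 with the thresholds above is:

```python
import cv2

def adaptive_canny(denoised, th):
    """Step S5: Canny edge detection with low = 0.66*th and high = 0.9*th."""
    return cv2.Canny(denoised, 0.66 * th, 0.9 * th)
```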

Step S6 removes noise according to the areas of the connected components; specifically:

S6.1: scan every pixel of the first image, group together pixels that have the same pixel value and are connected within an 8-neighborhood, and finally obtain the connected components of all pixels in the first image;

S6.2: sort all connected components by area from smallest to largest; components whose area is smaller than the preset minimum are regarded as noise and removed, giving the second image.

The preset minimum in step S6.2 is the 90th percentile of the areas of all connected components.
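A sketch of step S6 using OpenCV's connected-component statistics, with the 90th-percentile area cutoff described above:

```python
import cv2
import numpy as np

def remove_small_components(edges):
    """Step S6: drop connected components whose area is below the 90th
    percentile of all component areas (treated as noise)."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]          # skip background label 0
    if areas.size == 0:
        return edges
    min_area = np.percentile(areas, 90)
    kept = np.zeros_like(edges)
    for label in range(1, n):
        if stats[label, cv2.CC_STAT_AREA] >= min_area:
            kept[labels == label] = 255
    return kept
```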

Step S7 specifically comprises:

S7.1: dilate the second image with a convolution kernel B to connect the broken cracks and fill the holes in dirty blocks:

define a kernel B, which may have any shape and size and has a separately defined reference point (anchor point); the kernel is usually a square or a disk with a reference point and may be called a template or mask;

convolve kernel B with the second image and compute the maximum pixel value within the region covered by B;

assign this maximum value to the pixel specified by the reference point;

S7.2: erode with a convolution kernel C:

define a kernel C, which may have any shape and size and has a separately defined reference point (anchor point); the kernel is usually a square or a disk with a reference point and may be called a template or mask;

convolve kernel C with the dilated second image and compute the minimum pixel value within the region covered by C;

assign this minimum value to the pixel specified by the reference point to obtain the third image.
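A sketch of the closing operation of step S7 (dilation followed by erosion); the 5×5 rectangular kernel is an assumption, since the kernel shape and size are left open above:

```python
import cv2

def close_cracks(edges, ksize=5):
    """Step S7: morphological closing (dilation then erosion) to join broken
    crack segments and fill holes inside dirty blocks."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    dilated = cv2.dilate(edges, kernel)   # maximum over the kernel footprint
    closed = cv2.erode(dilated, kernel)   # minimum over the kernel footprint
    return closed
```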

Step S8 specifically comprises:

S8.1: scan every pixel of the third image, group together pixels that have the same pixel value and are connected within an 8-neighborhood, finally obtain all connected components of the image, and compute the area, length, and width of each component;

S8.2: compute the length-by-width (bounding-box) area of each connected component;

S8.3: if a component's length-by-width area is greater than a first preset proportion of the original image area and its actual area is greater than a second preset proportion of the original image area, it is regarded as a dirty block rather than a crack and is removed, giving the final crack image.

In this embodiment, the first preset value is 1/25 and the second preset value is 1/100.
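A sketch of step S8 with the 1/25 and 1/100 proportions of this embodiment; the bounding-box width and height come from the component statistics:

```python
import cv2
import numpy as np

def remove_dirty_blocks(closed, first=1/25, second=1/100):
    """Step S8: a component whose bounding-box area exceeds first*image_area
    AND whose pixel area exceeds second*image_area is treated as a dirty
    block (not a crack) and removed."""
    h, w = closed.shape
    img_area = h * w
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    result = np.zeros_like(closed)
    for label in range(1, n):
        bbox_area = stats[label, cv2.CC_STAT_WIDTH] * stats[label, cv2.CC_STAT_HEIGHT]
        area = stats[label, cv2.CC_STAT_AREA]
        if bbox_area > first * img_area and area > second * img_area:
            continue                      # dirty block: drop it
        result[labels == label] = 255
    return result
```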

Embodiment 3

This embodiment applies the methods of Embodiment 1 and Embodiment 2 and provides the following specific example:

Fig. 2 is an image of a crack in a tunnel arch captured by a real digital camera. The image resolution is 4019 × 1000 pixels. The image suffers from uneven illumination, which makes objects of the same real-world color appear with different brightness, and from excessive noise that interferes with crack recognition; there is also a dirty iron patch in the middle of the image that is not a crack. Fig. 3 shows the original image annotated with labelme.

The current conventional approach in industry is to obtain a threshold th with the OTSU algorithm and to perform Canny edge detection with th and 2th as the thresholds. The result is shown in Fig. 4 and exhibits three serious problems: although the algorithm detects some of the edges, there is a large amount of snowflake noise, the crack is not fully detected in the dark region at its right tail, and the dirty iron patch in the middle cannot be filtered out. A new detection method is therefore needed.
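For comparison, the conventional baseline described above can be sketched as follows (Otsu threshold on the gray image, then Canny with th and 2th):

```python
import cv2

def baseline_otsu_canny(gray):
    """Conventional baseline: Otsu threshold th, then Canny with (th, 2*th)."""
    th, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return cv2.Canny(gray, th, 2 * th)
```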

The crack detection method of the present invention is now used to detect the cracks in the image of this embodiment, in the following steps:

1) 1000 and 4019 are the width and length of the input image respectively, and 50 = min(4019, 1000)/20, so the image is divided into 80 × 20 grid cells and illumination compensation is performed, giving an image that reveals more information. As shown in Fig. 5, the brightness of the dark parts of the image is effectively raised and the gray-level range of the image is more concentrated;

2) Starting from the uniformly bright image obtained in step 1), apply a 5×5 median filter to remove some of the isolated bright-spot noise while preserving edge details for later processing. As shown in Fig. 6, after step 2) the image keeps the edge characteristics of the cracks while becoming much smoother elsewhere, which greatly helps to suppress noise;

3) Starting from the noise-reduced image obtained in step 2), compute the image gradient magnitudes and automatically generate the optimal segmentation threshold th by maximizing the between-class variance of the gradient magnitudes; perform edge detection with 0.66th and 0.9th as the lowest and highest thresholds of the Canny operator, obtaining crack edges that still contain noise but carry richer detail. As shown in Fig. 7, after this step the image is binary, the noise is greatly reduced, and, compared with the traditional Canny operator, the thresholds are set automatically;

4) Starting from the binary image with edges and noise obtained in step 3), compute the connected components with an eight-connected-domain search. The 90th percentile of all component areas is 33; components with area smaller than 33 are regarded as noise and removed directly. The result is shown in Fig. 8;

5) Starting from the relatively clean but fragmented crack image obtained in step 4), use the closing operation to join the broken cracks and fill the holes in the dirty blocks in preparation for step 6). The result is shown in Fig. 9: the breaks in the cracks identified in the previous binary image are effectively connected without changing the overall character of the cracks, and the holes of the dirty iron patch are merged into a single region;

6) Starting from the image with further-joined cracks obtained in step 5), compute the connected components with an eight-connected-domain search. If a component's length-by-width area is greater than 1/25 of the original image area and its actual area is greater than 1/100 of the original image area, it is regarded not as a crack but as surface dirt. Removing these noise components gives the final clean crack detection map shown in Fig. 10: after step 6) much of the noise, including the whole block of metal in the middle, is effectively removed, and the cracks are well connected overall.

Comparing Fig. 2 with Fig. 10: with the methods of Embodiment 1 and Embodiment 2 the noise is greatly reduced, which helps crack identification, and the cracks in dark areas are effectively identified with better connectivity.

SSIM (Structural Similarity) is an index that measures the similarity of two images. Given two images x and y, their structural similarity can be computed as:

SSIM(x, y) = (2μxμy + c1)(2σxy + c2) / ((μx² + μy² + c1)(σx² + σy² + c2))

where μx is the mean of x, μy is the mean of y, σx² is the variance of x, σy² is the variance of y, and σxy is the covariance of x and y. c1 = (k1L)² and c2 = (k2L)² are constants used to maintain stability, L is the dynamic range of the pixel values, k1 = 0.01, and k2 = 0.03.
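A direct single-window implementation of this formula is sketched below; note that library implementations such as skimage's structural_similarity compute a windowed average, so values may differ slightly:

```python
import numpy as np

def global_ssim(x, y, L=255, k1=0.01, k2=0.03):
    """Single-window SSIM computed directly from the formula above.
    x, y: same-sized grayscale images (e.g. crack maps scaled to 0/255)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```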

Structural similarity ranges from -1 to 1; the closer the SSIM value is to 1, the more similar the two images are, and the value is 1 when the two images are identical. The SSIM value between a detection result and the labelled original reflects the accuracy of crack detection to some extent. By calculation, the SSIM value between Fig. 10 and Fig. 2 is 0.963, while that between Fig. 4 and Fig. 3 is 0.921, showing that on the SSIM metric the method of this embodiment is clearly better than the commonly used OTSU-based method.

Figs. 11 to 13 compare the lengths of the detected cracks; the crack-annotation images were obtained by manually annotating the original image and the detection results. There are two crack segments, left and right, in the experimental image, and the length and width information is computed from connectivity.

The calculations show that both algorithms identify the left crack fairly completely, very close to the original annotation; for the right crack, the method of this embodiment is clearly better than the commonly used OTSU-based method and detects a larger portion of it.

The same or similar reference signs correspond to the same or similar parts;

The terms describing positional relationships in the drawings are for illustration only and shall not be construed as limiting this patent;

Obviously, the above embodiments of the present invention are merely examples given to illustrate the present invention clearly and are not intended to limit its implementation. Those of ordinary skill in the art can make other changes or variations in different forms on the basis of the above description. It is neither necessary nor possible to enumerate all implementations here. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the claims of the present invention.

Claims (10)

1. An adaptive canny method for crack detection, comprising the steps of:
s1: acquiring an image including a crack;
s2: performing illumination compensation according to the size of the image to obtain an image with uniform brightness;
s3: performing median filtering on the image with uniform brightness to obtain a noise reduction image;
s4: calculating the gradient amplitude of the noise reduction image, and automatically generating an optimal segmentation threshold th by a method of maximizing the variance among gradient amplitude classes;
s5: obtaining a lowest threshold and a highest threshold of a Canny edge detection algorithm according to the optimal segmentation threshold th, and carrying out edge detection by using the lowest threshold and the highest threshold to obtain a first image;
s6: searching for and calculating connected components of the first image by utilizing an eight-connected domain search, removing the connected components with an area smaller than a preset minimum value, and obtaining a second image;
s7: splicing the broken cracks together by using a closing operation on the second image and filling holes of dirty blocks to obtain a third image;
s8: searching for and calculating connected components of the third image by using an eight-connected domain search, and removing a connected component if its length-by-width area is larger than a first preset value of the original image area and its actual area is larger than a second preset value of the original image area, to obtain a final crack image.
2. The adaptive canny method for crack detection according to claim 1, wherein step S2 is specifically:
s2.1: converting the image into a gray scale image, where f(x, y) denotes the gray value after conversion and R, G, B respectively represent the red, green and blue components in the original true color image, so that:
f(x,y)=0.299R+0.587G+0.114B
s2.2: performing illumination complementation on the gray level image, and performing brightness compensation on different brightness areas in the image:
solving the average gray level of the gray level diagram I, and recording the width w and the height h of the gray level diagram I;
taking m = min(w, h)/20, dividing the gray scale image I into blocks of size m × m, and calculating the average value of each block to obtain a brightness matrix D of the sub-blocks;
subtracting the average gray level of the gray level map I from each element of the matrix D to obtain a brightness difference matrix E of the sub-block;
interpolating the matrix E into a brightness distribution matrix R with the same size as the gray level diagram I by bicubic interpolation;
an image result = I - R of uniform brightness is obtained.
3. The adaptive canny method for crack detection according to claim 2, wherein step S3 is specifically:
s3.1: determining a convolution kernel size;
s3.2: the center of the convolution kernel corresponds to the first pixel at the upper left of the image, the gray values of all pixels in the area covered by the convolution kernel are ordered, the median of the gray sequence is selected, and the median is assigned to the pixel corresponding to the center of the convolution kernel;
s3.3: and repeating the step S3.2 for each pixel on the image to obtain a noise reduction image.
4. The adaptive canny method for crack detection of claim 3, wherein step S4 automatically generates an optimal segmentation threshold th by maximizing the inter-class variance of the gradient magnitude, specifically:
s4.1: setting an optimal segmentation threshold th to divide the image gradient amplitude into two types C1 and C2, wherein the image gradient amplitude in C1 is smaller than th, the image gradient amplitude in C2 is larger than th, the respective average value of the two types of gradient amplitude is m1 and m2, the global average value of the gradient amplitude is mG, and the probability that the gradient amplitude is divided into the C1 and C2 type is p1 and p2 respectively, and the method comprises the following steps:
p1*m1+p2*m2=mG
p1+p2=1
s4.2: based on the concept of variance, the inter-class variance expression is:
σ² = p1(m1-mG)² + p2(m2-mG)²
s4.3: normalizing the gradient magnitude matrix to the range [0, 255], and then iterating over the 256 values to find the one at which the between-class variance σ² reaches its maximum; this value is the threshold th.
5. The adaptive canny method for crack detection of claim 4, wherein step S5 is specifically:
s5.1: calculating an image gradient;
s5.2: performing non-maximum suppression on the gradient amplitude;
s5.3: performing double-threshold screening: setting pixels whose gray level change is larger than the highest threshold as strong edge pixels, and removing pixels lower than the lowest threshold; pixels between the lowest and highest thresholds are set as weak edge pixels, and a weak edge pixel is retained if there is a strong edge pixel in its neighborhood and rejected otherwise.
6. The adaptive canny method for crack detection of claim 5, wherein the lowest threshold is 0.66th and the highest threshold is 0.9th.
7. The adaptive canny method for crack detection of claim 6, wherein step S6 is specifically:
s6.1: scanning each pixel point of the first image, and grouping together pixels which have the same pixel value and are connected within an 8-neighborhood, to finally obtain the connected components of all pixels in the first image;
s6.2: arranging all the connected components from small to large in area, and removing the connected components with areas smaller than the preset minimum value to obtain a second image.
8. The adaptive canny method for crack detection of claim 7, wherein the preset minimum in step S6.2 is the 90th percentile of the areas of all connected components.
9. The adaptive canny method for crack detection of claim 8, wherein step S7 is specifically:
s7.1: performing expansion operation on the second image by using a convolution kernel B:
defining a convolution kernel B, which may be of any shape and size, and has a separately defined reference point-anchor point;
convolving the convolution kernel B with the second image, and calculating the maximum value of the pixel points of the coverage area of the convolution kernel B;
assigning the maximum value to the pixel designated by the reference point;
s7.2: performing an erosion operation using a convolution kernel C:
defining a convolution kernel C, which may be of any shape and size, and having a separately defined reference point-anchor point;
convolving the convolution kernel C with the second image subjected to expansion operation, and calculating a minimum value of pixel points of a coverage area of the convolution kernel C;
and assigning the minimum value to the pixel designated by the reference point to obtain a third image.
10. The adaptive canny method for crack detection of claim 9, wherein step S8 is specifically:
s8.1: scanning each pixel point of the third image, grouping together pixels which have the same pixel value and are connected within an 8-neighborhood, finally obtaining all pixel connected components in the image, and calculating the area, length and width of each connected component;
s8.2: calculating the length-by-width area of each connected component;
s8.3: if the length-by-width area of a connected component is larger than the first preset value of the original image area and its actual area is larger than the second preset value of the original image area, removing the connected component to obtain a final crack image.
CN202310241045.7A 2023-03-13 2023-03-13 Self-adaptive canny method for crack detection Pending CN116485719A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310241045.7A CN116485719A (en) 2023-03-13 2023-03-13 Self-adaptive canny method for crack detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310241045.7A CN116485719A (en) 2023-03-13 2023-03-13 Self-adaptive canny method for crack detection

Publications (1)

Publication Number Publication Date
CN116485719A true CN116485719A (en) 2023-07-25

Family

ID=87225778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310241045.7A Pending CN116485719A (en) 2023-03-13 2023-03-13 Self-adaptive canny method for crack detection

Country Status (1)

Country Link
CN (1) CN116485719A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557873A (en) * 2024-01-12 2024-02-13 四川高速公路建设开发集团有限公司 Tunnel face crack identification method based on image identification
CN117975374A (en) * 2024-03-29 2024-05-03 山东天意机械股份有限公司 Intelligent visual monitoring method for double-skin wall automatic production line
CN120088260A (en) * 2025-05-06 2025-06-03 成都盛锴科技有限公司 Tunnel crack image identification method and tunnel wall uniform image detection system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862677A (en) * 2017-10-16 2018-03-30 中铁第四勘察设计院集团有限公司 The Tunnel Lining Cracks recognition methods of thresholding algorithm and system between a kind of class based on gradient
CN110390664A (en) * 2018-11-30 2019-10-29 武汉滨湖电子有限责任公司 One kind being based on the recognition methods of holes filling pavement crack
CN111833366A (en) * 2020-06-03 2020-10-27 佛山科学技术学院 An Edge Detection Method Based on Canny Algorithm

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862677A (en) * 2017-10-16 2018-03-30 中铁第四勘察设计院集团有限公司 The Tunnel Lining Cracks recognition methods of thresholding algorithm and system between a kind of class based on gradient
CN110390664A (en) * 2018-11-30 2019-10-29 武汉滨湖电子有限责任公司 One kind being based on the recognition methods of holes filling pavement crack
CN111833366A (en) * 2020-06-03 2020-10-27 佛山科学技术学院 An Edge Detection Method Based on Canny Algorithm

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
徐发东: "基于图像处理的桥梁表面裂缝病害检测的研究" (Research on Bridge Surface Crack Detection Based on Image Processing), 中国优秀硕士学位论文全文数据库工程科技II辑(月刊), no. 2, 15 February 2023 (2023-02-15), pages 034-908 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557873A (en) * 2024-01-12 2024-02-13 四川高速公路建设开发集团有限公司 Tunnel face crack identification method based on image identification
CN117557873B (en) * 2024-01-12 2024-04-05 四川高速公路建设开发集团有限公司 Tunnel face crack identification method based on image identification
CN117975374A (en) * 2024-03-29 2024-05-03 山东天意机械股份有限公司 Intelligent visual monitoring method for double-skin wall automatic production line
CN120088260A (en) * 2025-05-06 2025-06-03 成都盛锴科技有限公司 Tunnel crack image identification method and tunnel wall uniform image detection system

Similar Documents

Publication Publication Date Title
CN112419250B (en) Digital Image Extraction, Crack Repair and Crack Parameter Calculation Method for Pavement Cracks
CN110866924B (en) Line structured light center line extraction method and storage medium
CN116485719A (en) Self-adaptive canny method for crack detection
CN108416766B (en) Double-side light-entering type light guide plate defect visual detection method
CN110286124B (en) Machine vision-based refractory brick measuring system
CN114723681B (en) Concrete crack defect detection method based on machine vision
WO2019134252A1 (en) Method and device for automated portrayal and accurate measurement of width of structural crack
CN108629775B (en) A kind of hot high-speed wire surface image processing method
WO2018107939A1 (en) Edge completeness-based optimal identification method for image segmentation
CN102914545B (en) A method and system for detecting gear defects based on computer vision
CN106780486B (en) A method for image extraction of steel plate surface defects
TW536919B (en) Color image representing a patterned article
CN111179243A (en) Small-size chip crack detection method and system based on computer vision
CN115345885A (en) Method for detecting appearance quality of metal fitness equipment
CN116883408B (en) Integrating instrument shell defect detection method based on artificial intelligence
CN111272768B (en) Ceramic tube detection method
CN104700112A (en) Method for detecting parasite eggs in excrement based on morphological characteristics
CN101995412B (en) Robust glass scratch defect detection method and device thereof
CN116485764B (en) Structural surface defect identification method, system, terminal and medium
CN116718599B (en) A method for measuring apparent crack length based on three-dimensional point cloud data
CN107545557A (en) Egg detecting method and device in excrement image
CN115578343A (en) Crack size measuring method based on image communication and skeleton analysis
CN116542968A (en) A Smart Rebar Counting Method Based on Template Matching
CN112465817B (en) Pavement crack detection method based on directional filter
CN112017109B (en) Online ferrographic video image bubble elimination method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination