
CN112304960B - High-resolution image object surface defect detection method based on deep learning - Google Patents

High-resolution image object surface defect detection method based on deep learning Download PDF

Info

Publication number
CN112304960B
Authority
CN
China
Prior art keywords
image
resolution
edge
images
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011600441.7A
Other languages
Chinese (zh)
Other versions
CN112304960A (en)
Inventor
曾向荣
钟志伟
刘衍
张政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN202011600441.7A priority Critical patent/CN112304960B/en
Publication of CN112304960A publication Critical patent/CN112304960A/en
Application granted granted Critical
Publication of CN112304960B publication Critical patent/CN112304960B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8854Grading and classifying of flaws
    • G01N2021/8874Taking dimensions of defect into account
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84Systems specially adapted for particular applications
    • G01N21/88Investigating the presence of flaws or contamination
    • G01N21/8851Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8887Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

The invention discloses a deep-learning-based method for detecting surface defects of objects in high-resolution images, comprising the following steps: S1: fix a plurality of high-resolution cameras directly above the object, collect images of the object surface with the cameras, and stitch the images to obtain a high-resolution image; S2: preprocess the high-resolution image with edge detection to obtain strong edge regions and the corresponding original image regions; S3: input the edge images and the original-region images into a deep convolutional neural network for feature fusion and classification of the target regions; S4: output the type of surface crack and the probability of belonging to that type. Aimed at large targets, for which a single camera has a limited field of view and multi-camera detection is inefficient, the invention adopts a deep-learning-based method for detecting surface cracks over a wide area, which effectively improves the detection range and detection accuracy and provides a basis for nondestructive testing of large targets.

Description

High-resolution image object surface defect detection method based on deep learning
Technical Field
The invention relates to the technical field of image processing, in particular to a high-resolution image object surface defect detection method based on deep learning.
Background
With the rapid development of industry and the lengthening service life of large targets such as large workpieces, surface pressing platforms, and aircraft skins, nondestructive detection of surface cracks has increasingly become a key and difficult research problem, especially for objects with irregular surfaces. If cracks exist on the surface of an aircraft, flight carries a large risk; such cracks can be caused by improper operation during construction and can damage the target.
With the development of machine vision technology, machines have replaced human eyes in many aspects of society and have profoundly changed the way people live and work. Machine vision inspection integrates machine vision and automation technology and is widely applied to product defect inspection in manufacturing, such as inspection and positioning during product assembly, product packaging inspection, product appearance quality inspection, and goods or fruit sorting in the logistics industry; it can replace manual work to complete various operations quickly and accurately.
In the prior art, the application with number 201910264717.X uses image preprocessing and a PixelNet network to segment defect images, but does not identify the defects on the segmented surface; the application with number 201810820348.3 introduces an attention module into the convolution module to improve detection accuracy, but this increases training difficulty.
Disclosure of Invention
The invention provides a deep-learning-based high-resolution image object surface defect detection method that effectively improves the target detection range and detection precision and provides a basis for nondestructive detection of large targets.
In order to achieve the purpose, the invention adopts the following technical scheme:
a high-resolution image object surface defect detection method based on deep learning comprises the following steps:
S1: fixing a plurality of high-resolution cameras directly above the object, collecting images of the object surface with the cameras, and stitching the images to obtain a high-resolution image;
S2: preprocessing the high-resolution image with edge detection to detect the edges of suspected defect portions, obtaining regions with significant edges, partitioning the regions with significant edges by circumscribed-rectangle fitting, and mapping them to the original image to obtain edge image regions and original image regions;
S3: inputting the edge image regions and the original image regions into a deep convolutional neural network for feature fusion and classification of the target regions;
S4: outputting the type of surface crack and the probability of belonging to that type.
Preferably, the image stitching in step S1 includes image preprocessing, image feature point matching, image registration, and image fusion.
Preferably, the image preprocessing comprises image light correction, image denoising, and camera distortion correction;
the image feature point matching adopts an image-feature-based matching method, including corner detection, contour-feature-based registration, and SIFT-based matching;
the image registration, after feature point matching, calculates the spatial model among multiple images and performs a spatial transformation so that the overlapping parts of two images are spatially aligned, which is the key to image stitching;
the image fusion aims to obtain a seamless high-quality image, eliminating seams and brightness differences without losing original image information and achieving a smooth transition across the stitching boundary.
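As an illustration of the feature point matching step, the sketch below (assuming OpenCV's SIFT implementation, cv2.SIFT_create, and a brute-force matcher) detects keypoints in two overlapping camera views and keeps matches that pass Lowe's ratio test; the function name and ratio threshold are illustrative choices, not part of the patent.

```python
import cv2

def match_sift_features(img_a, img_b, ratio=0.75):
    """Detect SIFT keypoints in two overlapping images and return good matches."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

    sift = cv2.SIFT_create()
    kps_a, desc_a = sift.detectAndCompute(gray_a, None)
    kps_b, desc_b = sift.detectAndCompute(gray_b, None)

    # Brute-force matcher with k=2 nearest neighbours for Lowe's ratio test
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn_matches = matcher.knnMatch(desc_a, desc_b, k=2)
    good = []
    for pair in knn_matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kps_a, kps_b, good
```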
Preferably, the spatial transformation between the multiple images in the image registration comprises: translation, rotation, scaling, affine transformation, projective transformation.
Preferably, the projective transformation is more general than translation, rotation, scaling, and affine transformation;
if two images $I_1(x, y)$ and $I_2(x', y')$ are related by a projective transformation, the relation is expressed by the homogeneous equation:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim M \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad M = \begin{bmatrix} m_0 & m_1 & m_2 \\ m_3 & m_4 & m_5 \\ m_6 & m_7 & 1 \end{bmatrix}$$

wherein $m_0$, $m_1$, $m_3$ and $m_4$ jointly represent the rotation angle and scaling; $m_2$ and $m_5$ represent the translation in the x and y directions respectively; $m_6$ and $m_7$ represent the deformation in the x and y directions respectively; the key to image registration is to determine the parameters of the spatial transformation model M from the homogeneous equation.
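As a sketch of how the projective model M might be estimated and applied in practice, the example below assumes OpenCV and the matched keypoints from the previous sketch (match_sift_features is the illustrative helper defined there): a homography is fitted with RANSAC and one image is warped into the frame of the other.

```python
import cv2
import numpy as np

def register_pair(img_a, img_b):
    """Estimate the 3x3 projective model M between two overlapping images
    and warp img_a into the coordinate frame of img_b."""
    kps_a, kps_b, good = match_sift_features(img_a, img_b)  # from the sketch above

    src = np.float32([kps_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kps_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # M is the matrix [[m0, m1, m2], [m3, m4, m5], [m6, m7, 1]] of the homogeneous equation
    M, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_b.shape[:2]
    warped_a = cv2.warpPerspective(img_a, M, (w, h))
    return M, warped_a
```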
Preferably, in step S2, an edge operator is used to preprocess the high-resolution image for edge detection, detecting the edges of suspected defect portions to obtain regions with significant edges; the regions with significant edges are then partitioned by circumscribed-rectangle fitting and mapped to the corresponding areas of the original image, so as to obtain edge image regions and original image regions.
Preferably, the edge operator is any one of the Canny edge detection operator, the Laplacian operator, the Prewitt operator, and the Sobel operator.
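The four edge operators are interchangeable here. As a rough illustration, and assuming OpenCV, the sketch below applies the chosen operator to a grayscale image and thresholds the response into a binary map of significant edges; since OpenCV has no dedicated Prewitt call, the Prewitt kernels are written out explicitly.

```python
import cv2
import numpy as np

def edge_map(gray, operator="canny"):
    """Return a binary edge map of a grayscale image using the chosen operator."""
    if operator == "canny":
        return cv2.Canny(gray, 50, 150)
    if operator == "laplacian":
        resp = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)
    elif operator == "sobel":
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        resp = np.hypot(gx, gy)
    elif operator == "prewitt":
        kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float64)
        ky = kx.T
        gx = cv2.filter2D(gray.astype(np.float64), -1, kx)
        gy = cv2.filter2D(gray.astype(np.float64), -1, ky)
        resp = np.hypot(gx, gy)
    else:
        raise ValueError(f"unknown operator: {operator}")

    # Rescale the gradient magnitude and keep only strong responses as "significant" edges
    resp = cv2.normalize(np.abs(resp), None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, binary = cv2.threshold(resp, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```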
Preferably, the convolutional neural network structure in step S3 adopts an improved NIN network.
Preferably, the feature fusion uses a softmax function to map the output scalar to a probability distribution over the image's classes, and the objective function is:

$$J(\theta) = -\frac{1}{N}\sum_{i=1}^{N} y_i \log \hat{y}_i + R(\theta)$$

where $N$ is the number of training samples, $y_i$ is the actual class of the sample, $\hat{y}_i$ denotes the predicted output of the sample, $\theta$ denotes the network model parameters, and $R(\theta)$ is an $L_2$ regularization term set to prevent overfitting during network training, as shown in the following equation, with $\lambda$ taking the value 0.00005:

$$R(\theta) = \lambda \sum_{j} \theta_j^{2}$$
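To make the reconstructed objective concrete, here is a minimal NumPy sketch under the assumption that the loss is the standard softmax cross-entropy over N samples plus an L2 penalty with λ = 0.00005; the exact formula in the original patent figures is not recoverable, so this is illustrative only.

```python
import numpy as np

LAMBDA = 5e-5  # regularization coefficient given in the patent text

def softmax(logits):
    """Row-wise softmax: map output scalars to a class probability distribution."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=1, keepdims=True)

def objective(logits, labels, params):
    """Cross-entropy over N samples plus an L2 term over the model parameters."""
    n = logits.shape[0]
    probs = softmax(logits)
    # Pick the predicted probability of each sample's actual class
    nll = -np.log(probs[np.arange(n), labels] + 1e-12).mean()
    l2 = LAMBDA * sum(np.sum(p ** 2) for p in params)
    return nll + l2
```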
Compared with the prior art, the invention has the following beneficial effects: the invention provides a deep-learning-based high-resolution image object surface defect detection method; aimed at the problems that, for large targets, a single camera has a limited field of view and multi-camera detection is inefficient, it adopts a deep-learning-based method for detecting surface cracks over a wide area, effectively improves the target detection range and detection precision, and provides a basis for nondestructive detection of large targets.
Drawings
FIG. 1 is a general flow chart of examples 1 and 2 of the present invention;
FIG. 2 is a high resolution image mosaic of example 1 of the present invention;
FIG. 3 is a structural diagram of the NIN convolutional neural network of example 2 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments.
In the description of the present invention, "a plurality" means two or more unless otherwise specified; the terms "upper", "lower", "left", "right", "inner", "outer", "front", "rear", "head", "tail", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are only for convenience in describing and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, should not be construed as limiting the invention. Furthermore, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the term "connected" is to be interpreted broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediary. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
Example 1
Referring to fig. 1 and 2, a method for detecting surface defects of a high-resolution image object based on deep learning includes the following steps:
S1: fixing a plurality of high-resolution cameras directly above the object, collecting images of the object surface with the cameras, and stitching the images to obtain a high-resolution image;
S2: preprocessing the high-resolution image with edge detection to obtain regions with significant edges and the corresponding original image regions;
S3: inputting the regions with significant edges and the original grayscale image into a deep convolutional neural network for feature fusion and classification of the target regions;
S4: outputting the type of surface crack and the probability of belonging to that type.
The image stitching in step S1 includes image preprocessing, image feature point matching, image registration, and image fusion.
The image preprocessing comprises basic operations of image light correction, image denoising and camera distortion correction.
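As an illustration of these preprocessing operations, the sketch below assumes OpenCV and a previously calibrated camera (camera_matrix and dist_coeffs stand in for the calibration results, and the gamma value is an arbitrary choice): it applies a gamma-based light correction, non-local-means denoising, and lens distortion correction.

```python
import cv2
import numpy as np

def preprocess(img, camera_matrix, dist_coeffs, gamma=1.2):
    """Light correction, denoising and distortion correction for one camera image."""
    # Gamma-based light correction via a lookup table
    table = np.array([((i / 255.0) ** (1.0 / gamma)) * 255 for i in range(256)],
                     dtype=np.uint8)
    corrected = cv2.LUT(img, table)

    # Non-local-means denoising for a colour image
    denoised = cv2.fastNlMeansDenoisingColored(corrected, None, 5, 5, 7, 21)

    # Undo lens distortion using the calibration of this camera
    undistorted = cv2.undistort(denoised, camera_matrix, dist_coeffs)
    return undistorted
```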
The image feature point matching adopts an image-feature-based matching method that uses only part of the image information, such as contour and corner features; it mainly employs corner detection, contour-feature-based registration, and SIFT-based registration.
After the image feature points are matched, image registration calculates the spatial model among the multiple images and performs a spatial transformation so that the overlapping parts of two images are spatially aligned; this is the key to image stitching.
The spatial transformation between the multiple images in image registration includes translation, rotation, scaling, affine transformation, and projective transformation, of which the projective transformation is the most general.
Suppose images $I_1(x, y)$ and $I_2(x', y')$ are related by a projective transformation; the relation is expressed by the homogeneous equation:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim M \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad M = \begin{bmatrix} m_0 & m_1 & m_2 \\ m_3 & m_4 & m_5 \\ m_6 & m_7 & 1 \end{bmatrix}$$

wherein $m_0$, $m_1$, $m_3$ and $m_4$ jointly represent the rotation angle and scaling; $m_2$ and $m_5$ represent the translation in the x and y directions respectively; $m_6$ and $m_7$ represent the deformation in the x and y directions respectively. The key to image registration is to determine the parameters of the spatial transformation model M from the homogeneous equation.
The purpose of image fusion is to obtain a seamless high-quality image, eliminating seams and brightness differences without losing original image information and achieving a smooth transition across the stitching boundary.
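A minimal sketch of such a fusion step, assuming OpenCV and two already-registered images of the same size: the overlap is blended with a distance-based feathering weight so that the seam and brightness transition smoothly. The weighting scheme is an illustrative choice rather than the specific fusion method of the patent.

```python
import cv2
import numpy as np

def feather_blend(warped_a, img_b):
    """Linearly feather two registered, same-size images across their overlap."""
    mask_a = (cv2.cvtColor(warped_a, cv2.COLOR_BGR2GRAY) > 0).astype(np.uint8)
    mask_b = (cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY) > 0).astype(np.uint8)

    # Distance to the image border acts as a feathering weight
    w_a = cv2.distanceTransform(mask_a * 255, cv2.DIST_L2, 3)
    w_b = cv2.distanceTransform(mask_b * 255, cv2.DIST_L2, 3)
    total = w_a + w_b + 1e-6

    blend = (warped_a.astype(np.float32) * w_a[..., None] +
             img_b.astype(np.float32) * w_b[..., None]) / total[..., None]
    return blend.astype(np.uint8)
```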
Example 2
Referring to fig. 1 and 3, a method for detecting surface defects of a high-resolution image object based on deep learning includes the following steps:
S1: fixing a plurality of high-resolution cameras directly above the object, collecting images of the object surface with the cameras, and stitching the images to obtain a high-resolution image;
S2: preprocessing the high-resolution image with edge detection to obtain regions with significant edges and the corresponding original image regions;
S3: inputting the regions with significant edges and the original grayscale image into a deep convolutional neural network for feature fusion and classification of the target regions;
S4: outputting the type of surface crack and the probability of belonging to that type.
In step S2, an edge operator is used to preprocess the high-resolution image for edge detection. Because target surface defects are generally obvious, the edge operator is used to detect the edges of the suspected defect portions; the edge operator can be any one of the Canny edge detection operator, the Laplacian operator, the Prewitt operator, and the Sobel operator. This yields regions with significant edges; finally, the regions with significant edges are partitioned by circumscribed-rectangle fitting and mapped to the corresponding areas of the original image, so as to obtain edge image regions and original image regions.
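A minimal sketch of the circumscribed-rectangle partitioning described above, assuming OpenCV: connected edge regions in the binary edge map are fitted with bounding rectangles, and each rectangle crops both the edge map and the original image so that the two sets of regions stay in correspondence. The edge_map helper from the earlier sketch and the minimum-area threshold are assumptions.

```python
import cv2

def extract_defect_regions(gray, min_area=100):
    """Fit circumscribed rectangles to significant edge regions and crop the
    corresponding patches from the edge map and the original image."""
    edges = edge_map(gray, operator="canny")  # from the earlier sketch
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    edge_patches, original_patches = [], []
    for contour in contours:
        x, y, w, h = cv2.boundingRect(contour)  # circumscribed rectangle
        if w * h < min_area:
            continue  # skip tiny regions unlikely to be defects
        edge_patches.append(edges[y:y + h, x:x + w])
        original_patches.append(gray[y:y + h, x:x + w])
    return edge_patches, original_patches
```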
In step S3, the convolutional neural network structure adopts an improved NIN network; the network layers are shown in Table 1.
Table 1: Improved NIN network
The original Network In Network (NIN), published in 2014, does not contain a fully connected layer. The improved NIN network introduces a fully connected layer on the basis of Network In Network: an image of size 32x32 is forward-propagated layer by layer with ReLU as the activation function, and the last convolutional layer outputs two feature maps of size 8x8. The last two layers are modified and a fully connected layer is introduced, so that the original 8x8 feature-map output is converted into a 64-dimensional vector.
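The exact layer configuration is given only in Table 1 of the original document (reproduced there as an image), so the following PyTorch sketch is an assumption-laden illustration of the idea rather than the patented architecture: NIN-style mlpconv blocks reduce a 32x32 input to two 8x8 feature maps, and a newly added fully connected layer turns them into a 64-dimensional vector before classification. Stacking the edge patch and the original patch as two input channels is likewise only one plausible reading of the feature fusion.

```python
import torch
import torch.nn as nn

class ImprovedNIN(nn.Module):
    """NIN-style network with an added fully connected head (illustrative only)."""

    def __init__(self, in_channels=2, num_classes=4):
        super().__init__()

        def mlpconv(c_in, c_out, stride):
            # A 3x3 convolution followed by two 1x1 convolutions, as in NIN
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, kernel_size=1), nn.ReLU(inplace=True),
                nn.Conv2d(c_out, c_out, kernel_size=1), nn.ReLU(inplace=True),
            )

        self.features = nn.Sequential(
            mlpconv(in_channels, 32, stride=1),   # 32x32
            mlpconv(32, 64, stride=2),            # 16x16
            mlpconv(64, 64, stride=2),            # 8x8
            nn.Conv2d(64, 2, kernel_size=1),      # two 8x8 feature maps
            nn.ReLU(inplace=True),
        )
        self.fc = nn.Linear(2 * 8 * 8, 64)        # added fully connected layer -> 64-dim vector
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)               # (B, 2, 8, 8)
        x = torch.flatten(x, 1)            # (B, 128)
        x = torch.relu(self.fc(x))         # (B, 64)
        return self.classifier(x)          # class scores fed to softmax

# Example: edge patch and original patch stacked as two input channels
logits = ImprovedNIN()(torch.randn(1, 2, 32, 32))
```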
The feature fusion uses a softmax function to map the output scalar to a probability distribution over the image's classes, and the objective function is:

$$J(\theta) = -\frac{1}{N}\sum_{i=1}^{N} y_i \log \hat{y}_i + R(\theta)$$

where $N$ is the number of training samples, $y_i$ is the actual class of the sample, $\hat{y}_i$ denotes the predicted output of the sample, $\theta$ denotes the network model parameters, and $R(\theta)$ is an $L_2$ regularization term set to prevent overfitting during network training, as shown in the following equation, with $\lambda$ taking the value 0.00005:

$$R(\theta) = \lambda \sum_{j} \theta_j^{2}$$
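In a framework such as PyTorch, the L2 term with λ = 0.00005 is typically realised as weight decay; the sketch below, reusing the illustrative ImprovedNIN model above, shows one training step with softmax cross-entropy under that assumption and returns the per-class probabilities that step S4 reports.

```python
import torch
import torch.nn.functional as F

model = ImprovedNIN()
# weight_decay plays the role of the lambda = 0.00005 regularization coefficient
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-5)

def train_step(batch, labels):
    """One optimization step: softmax cross-entropy plus L2 via weight decay."""
    optimizer.zero_grad()
    logits = model(batch)                       # (B, num_classes)
    loss = F.cross_entropy(logits, labels)      # softmax + negative log-likelihood
    loss.backward()
    optimizer.step()
    # Probabilities of each crack type, as output in step S4
    return F.softmax(logits.detach(), dim=1), loss.item()
```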
the above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art should be able to cover the technical scope of the present invention and the equivalent alternatives or modifications according to the technical solution and the inventive concept of the present invention within the technical scope of the present invention.

Claims (6)

1. A deep-learning-based high-resolution image object surface defect detection method, characterized by comprising the following steps:
S1: fixing a plurality of high-resolution cameras directly above the object, collecting images of the object surface with the cameras, and stitching the images to obtain a high-resolution image;
S2: using an edge operator to preprocess the high-resolution image for edge detection, detecting the edges of suspected defect portions to obtain strong edge regions, then partitioning the strong edge regions by circumscribed-rectangle fitting and mapping them to the corresponding areas of the original image to obtain edge image regions and original image regions;
S3: inputting the edge images and the original-region images into a deep convolutional neural network for feature fusion and classification of the target regions;
wherein in step S3 the convolutional neural network structure adopts an improved NIN network;
the improved NIN network introduces a fully connected layer on the basis of Network In Network: an image of size 32x32 is forward-propagated layer by layer with ReLU as the activation function, and the last convolutional layer outputs two feature maps of size 8x8; the last two layers are modified and a fully connected layer is introduced, so that the original 8x8 feature-map output is converted into a 64-dimensional vector;
the feature fusion uses a softmax function to map the output scalar to a probability distribution over the image's classes, and the objective function is:

$$J(\theta) = -\frac{1}{N}\sum_{i=1}^{N} y_i \log \hat{y}_i + R(\theta)$$

where $N$ is the number of training samples, $y_i$ is the actual class of the sample, $\hat{y}_i$ denotes the predicted output of the sample, $\theta$ denotes the network model parameters, and $R(\theta)$ is an $L_2$ regularization term set to prevent overfitting during network training, with $\lambda$ taking the value 0.00005:

$$R(\theta) = \lambda \sum_{j} \theta_j^{2}$$

S4: outputting the type of surface crack and the probability of belonging to that type.
2. The deep-learning-based high-resolution image object surface defect detection method according to claim 1, characterized in that the image stitching in step S1 comprises image preprocessing, image feature point matching, image registration, and image fusion.
3. The deep-learning-based high-resolution image object surface defect detection method according to claim 2, characterized in that:
the image preprocessing comprises image light correction, image denoising, and camera distortion correction;
the image feature point matching adopts an image-feature-based matching method, including corner detection, contour-feature-based registration, and SIFT-based matching;
the image registration, after feature point matching, calculates the spatial model among multiple images and performs a spatial transformation so that the overlapping parts of two images are spatially aligned, which is the key to image stitching;
the image fusion aims to obtain a seamless high-quality image, eliminating seams and brightness differences without losing original image information and achieving a smooth transition across the stitching boundary.
4. The deep-learning-based high-resolution image object surface defect detection method according to claim 3, characterized in that the spatial transformation between the multiple images in the image registration comprises: translation, rotation, scaling, affine transformation, and projective transformation.
5. The deep-learning-based high-resolution image object surface defect detection method according to claim 4, characterized in that the projective transformation is more general than translation, rotation, scaling, and affine transformation;
if images $I_1(x, y)$ and $I_2(x', y')$ are related by a projective transformation, the relation is expressed by the homogeneous equation:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} \sim M \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad M = \begin{bmatrix} m_0 & m_1 & m_2 \\ m_3 & m_4 & m_5 \\ m_6 & m_7 & 1 \end{bmatrix}$$

wherein $m_0$, $m_1$, $m_3$ and $m_4$ jointly represent the rotation angle and scaling; $m_2$ and $m_5$ represent the translation in the x and y directions respectively; $m_6$ and $m_7$ represent the deformation in the x and y directions respectively; the key to image registration is to determine the parameters of the spatial transformation model M from the homogeneous equation.
6. The deep-learning-based high-resolution image object surface defect detection method according to claim 1, characterized in that the edge operator is any one of the Canny edge detection operator, the Laplacian operator, the Prewitt operator, and the Sobel operator.
CN202011600441.7A 2020-12-30 2020-12-30 High-resolution image object surface defect detection method based on deep learning Active CN112304960B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011600441.7A CN112304960B (en) 2020-12-30 2020-12-30 High-resolution image object surface defect detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011600441.7A CN112304960B (en) 2020-12-30 2020-12-30 High-resolution image object surface defect detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN112304960A CN112304960A (en) 2021-02-02
CN112304960B true CN112304960B (en) 2021-08-10

Family

ID=74487577

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011600441.7A Active CN112304960B (en) 2020-12-30 2020-12-30 High-resolution image object surface defect detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN112304960B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113283111B (en) * 2021-06-11 2022-05-27 中国人民解放军国防科技大学 A transformation method for model deduction to intelligent deduction
CN114445331A (en) * 2021-12-21 2022-05-06 国网江苏省电力有限公司淮安供电分公司 Cable intermediate joint construction defect detection method, system and device based on image recognition
CN115049604B (en) * 2022-06-09 2023-04-07 佛山科学技术学院 Method for rapidly detecting tiny defects of large-width plate ultrahigh-resolution image
CN115375693B (en) * 2022-10-27 2023-02-10 浙江托普云农科技股份有限公司 Method, system and device for detecting defects of probe of agricultural information acquisition sensor
CN117853826B (en) * 2024-03-07 2024-05-10 誊展精密科技(深圳)有限公司 Object surface precision identification method based on machine vision and related equipment
CN119067941A (en) * 2024-08-22 2024-12-03 北京市城捷建设工程有限公司 Protective gloves safety detection method, computer equipment and readable medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111260876A (en) * 2018-11-30 2020-06-09 北京欣奕华科技有限公司 Image processing method and device
CN111612747A (en) * 2020-04-30 2020-09-01 重庆见芒信息技术咨询服务有限公司 Method and system for rapidly detecting surface cracks of product
US10796202B2 (en) * 2017-09-21 2020-10-06 VIMOC Technologies, Inc. System and method for building an edge CNN system for the internet of things

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107230202B (en) * 2017-05-16 2020-02-18 淮阴工学院 Method and system for automatic identification of pavement disease images
CN107643295B (en) * 2017-08-24 2019-08-20 中国地质大学(武汉) A method and system for online detection of cloth defects based on machine vision
CN108765416B (en) * 2018-06-15 2023-10-03 福建工程学院 PCB surface defect detection method and device based on rapid geometric alignment
TWI787296B (en) * 2018-06-29 2022-12-21 由田新技股份有限公司 Optical inspection method, optical inspection device and optical inspection system
CN109255787B (en) * 2018-10-15 2021-02-26 杭州慧知连科技有限公司 System and method for detecting scratch of silk ingot based on deep learning and image processing technology
CN109580630B (en) * 2018-11-10 2022-02-18 东莞理工学院 Visual inspection method for defects of mechanical parts
CN109919908B (en) * 2019-01-23 2020-11-10 华灿光电(浙江)有限公司 Method and device for detecting defects of light-emitting diode chip
CN110044905A (en) * 2019-03-27 2019-07-23 北京好运达智创科技有限公司 A kind of crack detecting method of double-block type sleeper
CN110349126B (en) * 2019-06-20 2022-11-18 武汉科技大学 A Marked Steel Plate Surface Defect Detection Method Based on Convolutional Neural Network
CN110927171A (en) * 2019-12-09 2020-03-27 中国科学院沈阳自动化研究所 A method for detecting defects on the chamfered surface of bearing rollers based on machine vision
CN111738994B (en) * 2020-06-10 2024-04-16 南京航空航天大学 Lightweight PCB defect detection method
CN111862039A (en) * 2020-07-20 2020-10-30 中山西尼视觉科技有限公司 Rapid visual detection method for braid

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10796202B2 (en) * 2017-09-21 2020-10-06 VIMOC Technologies, Inc. System and method for building an edge CNN system for the internet of things
CN111260876A (en) * 2018-11-30 2020-06-09 北京欣奕华科技有限公司 Image processing method and device
CN111612747A (en) * 2020-04-30 2020-09-01 重庆见芒信息技术咨询服务有限公司 Method and system for rapidly detecting surface cracks of product

Also Published As

Publication number Publication date
CN112304960A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN112304960B (en) High-resolution image object surface defect detection method based on deep learning
Wang et al. Measurement for cracks at the bottom of bridges based on tethered creeping unmanned aerial vehicle
CN108776140B (en) Machine vision-based printed matter flaw detection method and system
CN105067638B (en) Tire fetal membrane face character defect inspection method based on machine vision
CN114897864B (en) Workpiece detection and defect judgment method based on digital model information
CN110033431B (en) Non-contact detection device and detection method for detecting corrosion area on surface of steel bridge
CN111982916A (en) Welding seam surface defect detection method and system based on machine vision
JP5096620B2 (en) Join feature boundaries
WO2021102741A1 (en) Image analysis method and system for immunochromatographic detection
CN107063458A (en) Ceramic tile colourity piecemeal detection method based on machine vision
CN111739003B (en) Machine vision method for appearance detection
CN111915485A (en) Rapid splicing method and system for feature point sparse workpiece images
CN112730454A (en) Intelligent damage detection method for composite material based on fusion of optics, infrared thermal waves and ultrasonic waves
CN113706496B (en) Aircraft structure crack detection method based on deep learning model
CN114119591A (en) A kind of display screen picture quality detection method
CN112164048A (en) Magnetic shoe surface defect automatic detection method and device based on deep learning
CN114219802B (en) Skin connecting hole position detection method based on image processing
CN110969135B (en) Vehicle logo recognition method in natural scene
CN114965693B (en) Ultrasonic C-scan automatic alignment system based on virtual-real registration
CN114998571B (en) Image processing and color detection method based on fixed-size markers
CN112634269A (en) Rail vehicle body detection method
CN116563229A (en) Agricultural product defect detection method and system based on multi-view image feature fusion
CN114170202B (en) Weld joint segmentation and milling discrimination method and device based on area array structured light 3D vision
CN114677326A (en) Device, system and method for detecting surface flaws of printed mobile phone shell based on machine vision and deep learning
CN111861889B (en) Automatic splicing method and system for solar cell images based on semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant