
CN117975175A - Plastic pipeline appearance defect detection method based on machine vision - Google Patents

Plastic pipeline appearance defect detection method based on machine vision

Info

Publication number
CN117975175A
CN117975175A (application CN202410392069.7A)
Authority
CN
China
Prior art keywords
super
pixel
value
defect
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410392069.7A
Other languages
Chinese (zh)
Other versions
CN117975175B (en)
Inventor
豆利军
赵海波
李凤利
韩玉波
郑磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Yada Plastic Products Co ltd
Original Assignee
Xi'an Yada Plastic Products Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Yada Plastic Products Co ltd filed Critical Xi'an Yada Plastic Products Co ltd
Priority to CN202410392069.7A
Publication of CN117975175A
Application granted
Publication of CN117975175B
Legal status: Active (Current)
Anticipated expiration legal-status

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • G06N3/0455Auto-encoder networks; Encoder-decoder networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/28Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the field of image data processing, and in particular to a machine vision-based method for detecting appearance defects of plastic pipelines. The method comprises: collecting a plastic pipeline image, preprocessing it and performing super-pixel segmentation to obtain a segmentation result; calculating similarity values for the segmentation result and merging the image accordingly; obtaining a defect likelihood value for each merged region; thresholding the defect likelihood values with Otsu's method, merging the regions with high defect likelihood values again, and taking the center point of each merged region as a region-growing seed point; performing region growing to obtain appearance defect images of the plastic pipeline; setting different labels to generate a defect image data set; training a preset neural network model to obtain a defect network model; and collecting images in real time and inputting them into the defect network model to obtain plastic pipeline defect detection and classification results. By accurately selecting the growing seed points of the defect regions, the invention makes the region-growing result, and hence the defect detection, more accurate.

Description

Plastic pipeline appearance defect detection method based on machine vision
Technical Field
The present invention relates generally to the field of image data processing. More particularly, the invention relates to a machine vision-based plastic pipeline appearance defect detection method.
Background
Machine vision (computer vision) is a discipline that uses computer science and engineering techniques to understand and analyze images and video. Its objective is to enable a computer to simulate the human visual system, so that the computer can understand and interpret image or video data.
Appearance defects reduce the service life and appearance quality of plastic pipelines, and severe appearance defects pose potential safety hazards, so timely detection and treatment of plastic pipeline appearance defects is very important. With machine vision technology, plastic pipeline appearance defects can be detected and classified in real time, efficiently and accurately, which effectively guarantees the production quality of the product.
At present, when the region-growing method in machine vision is used to extract plastic pipeline appearance defects, the conventional practice of selecting region-growing seed points at random easily picks non-defect regions, because the gray values at the appearance defect locations, at part of the noise and in the background regions all deviate more strongly from the normal regions than the normal regions themselves vary. Ghost regions then appear during region growing, and the plastic pipeline appearance defect detection result becomes inaccurate.
Disclosure of Invention
In order to solve one or more of the above technical problems, the present invention proposes to obtain a super-pixel segmentation result of the plastic pipeline image by super-pixel segmentation, merge each super-pixel block with its neighborhood super-pixel blocks according to their degree of similarity, calculate for each merged connected region a defect likelihood value expressing how likely the region is to be a defect region, and select more accurate region-growing seed points from the connected regions with high defect likelihood values.
A machine vision-based plastic pipeline appearance defect detection method comprises the following steps: collecting a plastic pipeline image and preprocessing it; performing super-pixel segmentation on the preprocessed plastic pipeline image to obtain a convergent super-pixel segmentation result, and performing image merging according to similarity values of the convergent super-pixel segmentation result, the similarity values comprising a gray-level similarity value and a gradient similarity value; calculating a comprehensive similarity value of two adjacent super-pixel blocks from the gray-level similarity value and the gradient similarity value, and merging the two adjacent super-pixel blocks according to the comprehensive similarity value; obtaining a defect likelihood value for each region from the area, perimeter and gray-level mean of each connected region after merging, setting a threshold on the defect likelihood values with Otsu's method to obtain the regions with high defect likelihood values, merging these regions again, taking the center point of each merged connected region as a region-growing seed point, and performing region growing from the seed points to obtain an appearance defect image of the plastic pipeline; setting labels for the different defects in the appearance defect images, the labels being pit, scratch and stain, to generate a defect image data set; training a preset neural network model on the defect image data set to obtain a defect network model; and collecting images of produced plastic pipelines in real time and inputting them into the defect network model to obtain plastic pipeline defect detection and classification results.
In one embodiment, the pre-processed plastic pipeline image is subjected to super-pixel segmentation to obtain a convergent super-pixel segmentation result, and image merging is performed according to a similarity value of the convergent super-pixel segmentation result, including:
Uniformly distributing super-pixel segmentation seed points on the preprocessed image, wherein the super-pixel segmentation seed points are central coordinate points in the region after super-pixel segmentation;
Calculating the distance between any pixel point in the region after super-pixel segmentation and the segmentation seed point, wherein the distance satisfies the following relation:
the distance between a pixel point and a segmentation seed point is determined by the gray-level difference between the gray value of the pixel point and the gray value of the seed point, the Euclidean distance between the pixel point and the seed point, the influence coefficient of the gray-level distance on the super-pixel segmentation of each pixel point, and the maximum intra-class spatial distance, which is taken as the square root of the ratio of the number of pixel points in the image to the number of segmentation regions;
Clustering each pixel point in the image according to the segmentation seed points to obtain a plurality of clustering clusters, obtaining a plurality of new super-pixel segmentation blocks according to the clustering result, calculating the coordinate center point of the pixel point in each new super-pixel segmentation block, and re-clustering the pixel point as the new super-pixel segmentation seed points to obtain a convergence super-pixel segmentation result;
And calculating the similarity value between each super pixel block and the adjacent super pixel blocks according to the convergence super pixel segmentation result, and merging the super pixel blocks according to the similarity value.
In one embodiment, merging the super pixel blocks according to the similarity value includes the steps of:
respectively obtaining gray histograms of two adjacent super-pixel blocks, respectively calculating corresponding gray frequency values of the two super-pixel blocks, and calculating gray similarity degree values between each super-pixel block and the adjacent super-pixel blocks according to the gray frequency values of the two adjacent super-pixel blocks;
Gradient histograms of two adjacent super-pixel blocks are obtained respectively, corresponding gradient frequency values of the two super-pixel blocks are calculated respectively, and gradient similarity degree values between each super-pixel block and the adjacent super-pixel blocks are calculated according to the gradient frequency values of the two adjacent super-pixel blocks.
In one embodiment, the gray level similarity value satisfies the following relationship:
the gray-level similarity value between a super-pixel block and an adjacent super-pixel block is obtained from the gray-level frequency values of the two blocks at each gray-value index, the index running up to the maximum gray value; the relation combines an exponential function with a logarithm, and the closer the gray-level frequencies of the two blocks, the higher the similarity value.
In one embodiment, the gradient similarity value satisfies the following relationship:
the gradient similarity value between a super-pixel block and an adjacent super-pixel block is obtained from the gradient frequency values of the two blocks at each gradient-value index, the index running up to the gradient maximum of the two blocks; the relation uses a hyperbolic tangent function, and the closer the gradient frequencies of the two blocks, the higher the similarity value.
In one embodiment, a comprehensive similarity value of two adjacent super-pixel blocks is calculated, and the two adjacent super-pixel blocks are combined according to the comprehensive similarity value, including the following steps:
the comprehensive similarity value satisfies the following relation:
the comprehensive similarity value of a super-pixel block and an adjacent super-pixel block combines the gray-level similarity value of the two blocks, weighted by the entropy weight of the gray-level similarity value, and the gradient similarity value of the two blocks, weighted by the entropy weight of the gradient similarity value;
And carrying out normalization processing on the integrated similarity degree value, and merging two adjacent super-pixel blocks in response to the normalized integrated similarity degree value being greater than a preset similarity threshold.
In one embodiment, the defect likelihood value satisfies the following relationship:
the defect likelihood value of a connected domain in the merged image is obtained, through a normalization function, from the area of the connected domain, the perimeter of the connected domain, the gray-level mean of the pixel points in the connected domain, and the mean of the gray-level means over all connected domains.
In one embodiment, training a preset neural network model according to the defect image data set to obtain a defect network model, including the following steps:
Encoding the appearance defect image, extracting image characteristics, inputting the appearance defect image of the plastic pipeline, and outputting whether the defect image has pits, scratches and stains or not;
In the process of sampling the image by convolution and pooling operation, extracting spatial domain features in the image, wherein the output of the encoder is the extracted feature vector;
the input of the full-connection layer is a feature vector output by the encoder, and the output layer of the full-connection layer is provided with three neurons which are respectively used for calculating the confidence coefficient of whether the dent, the scratch and the stain exist or not;
The label is the defect type corresponding to the image, expressed as three binary variables (one each for pit, scratch and stain) taking the values 0 and 1, where 0 indicates absence and 1 indicates presence.
In one embodiment, the method for obtaining the defect detection and classification result of the plastic pipeline comprises the steps of:
the output of the defect network model is the confidence corresponding to each label; a defect threshold is set, and when a confidence is greater than the defect threshold, the defect corresponding to that label is judged to be present;
classifying according to the defects of the labels to obtain classification results.
The invention has the following effects:
1. According to the invention, super-pixel blocks are merged with their neighborhood super-pixel blocks according to the super-pixel segmentation result and their degree of similarity; the defect likelihood of each region is obtained from the area, perimeter and gray-level mean of each connected region after merging; and after merging again according to the defect likelihood values, more accurate region-growing seed points are selected within the merged connected regions, so that the region-growing result, and hence the detection of plastic pipeline appearance defects, is more accurate.
2. According to the invention, regions with obvious gray-level differences are separated by super-pixel segmentation, and merging super-pixels with similar neighborhoods preliminarily separates defects from the background. Combined with the contour features of the connected domains, the defect likelihood value of each connected domain is determined, and more accurate region-growing seed points are obtained within the regions with high defect likelihood values. This effectively reduces the influence of ghosting from non-defect regions when the defect image is obtained from the plastic pipeline appearance image, allows appearance defects to be classified and detected in real time, efficiently and accurately, and effectively guarantees the production quality of the product.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, embodiments of the invention are illustrated by way of example and not by way of limitation, and like reference numerals refer to similar or corresponding parts and in which:
fig. 1 is a flowchart of a method for detecting appearance defects of plastic pipes based on machine vision in steps S1 to S7 according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of gray scale images of appearance pits of a plastic pipe in a machine vision-based method for detecting appearance defects of a plastic pipe according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of a gray scale image of an appearance scratch of a plastic pipe in a machine vision-based method for detecting appearance defects of a plastic pipe according to an embodiment of the present invention.
Fig. 4 is a schematic diagram of gray scale images of appearance stains of a plastic pipeline in a machine vision-based method for detecting appearance defects of a plastic pipeline according to an embodiment of the invention.
Fig. 5 is a flowchart of a method for detecting an appearance defect of a plastic pipe based on machine vision in the embodiment of the invention, from step S20 to step S25.
Fig. 6 is a flowchart of a method for detecting an appearance defect of a plastic pipe based on machine vision in the embodiment of the invention, from step S60 to step S63.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings; it is evident that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from the embodiments of the invention without inventive effort fall within the scope of the invention.
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a method for detecting appearance defects of plastic pipes based on machine vision includes steps S1 to S7, specifically as follows:
s1: and (5) collecting a plastic pipeline image and preprocessing.
Further, in this embodiment an industrial camera is used to acquire the plastic pipeline image in real time. During acquisition the camera is held fixed and faces the outer surface of the plastic pipeline, and the central axis of the pipeline is kept at the vertical center of the camera's field of view, so that the acquired appearance image is clear and complete. Referring to fig. 2 to 4, the plastic pipeline image is converted to a gray image, and the gray image is Gaussian-filtered to reduce noise; this preprocessing gives an initial reduction of the inaccuracy in the defect detection result caused by the background and noise of the plastic pipeline image.
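As an illustration of this preprocessing, a minimal sketch is given below, assuming OpenCV is used; the 5x5 Gaussian kernel and the helper name preprocess are assumptions, since the embodiment does not state the filter parameters.

    import cv2

    def preprocess(image_path):
        img = cv2.imread(image_path)                  # acquired plastic pipeline image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # graying processing
        return cv2.GaussianBlur(gray, (5, 5), 0)      # Gaussian filtering to reduce noise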
S2: and carrying out super-pixel segmentation on the preprocessed plastic pipeline image to obtain a convergent super-pixel segmentation result, and carrying out image merging according to a similarity value of the convergent super-pixel segmentation result, wherein the similarity value comprises a gray level similarity value and a gradient similarity value.
Further, when the pipeline defect image is obtained by region growing, the region-growing seed points should be selected from pixel points inside the defect regions as far as possible, to avoid selecting non-defect regions with obvious gray-value changes, such as background and noise, as seed points and thereby growing non-defect regions that are misjudged as defect regions.
Super-pixel segmentation is an image segmentation technique that divides an image into tightly connected regions, or combinations of regions, with semantic consistency. It can separate regions with obvious gray-level differences, and merging neighboring super-pixel blocks with similar similarity values preliminarily separates the defects from the background. Combined with the contour features of each merged connected region, a defect likelihood value can be determined for each connected region, and more accurate region-growing seed points can be obtained within the regions with high defect likelihood values, which effectively reduces ghosting from non-defect regions when the defect image is obtained from the plastic pipeline appearance image.
The following is a specific step analysis:
Referring to fig. 5, step S20 to step S25 are included:
S20: uniformly distributing super-pixel segmentation seed points on the preprocessed image, wherein the super-pixel segmentation seed points are central coordinate points in the region after super-pixel segmentation;
S21: calculating the distance between any pixel point in the region after super-pixel segmentation and the segmentation seed point, wherein the distance satisfies the following relation:
the distance between a pixel point and a segmentation seed point is determined by the gray-level difference between the gray value of the pixel point and the gray value of the seed point, the Euclidean distance between the pixel point and the seed point, the influence coefficient of the gray-level distance on the super-pixel segmentation of each pixel point, and the maximum intra-class spatial distance, which is taken as the square root of the ratio of the number of pixel points in the image to the number of segmentation regions;
Further, super-pixel segmentation seed points are uniformly distributed on the preprocessed image; the total number of pixel points in the image and the number of segmentation regions are known, and in this embodiment the number of segmentation regions is set to a fixed value. The distance between any pixel point in a super-pixel region and the segmentation seed point is then calculated, where the influence coefficient of the gray-level distance on the super-pixel segmentation of each pixel point usually takes a fixed value;
S22: clustering each pixel point in the image according to the segmentation seed points to obtain a plurality of clustering clusters, obtaining a plurality of new super-pixel segmentation blocks according to the clustering result, calculating the coordinate center point of the pixel point in each new super-pixel segmentation block, and re-clustering the pixel point as the new super-pixel segmentation seed points to obtain a convergence super-pixel segmentation result;
Further, super-pixel segmentation seed points are uniformly distributed on the gray image, giving a preset number of segmentation regions. Since the background and the defect regions are both connected regions, super-pixel blocks with a high degree of similarity are merged so as to separate the background from the defect regions; to this end, the degree of similarity between each super-pixel block and its adjacent super-pixel blocks is calculated.
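The clustering of steps S20 to S22 can be sketched as follows. This is an illustrative sketch only: the combined distance sqrt(d_gray^2 + (m/S)^2 * d_xy^2), the default values of num_regions and m, and the function name superpixel_segment are assumptions, since the exact expression and parameter values of the embodiment are not reproduced in this text.

    import numpy as np

    def superpixel_segment(gray, num_regions=200, m=10.0, n_iter=10):
        h, w = gray.shape
        S = np.sqrt(gray.size / num_regions)          # maximum intra-class spatial distance
        step = max(int(S), 1)
        ys, xs = np.meshgrid(np.arange(step // 2, h, step),
                             np.arange(step // 2, w, step), indexing="ij")
        seeds = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
        seed_gray = gray[ys.ravel(), xs.ravel()].astype(float)
        yy, xx = np.mgrid[0:h, 0:w]
        labels = np.zeros((h, w), dtype=int)
        for _ in range(n_iter):
            best = np.full((h, w), np.inf)
            for k, ((cy, cx), g) in enumerate(zip(seeds, seed_gray)):
                # search window of about 2S x 2S around each seed, as in standard SLIC
                y0, y1 = max(0, int(cy - S)), min(h, int(cy + S) + 1)
                x0, x1 = max(0, int(cx - S)), min(w, int(cx + S) + 1)
                d_gray = gray[y0:y1, x0:x1].astype(float) - g
                d_xy = np.sqrt((yy[y0:y1, x0:x1] - cy) ** 2 + (xx[y0:y1, x0:x1] - cx) ** 2)
                dist = np.sqrt(d_gray ** 2 + (m / S) ** 2 * d_xy ** 2)
                mask = dist < best[y0:y1, x0:x1]
                best[y0:y1, x0:x1][mask] = dist[mask]
                labels[y0:y1, x0:x1][mask] = k
            for k in range(len(seeds)):               # re-centre each seed on its cluster (S22)
                pts = np.argwhere(labels == k)
                if len(pts):
                    seeds[k] = pts.mean(axis=0)
                    seed_gray[k] = float(gray[labels == k].mean())
        return labels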
S23: calculating the similarity value between each super pixel block and the adjacent super pixel blocks according to the convergence super pixel segmentation result, and merging the super pixel blocks according to the similarity value;
S24: respectively obtaining gray histograms of two adjacent super-pixel blocks, respectively calculating corresponding gray frequency values of the two super-pixel blocks, and calculating gray similarity degree values between each super-pixel block and the adjacent super-pixel blocks according to the gray frequency values of the two adjacent super-pixel blocks;
The gray level similarity value satisfies the following relation:
the gray-level similarity value between a super-pixel block and an adjacent super-pixel block is obtained from the gray-level frequency values of the two blocks at each gray-value index, the index running up to the maximum gray value; the relation combines an exponential function with a logarithm, and the closer the gray-level frequencies of the two blocks, the higher the similarity value;
Further, in this embodiment the gray-value index runs up to the maximum gray level; the smaller the overall difference between the gray-level frequencies of a super-pixel block and an adjacent super-pixel block over all gray-value indices, the higher the gray-level similarity value of the two blocks.
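A minimal sketch of step S24 under a simplifying assumption: the normalized gray histograms of the two adjacent super-pixel blocks are compared and the total frequency difference is mapped into (0, 1] with an exponential. The embodiment's exact expression, which also involves a logarithm, is not reproduced here, so this form and the helper name gray_similarity are assumptions.

    import numpy as np

    def gray_similarity(gray, labels, block_a, block_b, levels=256):
        # gray-level frequency values of the two adjacent super-pixel blocks
        def freq(block):
            counts, _ = np.histogram(gray[labels == block], bins=levels, range=(0, levels))
            return counts / max(counts.sum(), 1)
        diff = np.abs(freq(block_a) - freq(block_b)).sum()
        return float(np.exp(-diff))                   # 1.0 when the histograms coincide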
S25: respectively obtaining gradient histograms of two adjacent super-pixel blocks, respectively calculating corresponding gradient frequency values of the two super-pixel blocks, and calculating gradient similarity degree values between each super-pixel block and the adjacent super-pixel blocks according to the gradient frequency values of the two adjacent super-pixel blocks;
The gradient similarity value satisfies the following relation:
the gradient similarity value between a super-pixel block and an adjacent super-pixel block is obtained from the gradient frequency values of the two blocks at each gradient-value index, the index running up to the gradient maximum of the two blocks; the relation uses a hyperbolic tangent function, and the closer the gradient frequencies of the two blocks, the higher the similarity value;
Further, in this embodiment the gradient histograms of a super-pixel block and its adjacent super-pixel block are obtained with the Sobel operator, the gradient value of each pixel point in a super-pixel block being calculated as the sum of the absolute values of the gray-level differences to its horizontal and vertical neighbours. The smaller the overall difference between the gradient frequency values of the two blocks over all gradient-value indices, the higher their gradient similarity value. The Sobel operator is a commonly used image edge detection operator for detecting edges and contours in images.
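Under the same caveat, step S25 can be sketched as follows: the per-pixel gradient is approximated by the sum of the absolute horizontal and vertical gray differences, the gradient histograms of the two adjacent blocks are compared, and the difference is mapped through a hyperbolic tangent so that similar histograms give a value close to 1. The exact expression of the embodiment is not reproduced here, so this form is an assumption.

    import numpy as np

    def gradient_similarity(gray, labels, block_a, block_b, bins=64):
        gy, gx = np.gradient(gray.astype(float))
        grad = np.abs(gx) + np.abs(gy)                # sum of absolute horizontal/vertical differences
        g_max = max(grad[labels == block_a].max(), grad[labels == block_b].max(), 1.0)
        def freq(block):
            counts, _ = np.histogram(grad[labels == block], bins=bins, range=(0, g_max))
            return counts / max(counts.sum(), 1)
        diff = np.abs(freq(block_a) - freq(block_b)).sum()
        return float(1.0 - np.tanh(diff))             # 1.0 when the gradient histograms coincide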
S3: and calculating the comprehensive similarity value of the two adjacent super-pixel blocks according to the gray level similarity value and the gradient similarity value, and merging the two adjacent super-pixel blocks according to the comprehensive similarity value.
The integrated similarity value satisfies the following relationship:
the comprehensive similarity value of a super-pixel block and an adjacent super-pixel block combines the gray-level similarity value of the two blocks, weighted by the entropy weight of the gray-level similarity value, and the gradient similarity value of the two blocks, weighted by the entropy weight of the gradient similarity value;
normalizing the comprehensive similarity value, and merging two adjacent super-pixel blocks in response to the normalized comprehensive similarity value being greater than a preset similarity threshold;
Further, in this embodiment, when both the gray-level similarity value and the gradient similarity value of a super-pixel block and its adjacent super-pixel block are high, the comprehensive similarity of the two adjacent super-pixel blocks is high and they can be merged.
Further, in this embodiment, the similarity threshold is preset to a fixed value.
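A sketch of step S3 under stated assumptions: entropy weights for the two similarity indicators are computed over all adjacent block pairs with the standard entropy-weight method, the weighted combination is normalized, and block pairs above a threshold are merged with a union-find structure. The threshold value 0.75 and the helper names are assumptions, since the preset value of the embodiment is not reproduced here.

    import numpy as np

    def entropy_weights(matrix):
        # matrix: (n_pairs, 2) of gray-level and gradient similarity values in (0, 1]
        p = matrix / matrix.sum(axis=0, keepdims=True)
        e = -(p * np.log(p + 1e-12)).sum(axis=0) / np.log(len(matrix))
        d = 1.0 - e                                   # degree of divergence of each indicator
        return d / d.sum()

    def merge_blocks(pairs, gray_sim, grad_sim, n_blocks, threshold=0.75):
        sims = np.column_stack([gray_sim, grad_sim])
        w = entropy_weights(sims)
        combined = sims @ w                           # comprehensive similarity value per pair
        combined = (combined - combined.min()) / (combined.max() - combined.min() + 1e-12)
        parent = list(range(n_blocks))                # union-find over super-pixel block ids
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (a, b), s in zip(pairs, combined):
            if s > threshold:
                parent[find(a)] = find(b)
        return [find(i) for i in range(n_blocks)]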
S4: obtaining the possible degree value of the defect in the region according to the area, perimeter and gray average value of each connected region in the combined region, setting a threshold value for the possible degree value of the defect by using an Ojin method to obtain a region with high possible degree value of the defect, combining the regions with high possible degree value of the defect again, taking the central point in each connected region after combination as a region growing seed point, and carrying out region growth according to the region growing seed point to obtain an appearance defect image of the plastic pipeline.
The defect likelihood value satisfies the following relation:
the defect likelihood value of a connected domain in the merged image is obtained, through a normalization function, from the area of the connected domain, the perimeter of the connected domain, the gray-level mean of the pixel points in the connected domain, and the mean of the gray-level means over all connected domains.
Further, compared with the background region, a defect region forms a connected region with a smaller area, a smaller perimeter and a more outlying gray value, so the defect likelihood of a region can be obtained from the area, perimeter and gray-level mean of the corresponding connected region in the merged image. When the area and perimeter of a connected domain are small and its gray-level mean deviates strongly from the mean gray level over all connected domains, the defect likelihood of that region is higher.
In this embodiment, the connected regions with high defect likelihood values are merged, the center point of each merged connected region is selected as a region-growing seed point, and the plastic pipeline appearance defect image is obtained by the region-growing method. Region growing is an image segmentation method based on pixel similarity that segments an image into regions or objects with similar features; it is a well-known technique and is not described in detail here.
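As an illustrative sketch of this step, assuming a simple gray-value growth criterion (tolerance tol) that the embodiment does not specify: the defect likelihood values are thresholded with Otsu's method, the centroid of each retained connected region is used as the growing seed point, and 4-connected neighbours are added while their gray value stays close to that of the seed.

    import numpy as np
    from collections import deque
    from skimage.filters import threshold_otsu

    def grow_defects(gray, region_labels, likelihood, tol=15):
        # likelihood: dict mapping connected-region id -> defect likelihood value
        thr = threshold_otsu(np.asarray(list(likelihood.values()), dtype=float))
        defect_mask = np.zeros(gray.shape, dtype=bool)
        for region_id, p in likelihood.items():
            if p <= thr:
                continue
            ys, xs = np.nonzero(region_labels == region_id)
            seed = (int(ys.mean()), int(xs.mean()))   # centre point as region-growing seed
            ref = float(gray[seed])
            queue, seen = deque([seed]), {seed}
            while queue:
                y, x = queue.popleft()
                if abs(float(gray[y, x]) - ref) > tol:
                    continue
                defect_mask[y, x] = True
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < gray.shape[0] and 0 <= nx < gray.shape[1] and (ny, nx) not in seen:
                        seen.add((ny, nx))
                        queue.append((ny, nx))
        return defect_mask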
S5: and setting labels for different defects of the appearance defect image, wherein the labels are pits, scratches and stains, and a defect image data set is generated.
S6: training a preset neural network model according to the defect image data set to obtain a defect network model.
Further, in this embodiment a DNN (deep neural network) with an Encoder-FC (encoder plus fully connected layer) structure is used to identify and classify the plastic pipeline appearance defect images.
Referring to fig. 6, step S60 to step S63 are included:
S60: encoding the appearance defect image, extracting image characteristics, inputting the appearance defect image of the plastic pipeline, and outputting whether the defect image has pits, scratches and stains or not;
S61: in the process of sampling the image by convolution and pooling operation, extracting spatial domain features in the image, wherein the output of the encoder is the extracted feature vector;
S62: the input of the full-connection layer is a feature vector output by the encoder, and the output layer of the full-connection layer is provided with three neurons which are respectively used for calculating the confidence coefficient of whether the dent, the scratch and the stain exist or not;
s63: the label is the defect type corresponding to the image Wherein/>,/>,/>Binary variables with values of 0 and 1 are adopted, wherein 0 indicates absence and 1 indicates presence.
S7: and acquiring the produced plastic pipeline image in real time, and inputting the image into a defect network model to obtain a plastic pipeline defect detection and classification result.
The output of the defect network model is the confidence corresponding to each label; a defect threshold is set, and when a confidence is greater than the defect threshold, the defect corresponding to that label is judged to be present;
classifying according to the defects of the labels to obtain classification results;
Further, in this embodiment the defect threshold is preset to a fixed value.
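For illustration, with an assumed defect threshold of 0.5 (the embodiment's preset value is not reproduced here), step S7 reduces to comparing each confidence with the threshold:

    DEFECT_LABELS = ("pit", "scratch", "stain")

    def classify(confidences, threshold=0.5):
        # returns the labels whose confidence exceeds the defect threshold
        return [name for name, c in zip(DEFECT_LABELS, confidences) if c > threshold]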
In the description of the present specification, the meaning of "a plurality", "a number" or "a plurality" is at least two, for example, two, three or more, etc., unless explicitly defined otherwise.
While various embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Many modifications, changes, and substitutions will now occur to those skilled in the art without departing from the spirit and scope of the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention.

Claims (9)

1. The plastic pipeline appearance defect detection method based on machine vision is characterized by comprising the following steps of:
Collecting a plastic pipeline image and preprocessing the plastic pipeline image;
Performing super-pixel segmentation on the preprocessed plastic pipeline image to obtain a convergent super-pixel segmentation result, and performing image merging according to a similarity value of the convergent super-pixel segmentation result, wherein the similarity value comprises a gray level similarity value and a gradient similarity value;
Calculating the comprehensive similarity value of two adjacent super-pixel blocks according to the gray level similarity value and the gradient similarity value, and merging the two adjacent super-pixel blocks according to the comprehensive similarity value;
Obtaining a defect likelihood value for each region according to the area, perimeter and gray-level mean of each connected region after merging, setting a threshold on the defect likelihood values with Otsu's method to obtain the regions with high defect likelihood values, merging the regions with high defect likelihood values again, taking the center point of each merged connected region as a region-growing seed point, and performing region growing according to the region-growing seed points to obtain an appearance defect image of the plastic pipeline;
setting labels for different defects of the appearance defect image, wherein the labels are pits, scratches and stains, and a defect image data set is generated;
training a preset neural network model according to the defect image data set to obtain a defect network model;
And acquiring the produced plastic pipeline image in real time, and inputting the image into a defect network model to obtain a plastic pipeline defect detection and classification result.
2. The machine vision-based plastic pipeline appearance defect detection method according to claim 1, wherein the pre-processed plastic pipeline image is subjected to super-pixel segmentation to obtain a convergent super-pixel segmentation result, and image merging is performed according to a similarity value of the convergent super-pixel segmentation result, and the method comprises the following steps:
Uniformly distributing super-pixel segmentation seed points on the preprocessed image, wherein the super-pixel segmentation seed points are central coordinate points in the region after super-pixel segmentation;
Calculating the distance between any pixel point in the region after super-pixel segmentation and the segmentation seed point, wherein the distance satisfies the following relation:
the distance between a pixel point and a segmentation seed point is determined by the gray-level difference between the gray value of the pixel point and the gray value of the seed point, the Euclidean distance between the pixel point and the seed point, the influence coefficient of the gray-level distance on the super-pixel segmentation of each pixel point, and the maximum intra-class spatial distance, which is taken as the square root of the ratio of the number of pixel points in the image to the number of segmentation regions;
Clustering each pixel point in the image according to the segmentation seed points to obtain a plurality of clustering clusters, obtaining a plurality of new super-pixel segmentation blocks according to the clustering result, calculating the coordinate center point of the pixel point in each new super-pixel segmentation block, and re-clustering the pixel point as the new super-pixel segmentation seed points to obtain a convergence super-pixel segmentation result;
And calculating the similarity value between each super pixel block and the adjacent super pixel blocks according to the convergence super pixel segmentation result, and merging the super pixel blocks according to the similarity value.
3. The machine vision-based plastic pipeline appearance defect detection method according to claim 2, wherein merging super pixel blocks according to the similarity value comprises the following steps:
respectively obtaining gray histograms of two adjacent super-pixel blocks, respectively calculating corresponding gray frequency values of the two super-pixel blocks, and calculating gray similarity degree values between each super-pixel block and the adjacent super-pixel blocks according to the gray frequency values of the two adjacent super-pixel blocks;
Gradient histograms of two adjacent super-pixel blocks are obtained respectively, corresponding gradient frequency values of the two super-pixel blocks are calculated respectively, and gradient similarity degree values between each super-pixel block and the adjacent super-pixel blocks are calculated according to the gradient frequency values of the two adjacent super-pixel blocks.
4. The machine vision-based plastic pipe appearance defect detection method according to claim 1, wherein the gray level similarity value satisfies the following relation:
the gray-level similarity value between a super-pixel block and an adjacent super-pixel block is obtained from the gray-level frequency values of the two blocks at each gray-value index, the index running up to the maximum gray value; the relation combines an exponential function with a logarithm, and the closer the gray-level frequencies of the two blocks, the higher the similarity value.
5. The machine vision-based plastic pipe appearance defect detection method according to claim 1, wherein the gradient similarity value satisfies the following relation:
the gradient similarity value between a super-pixel block and an adjacent super-pixel block is obtained from the gradient frequency values of the two blocks at each gradient-value index, the index running up to the gradient maximum of the two blocks; the relation uses a hyperbolic tangent function, and the closer the gradient frequencies of the two blocks, the higher the similarity value.
6. The machine vision-based plastic pipeline appearance defect detection method according to claim 1, wherein a comprehensive similarity value of two adjacent super-pixel blocks is calculated, and the two adjacent super-pixel blocks are combined according to the comprehensive similarity value, comprising the following steps:
the comprehensive similarity value satisfies the following relation:
the comprehensive similarity value of a super-pixel block and an adjacent super-pixel block combines the gray-level similarity value of the two blocks, weighted by the entropy weight of the gray-level similarity value, and the gradient similarity value of the two blocks, weighted by the entropy weight of the gradient similarity value;
And carrying out normalization processing on the integrated similarity degree value, and merging two adjacent super-pixel blocks in response to the normalized integrated similarity degree value being greater than a preset similarity threshold.
7. The machine vision-based plastic pipe appearance defect detection method of claim 1, wherein the defect likelihood value satisfies the following relationship:
the defect likelihood value of a connected domain in the merged image is obtained, through a normalization function, from the area of the connected domain, the perimeter of the connected domain, the gray-level mean of the pixel points in the connected domain, and the mean of the gray-level means over all connected domains.
8. The machine vision-based plastic pipeline appearance defect detection method according to claim 1, wherein training a preset neural network model according to the defect image data set to obtain a defect network model comprises the following steps:
Encoding the appearance defect image, extracting image characteristics, inputting the appearance defect image of the plastic pipeline, and outputting whether the defect image has pits, scratches and stains or not;
In the process of sampling the image by convolution and pooling operation, extracting spatial domain features in the image, wherein the output of the encoder is the extracted feature vector;
the input of the full-connection layer is a feature vector output by the encoder, and the output layer of the full-connection layer is provided with three neurons which are respectively used for calculating the confidence coefficient of whether the dent, the scratch and the stain exist or not;
The label is the defect type corresponding to the image, expressed as three binary variables (one each for pit, scratch and stain) taking the values 0 and 1, where 0 indicates absence and 1 indicates presence.
9. The machine vision-based plastic pipeline appearance defect detection method according to claim 1, wherein the method comprises the steps of obtaining the produced plastic pipeline image in real time and inputting the produced plastic pipeline image into a defect network model to obtain a plastic pipeline defect detection and classification result, and comprises the following steps:
the output of the defect network model is the confidence corresponding to each label; a defect threshold is set, and when a confidence is greater than the defect threshold, the defect corresponding to that label is judged to be present;
classifying according to the defects of the labels to obtain classification results.
CN202410392069.7A 2024-04-02 2024-04-02 Plastic pipeline appearance defect detection method based on machine vision Active CN117975175B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410392069.7A CN117975175B (en) 2024-04-02 2024-04-02 Plastic pipeline appearance defect detection method based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202410392069.7A CN117975175B (en) 2024-04-02 2024-04-02 Plastic pipeline appearance defect detection method based on machine vision

Publications (2)

Publication Number Publication Date
CN117975175A (en) 2024-05-03
CN117975175B CN117975175B (en) 2024-06-25

Family

ID=90864793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410392069.7A Active CN117975175B (en) 2024-04-02 2024-04-02 Plastic pipeline appearance defect detection method based on machine vision

Country Status (1)

Country Link
CN (1) CN117975175B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118628489A (en) * 2024-08-12 2024-09-10 宝鸡新华利机械科技有限公司 Packaging box sealing quality inspection method based on machine vision

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114140462A (en) * 2021-12-10 2022-03-04 江苏牛犇轴承有限公司 Bearing wear degree evaluation method based on image processing
CN114972329A (en) * 2022-07-13 2022-08-30 江苏裕荣光电科技有限公司 Image enhancement method and system of surface defect detector based on image processing
CN114998198A (en) * 2022-04-24 2022-09-02 南通夏克塑料包装有限公司 Injection molding surface defect identification method
CN115100174A (en) * 2022-07-14 2022-09-23 上海群乐船舶附件启东有限公司 Ship sheet metal part paint surface defect detection method
EP4078514A1 (en) * 2019-12-17 2022-10-26 Abyss Solutions Pty Ltd Method and system for detecting physical features of objects
CN115294139A (en) * 2022-10-08 2022-11-04 中国电建集团江西省电力设计院有限公司 Image-based slope crack monitoring method
CN115909079A (en) * 2023-01-09 2023-04-04 深圳大学 Crack detection method combining depth feature and self-attention model and related equipment
CN116740070A (en) * 2023-08-15 2023-09-12 青岛宇通管业有限公司 Plastic pipeline appearance defect detection method based on machine vision

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4078514A1 (en) * 2019-12-17 2022-10-26 Abyss Solutions Pty Ltd Method and system for detecting physical features of objects
CN114140462A (en) * 2021-12-10 2022-03-04 江苏牛犇轴承有限公司 Bearing wear degree evaluation method based on image processing
CN114998198A (en) * 2022-04-24 2022-09-02 南通夏克塑料包装有限公司 Injection molding surface defect identification method
CN114972329A (en) * 2022-07-13 2022-08-30 江苏裕荣光电科技有限公司 Image enhancement method and system of surface defect detector based on image processing
CN115100174A (en) * 2022-07-14 2022-09-23 上海群乐船舶附件启东有限公司 Ship sheet metal part paint surface defect detection method
CN115294139A (en) * 2022-10-08 2022-11-04 中国电建集团江西省电力设计院有限公司 Image-based slope crack monitoring method
CN115909079A (en) * 2023-01-09 2023-04-04 深圳大学 Crack detection method combining depth feature and self-attention model and related equipment
CN116740070A (en) * 2023-08-15 2023-09-12 青岛宇通管业有限公司 Plastic pipeline appearance defect detection method based on machine vision

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118628489A (en) * 2024-08-12 2024-09-10 宝鸡新华利机械科技有限公司 Packaging box sealing quality inspection method based on machine vision

Also Published As

Publication number Publication date
CN117975175B (en) 2024-06-25

Similar Documents

Publication Publication Date Title
CN110148130B (en) Method and device for detecting part defects
CN115082683B (en) Injection molding defect detection method based on image processing
CN118608504B (en) Machine vision-based part surface quality detection method and system
CN115082419B (en) Blow-molded luggage production defect detection method
CN113435460B (en) A recognition method for bright crystal granular limestone images
CN108562589B (en) Method for detecting surface defects of magnetic circuit material
CN114972329A (en) Image enhancement method and system of surface defect detector based on image processing
CN111582294A (en) Method for constructing convolutional neural network model for surface defect detection and application thereof
CN110473201A (en) A kind of automatic testing method and device of disc surface defect
CN111968095A (en) Product surface defect detection method, system, device and medium
CN117974601B (en) Method and system for detecting surface defects of silicon wafer based on template matching
CN113221881B (en) A multi-level smartphone screen defect detection method
CN114820625B (en) Automobile top block defect detection method
CN117975175B (en) Plastic pipeline appearance defect detection method based on machine vision
CN117635615B (en) Defect detection method and system for realizing punching die based on deep learning
CN117911409B (en) Mobile phone screen bad line defect diagnosis method based on machine vision
CN110458812B (en) Quasi-circular fruit defect detection method based on color description and sparse expression
CN119941740B (en) Part machining detection method and system
CN117333796A (en) Ship target automatic identification method and system based on vision and electronic equipment
CN112381140A (en) Abrasive particle image machine learning identification method based on new characteristic parameters
CN114926635B (en) Target segmentation method in multi-focus image combined with deep learning method
CN116580006A (en) Bottled product labeling quality detection method based on machine vision
CN114863464A (en) Second-order identification method for PID drawing picture information
CN112396580B (en) Method for detecting defects of round part
CN112396648A (en) Target identification method and system capable of positioning mass center of target object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant