Industrial part size detection method based on machine vision
Technical Field
The invention belongs to the technical field of machine vision detection, and particularly relates to a machine vision-based industrial part size detection method.
Background
With the continuous development of science and technology, the traditional machining of mechanical parts is moving toward high precision, high efficiency, and high-grade materials, and machining automation is likewise an important direction of development. The technology upgrade of the machining industry as a whole brings higher requirements for the inspection and detection of finished mechanical products. In traditional machining, workers judge whether parts are qualified by eye and with simple tools, a mode of detection that depends heavily on the workers' experience and subjective judgment. Even skilled workers performing detection work repeatedly are prone to fatigue and negligence, leading to missed or erroneous detections. This greatly restricts improvements in machining precision and production efficiency, so more advanced detection technology needs to be introduced into the machining industry.
Machine vision is a modern detection technology in which an industrial CCD camera replaces the human eye: the camera captures images of the object under inspection, converts the image information into a digital signal, and extracts the required features from that signal, thereby detecting the state of the object. With the development of digital processing technology and artificial intelligence, requirements on product quality and efficiency keep rising, so finding a method that can improve product quality while increasing detection speed is of great importance.
Disclosure of Invention
The invention aims to provide a machine vision-based industrial part size detection method that addresses the low measurement efficiency and poor accuracy of traditional techniques, as well as the high price and difficult operation of detection systems currently on the market.
The technical scheme adopted by the invention is that the industrial part size detection method based on machine vision is implemented according to the following steps:
step 1, collecting an image by using an image collection system;
step 2, sequentially carrying out median filtering, threshold segmentation, image filling, Canny edge rough extraction, edge fine extraction and size calibration on the image to obtain size data;
step 3, comparing the detected size data with the standard size data, and judging whether the detected size data is within the error range of the standard size data; if yes, the product is qualified; if not, the product is not qualified.
The invention is also characterized in that:
in step 1, the image acquisition system comprises a CCD industrial camera; the CCD industrial camera, a computer, a single-chip microcomputer, and a relay are connected in sequence; the computer is also connected with an infrared sensor.
The infrared sensor is an active infrared sensor;
the single-chip microcomputer is of type 89C51.
In step 2, the specific process of median filtering is as follows:
using a 3 × 3 two-dimensional sliding template, the pixels inside the template are sorted by pixel value to generate a monotonically rising data sequence; the median value is placed at the centre position of the template and written back into the original image. Scanning the whole image in this way yields the filtered image f(x, y).
In step 2, the specific process of threshold segmentation is as follows:
(1) given an initial threshold T_h = T_h0 (taken as 1 by default when the search starts), the original image can be divided into two classes, C1 and C2;
(2) the mean and intra-class variance of the two classes are calculated respectively:
μ_i = (1/N_Ci)·Σ_{(x,y)∈Ci} f(x, y),  σ_i² = (1/N_Ci)·Σ_{(x,y)∈Ci} (f(x, y) − μ_i)²,  i = 1, 2
where f(x, y) is the acquired image; N_C1 is the number of pixels classified into C1; N_C2 is the number of pixels classified into C2; μ is the class mean; σ² is the class variance;
(3) classification: if |f(x, y) − μ1| ≤ |f(x, y) − μ2|, then f(x, y) belongs to C1; otherwise f(x, y) belongs to C2;
(4) respectively recalculating the mean value and the variance of all pixels in C1 and C2 obtained after the last step of reclassification;
(5) if the convergence condition T_h(t) = T_h(t−1) holds, the calculated threshold T_h(t−1) is output; otherwise (4) and (5) are repeated.
In step 2, the specific process of image filling is as follows:
(1) selecting a seed point in the image, namely a seed pixel point;
(2) taking the seed point as the starting point, push it onto a stack; if the colour to be filled is A, set the point's colour to A. Then judge its four-neighbourhood pixels: with a colour threshold T, the current pixel's gray value p(x, y) denoted P, and the four-neighbourhood pixels M(n), n = 1, 2, 3, 4, compute the gray difference D = |P − M|. If D < T, take pixel M as the next seed point and push it onto the stack; otherwise continue with the next neighbour;
(3) when the stack is empty, the seed padding ends, otherwise (2) is repeated.
In step 2, the specific process of crude extraction of the Canny edge is as follows:
(1) smoothing the image with a Gaussian filter to eliminate noise
Each pixel in the image is scanned with a 3 × 3 template, and the value of the template's centre pixel is replaced by the weighted-average gray value of the pixels in the neighbourhood determined by the template:
g(x, y) = Σ_{(m,n)∈M} w_mn·f(m, n) / Σ_{(m,n)∈M} w_mn
where f(m, n) is the gray value of the filled image at row m, column n; w_mn is the corresponding template weight; M is the filtering template;
(2) calculating the gradient strength and direction of each pixel point in the image
Approximation is performed using first order finite differences, resulting in two matrices of partial derivatives of the image in the x and y directions:
in the formula, the mathematical expressions of the first-order partial derivative matrix in the x direction and the y direction, the gradient amplitude and the gradient direction are as follows:
P[i,j]=(f[i,j+1]-f[i,j]+f[i+1,j+1]-f[i+1,j])/2 (9)
Q[i,j]=(f[i,j]-f[i+1,j]+f[i,j+1]-f[i+1,j+1])/2 (10)
M[i,j]=sqrt(P[i,j]^2+Q[i,j]^2) (11)
θ[i,j]=arctan(Q[i,j]/P[i,j]) (12)
in the formula, P[i, j] is the difference of the image in the horizontal direction; Q[i, j] is the difference of the image in the vertical direction; M[i, j] is the gradient strength; θ[i, j] is the gradient direction;
(3) applying non-maximum suppression to eliminate spurious response caused by edge detection
The gradient strength of the current pixel is compared with that of the two pixels along the positive and negative gradient directions; if the current pixel's gradient strength is the largest of the three, the pixel is retained as an edge point, otherwise it is suppressed;
(4) applying dual threshold detection and connection edges
Non-maximum suppression is followed by two thresholds th1 and th2, related by th1 = 0.4·th2. First, the gray value of pixels whose gradient value is smaller than th1 is set to 0, giving image 1; then the gray value of pixels whose gradient value is smaller than th2 is set to 0, giving image 2. Finally, edges are connected on the basis of image 2, supplemented by image 1.
In the step 2, the specific process of edge fine extraction is as follows:
Further extraction is performed with a cubic spline interpolation method. The interpolation kernel takes the common form
S(w) = 1 − 2|w|^2 + |w|^3,        0 ≤ |w| < 1
S(w) = 4 − 8|w| + 5|w|^2 − |w|^3, 1 ≤ |w| < 2
S(w) = 0,                         |w| ≥ 2          (13)
where S(w) is the interpolation kernel and w is a spline node;
the calculation formula of spline interpolation is represented by a matrix:
F(m,n)=ABC (14)
where A = [S(1 + u)  S(u)  S(1 − u)  S(2 − u)], C = [S(1 + v)  S(v)  S(1 − v)  S(2 − v)]^T, and B is the 4 × 4 matrix of pixel points surrounding the interpolation position; F(m, n) represents the interpolated image; f(i, j) represents a pixel point before interpolation; u = m − [m] and v = n − [n], where [·] denotes rounding down.
In step 2, the specific process of size calibration is as follows:
The correspondence between the gauge block's real size and its pixel size is obtained from the image after edge fine extraction, giving a calibration coefficient K1; the measured part is then calibrated with it, realising the size measurement.
The actual length of the gauge block is M (in mm) and the pixel size of the gauge block in the image acquired by the camera is N (in pixels); the ratio of the actual size M to the pixel size N is the gauge block's calibration coefficient K1:
K1 = M / N
Assuming the actual side-length dimension of the part is L (in mm) and the pixel size of the part's side length in the captured image is P (in pixels), the calibration coefficient K2 is:
K2 = L / P
When the camera lens parameters during image acquisition (viewing distance, focal length, magnification) and the external conditions (illumination and the relative positions of the camera and the target) are unchanged, the gauge block's calibration coefficient K1 is equal to the part's calibration coefficient K2; from the two equations above it follows that:
L = K1 · P = (M / N) · P
the invention has the beneficial effects that:
the industrial part size detection method based on machine vision adopts an edge extraction method combining a Canny algorithm and a cubic spline interpolation method, and obtains a more accurate edge position; the method solves the problems of low size measurement precision and high fault tolerance rate in industrial production, has wide application in size measurement, provides a new idea for size measurement, and provides a new idea for part detection.
Detailed Description
The present invention will be described in detail with reference to the following embodiments.
The invention relates to a machine vision-based industrial part size detection method, which is implemented according to the following steps:
step 1, collecting an image by using an image acquisition system; the image acquisition system comprises a CCD industrial camera; the CCD industrial camera, a computer, a single-chip microcomputer, and a relay are connected in sequence; the computer is also connected with an infrared sensor;
the infrared sensor is an active infrared sensor;
the single-chip microcomputer is of type 89C51.
Step 2, sequentially carrying out median filtering, threshold segmentation, image filling, Canny edge rough extraction, edge fine extraction and size calibration on the image to obtain size data;
the specific process of median filtering is as follows:
using a 3 × 3 two-dimensional sliding template, the pixels inside the template are sorted by pixel value to generate a monotonically rising data sequence; the median value is placed at the centre position of the template and written back into the original image. Scanning the whole image in this way yields the filtered image f(x, y).
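The sliding-template procedure above can be sketched in a few lines of Python; this is an illustrative sketch, not the patented implementation, and the function name and test array are hypothetical:

```python
import numpy as np

def median_filter_3x3(img):
    """Slide a 3x3 window over the image; replace each interior pixel
    with the middle value of the sorted window (edges left unchanged)."""
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = img[y-1:y+2, x-1:x+2].ravel()
            out[y, x] = np.sort(window)[4]  # middle of 9 sorted values
    return out

noisy = np.array([[10, 10, 10, 10],
                  [10, 255, 10, 10],
                  [10, 10, 10, 10],
                  [10, 10, 10, 10]], dtype=np.uint8)
clean = median_filter_3x3(noisy)
print(clean[1, 1])  # the 255 outlier is replaced by 10
```

The median replaces impulse ("salt-and-pepper") outliers without blurring edges as much as an averaging filter would.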
The specific process of threshold segmentation is as follows:
(1) given an initial threshold T_h = T_h0 (taken as 1 by default when the search starts), the original image can be divided into two classes, C1 and C2;
(2) the mean and intra-class variance of the two classes are calculated respectively:
μ_i = (1/N_Ci)·Σ_{(x,y)∈Ci} f(x, y),  σ_i² = (1/N_Ci)·Σ_{(x,y)∈Ci} (f(x, y) − μ_i)²,  i = 1, 2
where f(x, y) is the acquired image; N_C1 is the number of pixels classified into C1; N_C2 is the number of pixels classified into C2; μ is the class mean; σ² is the class variance;
(3) classification: if |f(x, y) − μ1| ≤ |f(x, y) − μ2|, then f(x, y) belongs to C1; otherwise f(x, y) belongs to C2;
(4) respectively recalculating the mean value and the variance of all pixels in C1 and C2 obtained after the last step of reclassification;
(5) if the convergence condition T_h(t) = T_h(t−1) holds, the calculated threshold T_h(t−1) is output; otherwise (4) and (5) are repeated.
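Steps (1)-(5) above can be sketched as the following simplified iteration, which moves the threshold to the midpoint of the two class means until it converges (function name, stopping tolerance, and the sample gray values are illustrative assumptions):

```python
import numpy as np

def iterative_threshold(img, t0=1.0, eps=0.5):
    """Iteratively split pixels into C1 (<= t) and C2 (> t), recompute
    the class means, and move t to their midpoint until it converges."""
    t_prev, t = None, float(t0)
    while t_prev is None or abs(t - t_prev) > eps:
        t_prev = t
        c1, c2 = img[img <= t], img[img > t]
        mu1 = c1.mean() if c1.size else float(img.min())
        mu2 = c2.mean() if c2.size else float(img.max())
        t = (mu1 + mu2) / 2.0  # pixels closer to mu1 end up in C1
    return t

pixels = np.array([10, 12, 11, 200, 210, 205], dtype=float)
t = iterative_threshold(pixels)
print(12 < t < 200)  # the converged threshold separates the two clusters
```

Classifying by nearest class mean, as in step (3), is equivalent to thresholding at the midpoint of the two means, which is what the update uses.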
The specific process of image filling is as follows:
(1) selecting a seed point in the image, namely a seed pixel point;
(2) taking the seed point as the starting point, push it onto a stack; if the colour to be filled is A, set the point's colour to A. Then judge its four-neighbourhood pixels: with a colour threshold T, the current pixel's gray value p(x, y) denoted P, and the four-neighbourhood pixels M(n), n = 1, 2, 3, 4, compute the gray difference D = |P − M|. If D < T, take pixel M as the next seed point and push it onto the stack; otherwise continue with the next neighbour;
(3) when the stack is empty, the seed padding ends, otherwise (2) is repeated.
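The stack-based fill in steps (1)-(3) can be sketched as follows; the grid, fill value, and threshold are hypothetical example values:

```python
def seed_fill(img, seed, fill_value, thresh):
    """Stack-based four-neighbourhood seed fill: a neighbour is pushed as
    the next seed when its gray difference from the current pixel D = |P - M|
    is below the colour threshold."""
    h, w = len(img), len(img[0])
    stack, visited = [seed], {seed}
    while stack:
        y, x = stack.pop()
        val = img[y][x]          # read the gray value before overwriting
        img[y][x] = fill_value
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in visited:
                if abs(img[ny][nx] - val) < thresh:  # gray difference D < T
                    visited.add((ny, nx))
                    stack.append((ny, nx))
    return img

grid = [[5, 5, 50],
        [5, 6, 50],
        [50, 50, 50]]
seed_fill(grid, (0, 0), 255, 10)
print(grid[0][0], grid[2][2])  # 255 50: only the connected low-gray region fills
```

The visited set guards against re-pushing pixels whose gray value has already been overwritten with the fill colour.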
The Canny edge crude extraction process is as follows:
(1) smoothing the image with a Gaussian filter to eliminate noise
Each pixel in the image is scanned with a 3 × 3 template, and the value of the template's centre pixel is replaced by the weighted-average gray value of the pixels in the neighbourhood determined by the template:
g(x, y) = Σ_{(m,n)∈M} w_mn·f(m, n) / Σ_{(m,n)∈M} w_mn
where f(m, n) is the gray value of the filled image at row m, column n; w_mn is the corresponding template weight; M is the filtering template;
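The weighted-average smoothing above can be sketched with a commonly used 3 × 3 Gaussian template (the specific weights are an assumption, since the patent does not give them):

```python
import numpy as np

def gaussian_smooth_3x3(img):
    """Replace each interior pixel by the weighted-average gray value of
    its 3x3 neighbourhood, using a common set of Gaussian template weights."""
    w = np.array([[1., 2., 1.],
                  [2., 4., 2.],
                  [1., 2., 1.]])
    w /= w.sum()                      # normalise so the weights sum to 1
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = (img[y-1:y+2, x-1:x+2] * w).sum()
    return out

flat = np.full((4, 4), 8.0)
out = gaussian_smooth_3x3(flat)
print(out[1, 1])  # a flat image is unchanged by smoothing: 8.0
```

Because the weights are normalised, a uniform region keeps its gray value while isolated noise spikes are attenuated.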
(2) calculating the gradient strength and direction of each pixel point in the image
Approximation is performed using first order finite differences, resulting in two matrices of partial derivatives of the image in the x and y directions:
in the formula, the mathematical expressions of the first-order partial derivative matrix in the x direction and the y direction, the gradient amplitude and the gradient direction are as follows:
P[i,j]=(f[i,j+1]-f[i,j]+f[i+1,j+1]-f[i+1,j])/2 (9)
Q[i,j]=(f[i,j]-f[i+1,j]+f[i,j+1]-f[i+1,j+1])/2 (10)
M[i,j]=sqrt(P[i,j]^2+Q[i,j]^2) (11)
θ[i,j]=arctan(Q[i,j]/P[i,j]) (12)
in the formula, P[i, j] is the difference of the image in the horizontal direction; Q[i, j] is the difference of the image in the vertical direction; M[i, j] is the gradient strength; θ[i, j] is the gradient direction;
(3) applying non-maximum suppression to eliminate spurious response caused by edge detection
The gradient strength of the current pixel is compared with that of the two pixels along the positive and negative gradient directions; if the current pixel's gradient strength is the largest of the three, the pixel is retained as an edge point, otherwise it is suppressed;
(4) applying dual threshold detection and connection edges
Non-maximum suppression is followed by two thresholds th1 and th2, related by th1 = 0.4·th2. First, the gray value of pixels whose gradient value is smaller than th1 is set to 0, giving image 1; then the gray value of pixels whose gradient value is smaller than th2 is set to 0, giving image 2. Finally, edges are connected on the basis of image 2, supplemented by image 1.
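The finite-difference gradients of steps (2) and the dual thresholding of step (4) can be sketched as follows (illustrative only; np.arctan2 stands in for arctan(Q/P) to avoid dividing by zero where P = 0, and the step image and th2 value are assumptions):

```python
import numpy as np

def gradients(f):
    """First-order finite differences over 2x2 cells, then gradient
    strength M = sqrt(P^2 + Q^2) and direction theta."""
    P = (f[:-1, 1:] - f[:-1, :-1] + f[1:, 1:] - f[1:, :-1]) / 2.0
    Q = (f[:-1, :-1] - f[1:, :-1] + f[:-1, 1:] - f[1:, 1:]) / 2.0
    M = np.hypot(P, Q)
    theta = np.arctan2(Q, P)
    return P, Q, M, theta

def double_threshold(M, th2):
    """Dual thresholds with th1 = 0.4*th2: image 1 zeroes gradients
    below th1, image 2 zeroes gradients below th2."""
    th1 = 0.4 * th2
    image1 = np.where(M >= th1, M, 0.0)
    image2 = np.where(M >= th2, M, 0.0)
    return image1, image2

step = np.array([[0., 0., 10., 10.]] * 3)  # vertical step edge
P, Q, M, theta = gradients(step)
image1, image2 = double_threshold(M, th2=5.0)
print(M[0, 1], image2[0, 1])  # the step column carries gradient strength 10.0
```

Image 2 keeps only strong edges; edge linking then recovers weak-edge pixels from image 1 that touch a strong edge, which is the "supplemented by image 1" step in the text.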
The specific process of edge fine extraction is as follows:
Further extraction is performed with a cubic spline interpolation method. The interpolation kernel takes the common form
S(w) = 1 − 2|w|^2 + |w|^3,        0 ≤ |w| < 1
S(w) = 4 − 8|w| + 5|w|^2 − |w|^3, 1 ≤ |w| < 2
S(w) = 0,                         |w| ≥ 2          (13)
where S(w) is the interpolation kernel and w is a spline node;
the calculation formula of spline interpolation is represented by a matrix:
F(m,n)=ABC (14)
where A = [S(1 + u)  S(u)  S(1 − u)  S(2 − u)], C = [S(1 + v)  S(v)  S(1 − v)  S(2 − v)]^T, and B is the 4 × 4 matrix of pixel points surrounding the interpolation position; F(m, n) represents the interpolated image; f(i, j) represents a pixel point before interpolation; u = m − [m] and v = n − [n], where [·] denotes rounding down.
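A sketch of the F(m, n) = A·B·C interpolation, assuming the common kernel form given above (the kernel form, function names, and test image are assumptions, not the patent's exact formulas):

```python
import numpy as np

def S(w):
    """One common cubic spline interpolation kernel (assumed eq. 13 form)."""
    w = abs(w)
    if w < 1:
        return 1 - 2 * w**2 + w**3
    if w < 2:
        return 4 - 8 * w + 5 * w**2 - w**3
    return 0.0

def interp(f, m, n):
    """F(m, n) = A.B.C: A and C hold kernel weights built from the
    fractional offsets u = m - [m], v = n - [n]; B is the 4x4 neighbourhood."""
    i, j = int(np.floor(m)), int(np.floor(n))
    u, v = m - i, n - j
    A = np.array([S(1 + u), S(u), S(1 - u), S(2 - u)])
    C = np.array([S(1 + v), S(v), S(1 - v), S(2 - v)])
    B = f[i - 1:i + 3, j - 1:j + 3]
    return A @ B @ C

f = np.arange(36, dtype=float).reshape(6, 6)
val = interp(f, 2.0, 2.0)
print(val)  # at an integer position the interpolation returns f[2, 2] = 14.0
```

Evaluating at sub-pixel positions between two coarse edge locations is what lets the fine-extraction stage place the edge more precisely than the pixel grid.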
The specific process of dimension calibration is as follows:
The correspondence between the gauge block's real size and its pixel size is obtained from the image after edge fine extraction, giving a calibration coefficient K1; the measured part is then calibrated with it, realising the size measurement.
The actual length of the gauge block is M (in mm) and the pixel size of the gauge block in the image acquired by the camera is N (in pixels); the ratio of the actual size M to the pixel size N is the gauge block's calibration coefficient K1:
K1 = M / N
Assuming the actual side-length dimension of the part is L (in mm) and the pixel size of the part's side length in the captured image is P (in pixels), the calibration coefficient K2 is:
K2 = L / P
When the camera lens parameters during image acquisition (viewing distance, focal length, magnification) and the external conditions (illumination and the relative positions of the camera and the target) are unchanged, the gauge block's calibration coefficient K1 is equal to the part's calibration coefficient K2; from the two equations above it follows that:
L = K1 · P = (M / N) · P
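A worked example of the calibration arithmetic; all numbers here are hypothetical:

```python
# Hypothetical setup: a 50 mm gauge block spans 1000 px in the image,
# and the part's side length spans 640 px under the same optics and lighting.
M_mm, N_px = 50.0, 1000.0
K1 = M_mm / N_px        # calibration coefficient K1 = M / N, in mm per pixel
P_px = 640.0
L_mm = K1 * P_px        # since K2 = L / P equals K1, L = K1 * P
print(L_mm)  # 32.0
```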
step 3, comparing the detected size data with the standard size data, and judging whether the detected size data is within the error range of the standard size data; if yes, the product is qualified; if not, the product is not qualified.
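Step 3's pass/fail comparison amounts to a tolerance check; a one-function sketch, where the nominal size and tolerance values are illustrative:

```python
def is_qualified(measured_mm, standard_mm, tol_mm):
    """The product is qualified when the detected size lies within the
    error range (standard +/- tolerance) of the standard size."""
    return abs(measured_mm - standard_mm) <= tol_mm

print(is_qualified(32.02, 32.0, 0.05))  # True  -> qualified
print(is_qualified(32.10, 32.0, 0.05))  # False -> not qualified
```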
The device in the image acquisition system has the following functions:
CCD industrial camera: collects images and converts optical signals into ordered electrical signals;
computer: receives the industrial camera's signals and performs image processing to obtain the required features of the part; receives the infrared sensor's signals and issues a photographing instruction to the camera accordingly;
single-chip microcomputer: receives the separation instruction from the computer and controls the rotation and stopping of the motor;
relay: an automatic switch that uses a small current to control a large-current operation; an actuating mechanism that switches the controlled circuit on and off;
infrared sensor: a pair of infrared transmitting and receiving diodes; the transmitting tube emits an infrared signal at a specific frequency and the receiving tube receives it. When an obstacle appears in the infrared detection direction, the signal no longer reaches the receiving tube; the receiver's signal changes and is returned to the computer through the sensor's built-in digital interface.
The machine vision-based industrial part size detection method adopts an edge extraction method that combines the Canny algorithm with cubic spline interpolation, obtaining a more accurate edge position. It addresses the low precision and high error rate of size measurement in industrial production, has wide application in size measurement, and provides a new idea for both size measurement and part detection.