Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when" or "upon" or "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining" or "in response to determining" or "upon detecting [described condition or event]" or "in response to detecting [described condition or event]".
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings of the embodiments. It is apparent that the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments that can be derived by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. However, as will be readily apparent to those of ordinary skill in the art, the present invention may be practiced in ways other than those specifically described here without departing from its spirit; the present invention is therefore not limited to the specific embodiments disclosed below.
Efficient and reliable defect detection is an important step in the automation of the electronics manufacturing industry. With the development of science and technology, the components used in the production of electronic products keep shrinking, assembly density keeps rising, and the requirements on defect detection grow accordingly. In the prior art, an inspector usually examines the object under test visually, that is, surface defect inspection is performed by eye. The problem with this approach is that manual visual inspection is influenced by subjective factors and has difficulty detecting small defects, which hinders improvement of the accuracy and efficiency of defect detection.
At present, the length and width of some chip components are only 0.1 x 0.2 mm, and the lead pitch of fine-pitch devices is only 0.1 mm. The main production steps of a printed circuit board surface-mount line include board loading, solder paste printing, chip shooting, IC placement, reflow soldering, through-hole component assembly, wave soldering, packaging, and the like. More than 80% of the quality defects of surface-mounted electronic products are caused by solder paste printing defects. However, inspecting solder paste printing quality is technically difficult, and fine defects are hard to judge by manual visual inspection.
Defect detection can therefore be carried out based on microstructure 3D inspection equipment and three-dimensional reconstruction technology. Such methods fall roughly into three types: fast laser triangulation; three-dimensional attribute analysis of two-dimensional images collected under special illumination; and three-dimensional reconstruction based on matching feature points between binocular two-dimensional images collected by multi-view Charge-Coupled Device (CCD) cameras. However, the two-dimensional image methods make it hard to establish an accurate model, while in three-dimensional detection the feature point matching is computationally heavy and slow, matching errors degrade stability, and the resulting three-dimensional precision is low. The present invention therefore adopts grating projection Phase Measurement Profilometry (PMP), a further development of laser triangulation, for detection: a deep learning model is trained through a neural network on two-dimensional images to improve the three-dimensional measurement performance of PMP and obtain a more accurate image, and the relevant defects are then detected by a YOLO deep learning algorithm.
Specifically, in order to solve the problems in the prior art, this embodiment provides a defect detection method, a defect detection device, an intelligent terminal, and a computer-readable storage medium. The method comprises: acquiring a target neural network; acquiring a two-dimensional image to be trained and a three-dimensional image to be trained, and training them based on the target neural network to obtain a model training set; acquiring a defect detection model to be trained, and training it based on the model training set to obtain a target defect detection model; and acquiring an image to be detected, and performing defect detection based on the target defect detection model and the image to be detected. Compared with the prior-art approach in which inspectors detect surface defects visually, the present invention detects defects automatically by combining a neural network with a defect detection model, which helps improve the accuracy and efficiency of defect detection. Meanwhile, the two-dimensional and three-dimensional images to be trained are trained by the target neural network to obtain the model training set, and the target defect detection model is obtained by training on that set, so the detection precision of the target defect detection model is improved and the defect detection accuracy is further improved.
Exemplary method
As shown in fig. 1, an embodiment of the present invention provides a defect detection method. Specifically, the method includes the following steps:
Step S100, acquiring a target neural network.
Here, the target neural network is a neural network that has been trained in advance. In this embodiment, correspondingly paired two-dimensional and three-dimensional images can be trained based on the target neural network to obtain a more accurate three-dimensional image, thereby improving the accuracy of defect detection.
Step S200, acquiring a two-dimensional image to be trained and a three-dimensional image to be trained, and training the two-dimensional image to be trained and the three-dimensional image to be trained based on the target neural network to obtain a model training set.
The two-dimensional image to be trained and the three-dimensional image to be trained are the images input into the target neural network. After they are input, the target neural network trains on them and optimizes them to obtain an accurate three-dimensional image; the model training set is then obtained from the set of accurate three-dimensional images so produced.
Step S300, acquiring a defect detection model to be trained, and training the defect detection model to be trained based on the model training set to obtain a target defect detection model.
The defect detection model to be trained is a model that can be used for defect detection after further training; a machine-learning target detection model, such as a YOLO model, can be selected for this purpose. Training it on the model training set obtained from the accurate three-dimensional images improves its defect detection performance and yields the target defect detection model. The target defect detection model is the defect detection model after training is completed, and can be used directly for defect detection.
Step S400, acquiring an image to be detected, and performing defect detection based on the target defect detection model and the image to be detected.
The image to be detected is a three-dimensional image of the object to be detected (e.g., an electronic component), and can be obtained by PMP three-dimensional measurement: the measured object modulates the projected light source, the phase change of that modulation is calculated from the gray levels of the acquired images, and the three-dimensional height is finally computed. The image to be detected is input into the target defect detection model, which performs target recognition on the electronic components in the image, thereby realizing defect detection.
As can be seen from the above, the defect detection method provided in the embodiment of the present invention acquires a target neural network; acquires a two-dimensional image to be trained and a three-dimensional image to be trained, and trains them based on the target neural network to obtain a model training set; acquires a defect detection model to be trained, and trains it based on the model training set to obtain a target defect detection model; and acquires an image to be detected and performs defect detection based on the target defect detection model and the image to be detected. Compared with the prior-art approach in which inspectors detect surface defects visually, the present invention detects defects automatically by combining a neural network with a defect detection model, which helps improve the accuracy and efficiency of defect detection. Meanwhile, the two-dimensional and three-dimensional images to be trained are trained by the target neural network to obtain the model training set, and the target defect detection model is obtained by training on that set, so the detection precision of the target defect detection model is improved and the defect detection accuracy is further improved.
Specifically, in this embodiment, as shown in fig. 2, the step S100 includes:
Step S101, obtaining a neural network training set, wherein the neural network training set comprises a plurality of groups of corresponding two-dimensional training images and three-dimensional training images.
Step S102, acquiring a neural network to be trained, and training the neural network to be trained based on the neural network training set to obtain a target neural network.
The groups of corresponding two-dimensional and three-dimensional training images in the neural network training set are pre-acquired pairs of two-dimensional and three-dimensional images of structures or devices similar to the object to be detected (such as a microstructure or an electronic component). The two-dimensional training images can be captured with a CCD camera, and the three-dimensional training images can be measured by the PMP method. The neural network to be trained is then trained on each associated group of two-dimensional and three-dimensional training images.
In one application scenario, the two-dimensional training images (captured by a CCD camera) and the three-dimensional training images (obtained by the PMP method) are acquired as follows. A photoelectric aiming device is mounted on the working surface of a three-dimensional moving platform and the laser is switched on. With the platform at the initial Z position (X being the horizontal direction, Y the vertical direction, and Z the height), the platform is adjusted along the X direction until the output voltage of the silicon phototransistor in the photoelectric aiming device reaches its maximum, so that the light plane coincides precisely with the center of the aiming hole on the device. The spatial position of the aiming hole at this moment is taken as the start of the object coordinates (the origin of the object coordinate system). The laser is then turned off, a two-dimensional image of the aiming hole is captured with the CCD camera, and the sub-pixel image coordinates of the center of the aiming hole image corresponding to this spatial coordinate are obtained through edge detection and ellipse fitting. The laser is switched on again, the platform is moved a certain distance along the vertical (Y) direction and then adjusted along the horizontal (X) direction again until the phototransistor output voltage is at its maximum, so that the light plane once more coincides precisely with the center of the aiming hole, and the spatial object coordinates of the hole center are recorded from the movement of the platform. The Z position can be set to a designated position on the photographed object (such as the initial position of a circuit board solder joint). In this process, an origin is set first, the hole center is then aligned with the center of the structure to be measured, the hole center is moved back to the origin, and the direction and distance of the movement are recorded to measure the coordinates of the structure, so that the corresponding three-dimensional image can be obtained.
In one application scenario, the two-dimensional images captured by the CCD camera may also be preprocessed to obtain the two-dimensional training images (and the other two-dimensional images required in this embodiment, such as the two-dimensional image to be trained). The purpose of the preprocessing is to ease computation and save time, and it may include graying, filtering, sharpening, pin frame selection, and the like. Specifically, the image produced by the CCD is in color, and processing a color image requires handling three channels in sequence, which incurs a large time overhead. To increase processing speed and meet real-time requirements, the amount of data to be processed is reduced by converting the color image to grayscale. After graying, image noise is reduced by filtering; mean filtering, median filtering, and the like may optionally be used. The processed image is then sharpened to highlight the edge contours of objects and make edges easier to identify; specifically, the image edges can be sharpened using sharpening functionality provided with OpenCV. Finally, pin frame selection is performed: the sharpened image is processed with OpenCV's minimum bounding rectangle function, and the defective pin frames are retained according to the area of each rectangle. Specifically, the minimum bounding rectangles of the whole component and its pins are obtained, and the defective pin frames are retained based on the input (or preset) rectangle area, where the input rectangle area is the standard area of the rectangle corresponding to each pin; whether a marked rectangle is a pin, whether its size is standard, and whether it is defective are judged from the difference between the obtained area and the standard area.
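A minimal sketch of this preprocessing pipeline using standard OpenCV calls is given below. The sharpening kernel, the Otsu binarization used to find contours, and the standard pin area and tolerance are illustrative assumptions, since the embodiment does not fix these values:

```python
import cv2
import numpy as np

# Illustrative standard pin area (pixels^2) and tolerance; in practice these
# would be set from the known package geometry (assumed values here).
STANDARD_PIN_AREA = 400.0
AREA_TOLERANCE = 0.25  # allowed fraction of deviation from the standard area

def preprocess(image_bgr):
    """Graying, filtering, sharpening, and pin frame selection."""
    # Graying: collapse three channels to one to reduce the data volume.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Filtering: a median filter suppresses noise (mean filtering also works).
    filtered = cv2.medianBlur(gray, 3)

    # Sharpening: a Laplacian-based kernel highlights edge contours.
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    sharpened = cv2.filter2D(filtered, -1, kernel)

    # Pin frame selection: find contours, take the minimum bounding rectangle
    # of each, and keep frames whose area deviates from the standard pin area
    # by more than the tolerance (candidate defective pin frames).
    binary = cv2.threshold(sharpened, 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    defective_frames = []
    for c in contours:
        rect = cv2.minAreaRect(c)  # minimum circumscribed rectangle
        w, h = rect[1]
        if abs(w * h - STANDARD_PIN_AREA) > AREA_TOLERANCE * STANDARD_PIN_AREA:
            defective_frames.append(rect)
    return sharpened, defective_frames
```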
OpenCV is an open-source library for image processing, image analysis, and machine vision. It is written in C and C++ and runs on Windows, Linux, and Mac OS X. OpenCV functions are used here to perform filtering, binarization, chip positioning, and similar operations on the workpiece surface image, thereby optimizing the image pixels and improving the accuracy of the subsequent surface quality inspection. They also locate and mark the defects, which facilitates subsequent operations such as picking.
In this embodiment, the neural network to be trained is a BP neural network. The BP model is one of the important neural network models and is widely applied in classification, prediction, fault diagnosis, and parameter detection. Its structure has input layer nodes and output layer nodes, and may also have one or more layers of hidden nodes. An input signal is first propagated forward to the hidden nodes; after passing through the activation function, the hidden nodes' outputs are propagated to the output nodes, which finally produce the result through the output function. The learning process of this algorithm consists of forward propagation and backward propagation. In forward propagation, the input signal is processed layer by layer from the input layer through the hidden layers to the output layer, and the state of each layer of neurons affects only the next layer. If the output layer cannot produce the expected output, the process switches to backward propagation, returning the error signal along the original connection paths and modifying the weights of each layer's neurons so as to minimize the error. In effect, the BP model turns the input/output problem for a set of samples into a nonlinear optimization problem. It uses gradient descent, the most common optimization method, solved iteratively; adding hidden nodes increases the adjustable parameters of the optimization problem corresponding to the learning and memory task, so a more accurate solution can be obtained.
Specifically, an image to be input into the BP neural network for training (or testing) is divided into disjoint subregions of equal size; the local minimum point of each subregion, that is, the center point of a receptive field, is found by scanning, and the nearest peak in each neighborhood direction of the center point is searched for, the corresponding pixel points being the peripheral points. With this sample acquisition and processing method, 380 spatial sampling points are collected (the exact number depends on the number of subregions and can be set according to actual requirements), with a sampling interval of 1 mm. 280 of the points are used as network training samples and 100 as network test samples. This acquisition method keeps the spatial positioning precision and image positioning precision of the sampling points at about 10 μm and 0.1 pixel respectively, which is high enough that excessive subdivision is not required. In network training and testing, a sigmoid function or a hyperbolic function may be used as the neuron transfer function; the hyperbolic function is preferred in this embodiment for better accuracy. Since the hyperbolic function is a saturating function, in data preprocessing the input training samples (i.e., the images input into the BP neural network for training it) are linearly transformed into [-1, 1] and the output samples into [-0.95, 0.95] to reduce the computational load.
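For reference, the linear transformation mentioned above is a simple affine rescaling of each sample component over its observed range (a sketch; the exact ranges come from the training data):

$$
x' = 2\,\frac{x - x_{\min}}{x_{\max} - x_{\min}} - 1 \in [-1,\,1], \qquad
y' = 1.9\,\frac{y - y_{\min}}{y_{\max} - y_{\min}} - 0.95 \in [-0.95,\,0.95]
$$

Mapping the outputs into [-0.95, 0.95] keeps the targets away from the saturation limits ±1 of the hyperbolic function, so they remain reachable by the network.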
In the BP neural network of this embodiment, the input layer has 2 nodes and the output layer has 3 nodes. The image coordinates (x, y) of a spatial point, determined after establishing a rectangular coordinate system on an image acquired by PMP, serve as the input values of the input nodes, and the object coordinates (x, y, z) corresponding to that spatial point serve as the output values of the output nodes (the whole image defines the rectangular coordinate system, and each detected point has its object coordinates). The network is trained and tested by adjusting its tunable parameters (number of hidden layers, number of hidden nodes, learning rate, momentum factor, and the characteristic parameters of the hyperbolic function), and the training and testing precision are computed as root mean square errors.
Specifically, in this embodiment the BP neural network is trained and optimized as follows. First, the network is initialized and the weights of each layer's neurons are set randomly. Second, the input objects and target outputs of the network are given; the input objects are the two-dimensional training image and the three-dimensional image obtained by PMP, and the desired output is the more accurate three-dimensional image. Third, the outputs of each unit of the hidden and output layers are computed; the network computes each layer's unit outputs automatically from the input objects and the layer weights, each unit being a neuron. Fourth, the deviation between the target value and the actual output is computed; the target value is the ideal output of the network, obtained by the user through separate calculation during training. Fifth, the weights of each layer's neurons are adjusted according to the deviation between the target value and each unit's output; for example, if computation converges too fast without reaching the desired value, the learning rate can be reduced so that more data is traversed. Sixth, the process returns to the third step and learning restarts until the deviation between the target value and the actual output meets the preset training precision (for example, the error relative to the expected optimum does not exceed 0.1). In each learning pass, the optimal weights between the neurons of each layer are obtained by adjusting parameters such as the number of input nodes, the number of hidden layers and hidden nodes, the number of output nodes, the learning rate, the momentum factor, the output function, and the activation function and its gain coefficient, so that the output layer approaches the expected output as closely as possible and the deviation signal is minimized. The different parts to be measured are then determined through the coordinates of the sample points, and the three-dimensional image from PMP is refined by the two-dimensional image from the CCD. After the BP neural network has been trained, it is tested with the test samples, and the deviation from the true values is checked to judge whether the precision meets the requirement.
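The six steps above amount to ordinary gradient descent with momentum on a feedforward network. Below is a minimal numpy sketch under stated assumptions (one hidden layer of 16 nodes and illustrative learning rate and momentum values; the embodiment tunes these parameters during training):

```python
import numpy as np

rng = np.random.default_rng(0)

# Network shape per this embodiment: 2 input nodes (image coordinates x, y),
# one hidden layer (node count is tunable, 16 assumed here), and
# 3 output nodes (object coordinates x, y, z).
n_in, n_hidden, n_out = 2, 16, 3
lr, momentum = 0.05, 0.9  # learning rate and momentum factor (assumed values)

# Step 1: initialize the weights of each layer randomly.
W1 = rng.normal(0, 0.5, (n_in, n_hidden));  dW1 = np.zeros_like(W1)
W2 = rng.normal(0, 0.5, (n_hidden, n_out)); dW2 = np.zeros_like(W2)

def forward(X):
    # Step 3: hidden and output layer activations (hyperbolic transfer function).
    H = np.tanh(X @ W1)
    Y = np.tanh(H @ W2)
    return H, Y

def train(X, T, epochs=5000, target_rmse=0.1):
    """X: inputs scaled to [-1, 1]; T: targets scaled to [-0.95, 0.95]."""
    global W1, W2, dW1, dW2
    for _ in range(epochs):
        H, Y = forward(X)                 # forward propagation
        E = T - Y                         # step 4: deviation from target
        rmse = np.sqrt(np.mean(E ** 2))   # root mean square error
        if rmse <= target_rmse:           # step 6: stop at target precision
            break
        # Backward propagation: the error signal is returned along the
        # connections; tanh'(a) = 1 - tanh(a)^2.
        delta_out = E * (1 - Y ** 2)
        delta_hid = (delta_out @ W2.T) * (1 - H ** 2)
        # Step 5: adjust weights by gradient descent plus a momentum term.
        dW2 = lr * H.T @ delta_out / len(X) + momentum * dW2
        dW1 = lr * X.T @ delta_hid / len(X) + momentum * dW1
        W2 += dW2
        W1 += dW1
    return rmse
```

In use, train would be called on the 280 scaled training points, and the remaining 100 test points would be passed through forward to compute the test root mean square error against their true object coordinates.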
Specifically, in this embodiment, as shown in fig. 3, the step S200 includes:
Step S201, acquiring a plurality of groups of corresponding two-dimensional images to be trained and three-dimensional images to be trained, wherein the two-dimensional images to be trained are captured by a CCD camera and the three-dimensional images to be trained are obtained by grating projection phase measurement profilometry.
Step S202, training each group of the two-dimensional images to be trained and the three-dimensional images to be trained based on the target neural network to obtain a model training set.
The two-dimensional images to be trained and the three-dimensional images to be trained correspond one to one and are trained through the target neural network. During training, the two-dimensional image is used to improve the accuracy of the three-dimensional image, yielding an accurate three-dimensional image; the model training set is then built from these accurate three-dimensional images and used to train the defect detection model to be trained.
Grating projection Phase Measurement Profilometry (PMP), a further development of laser triangulation, offers high precision, and its speed is improved by projecting area-structured light. Several images of the phase-shifted projected grating are acquired to compute the modulated phase, converting the image-based distance measurement into a measurement of image gray levels. PMP is similar to laser triangulation: both are active stereo vision measurements, and both obtain the three-dimensional height from the modulation of the projected light by the measured object. The difference is that grating projection uses a grating with sinusoidal brightness variation as an area-structured light source, so the whole image can be measured at once, improving measurement speed. The three-dimensional height is obtained by using the measured object's modulation of the projected light, calculating the phase change of that modulation from the acquired image gray levels, and finally computing the height. However, structured-light projection measurement requires a certain included angle between the projection direction and the CCD acquisition direction, and because the solder paste height varies, shadows inevitably appear in the grating projection. No grating projection image can be acquired in a shadow, so the height information of the shadowed part cannot be computed; at the same time, shadows create discontinuous regions for phase unwrapping, so conventional phase unwrapping methods make errors there. Some electronic microstructures also require two-dimensional defect detection, and this two-dimensional information can be used to improve PMP: it can improve the phase measurement precision, the phase unwrapping, and the handling of shadow regions, making microstructure 3D detection accurate and fast. Therefore, in this embodiment the three-dimensional image is reconstructed by combining the three-dimensional shape features, the gray information of the two-dimensional image, and three-dimensional height interpolation of neighboring points. Features extracted from the two-dimensional image and the three-dimensional information obtained by grating projection phase measurement are matched and fused with a deep-learning BP neural network, so that the two-dimensional image features accurately reflect the three-dimensional variation in the shadow and the accuracy of the three-dimensional image is improved. Training the BP neural network on the two-dimensional image features thus improves the PMP three-dimensional measurement performance, speeds up the PMP phase unwrapping algorithm, and solves the measurement problem in shadowed projection regions. The precision and speed of microstructure 3D detection can be improved, robustness is enhanced, and the influence of noise and other interference is small.
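As a sketch of the phase calculation underlying PMP (the shift count, sign convention, and the simplified height model in the comments are common textbook assumptions, not formulas prescribed by this embodiment):

```python
import numpy as np

def wrapped_phase(images):
    """N-step phase shifting: images[n] is the gray-level image captured with
    the grating shifted by 2*pi*n/N; returns the wrapped modulation phase."""
    N = len(images)
    deltas = 2 * np.pi * np.arange(N) / N
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    # The sign depends on the shift direction; atan2 keeps the phase in
    # (-pi, pi], which must be unwrapped before conversion to height.
    return -np.arctan2(num, den)

# In a simplified triangulation model, the height is proportional to the
# unwrapped phase difference between the measured surface and a reference
# plane:
#     h(x, y) ~ (phi_obj - phi_ref) * p / (2 * pi * tan(theta))
# where p is the grating period on the object and theta the included angle
# between projection and acquisition. A real system is calibrated rather
# than relying on this ideal model; in this embodiment the BP neural
# network refines the resulting three-dimensional image.
```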
It should be noted that, after acquisition, the images to be used as the two-dimensional image to be trained and the three-dimensional image to be trained may also be preprocessed first. The preprocessing includes graying, filtering, sharpening, and pin frame selection; for the specific procedure, refer to the preprocessing used to obtain the two-dimensional training images, which is not repeated here.
In this embodiment, the defect detection model to be trained is a YOLOv3 model, and the model is optimized by modifying code under the darknet framework. Step S202 specifically includes: obtaining the accurate three-dimensional images produced by BP neural network training, labeling the targets in the images with the yolo-mark labeling tool, and using the resulting data set as the training set for the YOLOv3 model. After training is finished, the model can be tested and improved, finally yielding the target defect detection model. Specifically, because the training process of a neural network model is unpredictable, the test results after training may not meet the conditions of use, in which case the data set needs to be adjusted to be as close as possible to the final images to be detected, thereby improving the recognition accuracy of the model.
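A sketch of this training workflow, assuming the standard darknet command-line interface and its usual file layout (obj.data, yolov3.cfg, pretrained darknet53.conv.74 weights); all paths and the class count below are illustrative:

```python
import subprocess

# Targets in the BP-refined images are first boxed with the yolo-mark tool,
# producing one .txt label file per image plus train/valid lists. An
# illustrative obj.data file points darknet at those files:
#   classes = 2
#   train   = data/train.txt
#   valid   = data/test.txt
#   names   = data/obj.names
#   backup  = backup/

# Standard darknet training invocation, starting from pretrained
# convolutional weights (assumed file layout).
subprocess.run([
    "./darknet", "detector", "train",
    "data/obj.data", "cfg/yolov3.cfg", "darknet53.conv.74",
], check=True)
```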
Specifically, in this embodiment, as shown in fig. 4, the step S400 includes:
Step S401, acquiring an image of the object to be detected as the image to be detected.
Step S402, inputting the image to be detected into the target defect detection model to detect the defects of the object to be detected.
The object to be detected is an object requiring defect detection, such as a circuit board, an electronic component, or another microstructured workpiece. The image to be detected is a three-dimensional image of the object acquired through PMP. The image is input into the trained target defect detection model, and the relevant defects (such as missing solder) are detected by the YOLO deep learning algorithm. In this way, two-dimensional and three-dimensional images of the microstructures on the workpiece are acquired, and the relation between them is learned by the BP neural network, so that a deep learning model trained on the two-dimensional images improves the three-dimensional measurement performance of PMP and yields a more accurate image; the relevant defects are then detected by the YOLO deep learning algorithm, achieving fast recognition and inspection of the three-dimensional structure. Combining the three-dimensional reconstruction technique with the deep learning algorithm requires only training on the data set, without solving the related functions analytically, which improves detection precision and speed while reducing the computational load.
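As an illustration, a trained darknet model can be run on an image to be detected with OpenCV's DNN module; the file paths, input size, class meaning, and thresholds below are assumptions for the sketch, not values prescribed by this embodiment:

```python
import cv2
import numpy as np

# Load the trained darknet config and weights (paths are illustrative).
net = cv2.dnn.readNetFromDarknet("cfg/yolov3.cfg", "backup/yolov3_final.weights")
out_layers = net.getUnconnectedOutLayersNames()

def detect_defects(image, conf_threshold=0.5, nms_threshold=0.4):
    h, w = image.shape[:2]
    # YOLOv3 expects a square, normalized input blob (416x416 is typical).
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(out_layers)

    boxes, confidences, class_ids = [], [], []
    for out in outputs:
        for det in out:                 # det = [cx, cy, bw, bh, obj, classes...]
            scores = det[5:]
            cls = int(np.argmax(scores))
            conf = float(scores[cls])
            if conf > conf_threshold:
                cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2),
                              int(bw), int(bh)])
                confidences.append(conf)
                class_ids.append(cls)   # e.g. a "missing solder" class (assumed)

    # Non-maximum suppression removes overlapping duplicate boxes.
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
    return [(boxes[i], class_ids[i], confidences[i])
            for i in np.array(keep).flatten()]
```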
Further, in this embodiment, after defect detection is performed, the defect detection result is output so that inspectors can learn of the defects in time. Optionally, the result may be output by voice broadcast, as text, or in other ways, and a defect report may also be output; no specific limitation is imposed here.
In this embodiment, an industrial camera, an industrial computer, and the like replace manual labor for fast operation, which improves detection speed and efficiency and reduces detection cost in batch processing. At the same time, highly accurate recognition of the detection target is achieved through machine vision and deep learning. The method can also be applied to other industrial fields: only the model structure needs to be rebuilt and a suitable data set prepared to retrain the model, yielding a new defect detection model, so the method has strong extensibility.
In this embodiment, the performance of the defect detection method was also verified by a specific experiment. Fig. 5 is a schematic diagram of a two-dimensional image according to an embodiment of the present invention; specifically, it shows a two-dimensional image of a chip after preprocessing steps such as graying, filtering, sharpening, and pin frame selection. Fig. 6 is a schematic diagram of a three-dimensional image according to an embodiment of the present invention, and fig. 7 is an enlarged view of a solder joint in fig. 6. Specifically, the three-dimensional image in fig. 6 is an image to be detected, and the height of the solder joint in fig. 6 was measured by the different methods so that they could be compared. The measurements of three solder paste prints obtained by the two methods are shown in the following table:
sample numbering
|
1
|
2
|
3
|
Mean value of laser triangulation
|
152.0
|
157.5
|
152.5
|
Standard deviation of laser triangulation
|
362.4
|
479.5
|
514.8
|
Mean value of the method of the present example
|
155.0
|
162.5
|
157.5
|
Standard deviation of the method of this example
|
122.6
|
142.7
|
98.3 |
As can be seen from the table, the standard deviation of the data obtained by laser triangulation is larger than that obtained by the method of this embodiment, and the mean height measured by laser triangulation is smaller than that measured by the method of this embodiment. The method of this embodiment thus improves the measurement accuracy.
Exemplary device
As shown in fig. 8, an embodiment of the present invention further provides a defect detecting apparatus corresponding to the defect detecting method, where the defect detecting apparatus includes:
and a target neural network obtaining module 510, configured to obtain a target neural network.
Here, the target neural network is a neural network that has been trained in advance. In this embodiment, correspondingly paired two-dimensional and three-dimensional images can be trained based on the target neural network to obtain a more accurate three-dimensional image, thereby improving the accuracy of defect detection.
A model training set obtaining module 520, configured to acquire a two-dimensional image to be trained and a three-dimensional image to be trained, and to train the two-dimensional image to be trained and the three-dimensional image to be trained based on the target neural network to obtain a model training set.
The two-dimensional image to be trained and the three-dimensional image to be trained are the images input into the target neural network. After they are input, the target neural network trains on them and optimizes them to obtain an accurate three-dimensional image; the model training set is then obtained from the set of accurate three-dimensional images so produced.
A target defect detection model obtaining module 530, configured to acquire a defect detection model to be trained, and to train the defect detection model to be trained based on the model training set to obtain a target defect detection model.
The defect detection model to be trained is a model that can be used for defect detection after further training; a machine-learning target detection model, such as a YOLO model, can be selected for this purpose. Training it on the model training set obtained from the accurate three-dimensional images improves its defect detection performance and yields the target defect detection model. The target defect detection model is the defect detection model after training is completed, and can be used directly for defect detection.
A defect detection module 540, configured to acquire an image to be detected and perform defect detection based on the target defect detection model and the image to be detected.
The image to be detected is a three-dimensional image of the object to be detected (e.g., an electronic component), and can be obtained by PMP three-dimensional measurement: the measured object modulates the projected light source, the phase change of that modulation is calculated from the gray levels of the acquired images, and the three-dimensional height is finally computed. The image to be detected is input into the target defect detection model, which performs target recognition on the electronic components in the image, thereby realizing defect detection.
As can be seen from the above, in the defect detection apparatus provided in the embodiment of the present invention, a target neural network is obtained by the target neural network obtaining module 510; a two-dimensional image to be trained and a three-dimensional image to be trained are acquired by the model training set obtaining module 520 and trained based on the target neural network to obtain a model training set; a defect detection model to be trained is acquired by the target defect detection model obtaining module 530 and trained based on the model training set to obtain a target defect detection model; and an image to be detected is acquired by the defect detection module 540, and defect detection is performed based on the target defect detection model and the image to be detected. Compared with the prior-art approach in which inspectors detect surface defects visually, the present invention detects defects automatically by combining a neural network with a defect detection model, which helps improve the accuracy and efficiency of defect detection. Meanwhile, the two-dimensional and three-dimensional images to be trained are trained by the target neural network to obtain the model training set, and the target defect detection model is obtained by training on that set, so the detection precision of the target defect detection model is improved and the defect detection accuracy is further improved.
In this embodiment, for the specific functions of the defect detection apparatus and its modules, reference may also be made to the corresponding descriptions in the defect detection method, which are not repeated here.
Based on the above embodiments, the present invention further provides an intelligent terminal, whose schematic block diagram may be as shown in fig. 9. The intelligent terminal comprises a processor, a memory, a network interface, and a display screen connected through a system bus. The processor of the intelligent terminal provides computing and control capabilities. The memory of the intelligent terminal comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a defect detection program, and the internal memory provides an environment for running them. The network interface of the intelligent terminal is used to connect and communicate with external terminals through a network. The defect detection program, when executed by the processor, implements the steps of any of the defect detection methods described above. The display screen of the intelligent terminal may be a liquid crystal display or an electronic ink display.
It will be understood by those skilled in the art that the block diagram of fig. 9 is only a block diagram of a part of the structure related to the solution of the present invention, and does not constitute a limitation to the intelligent terminal to which the solution of the present invention is applied, and a specific intelligent terminal may include more or less components than those shown in the figure, or combine some components, or have different arrangements of components.
In one embodiment, an intelligent terminal is provided, where the intelligent terminal includes a memory, a processor, and a defect detection program stored in the memory and executable on the processor, and the defect detection program performs the following operations when executed by the processor:
acquiring a target neural network;
acquiring a two-dimensional image to be trained and a three-dimensional image to be trained, and training the two-dimensional image to be trained and the three-dimensional image to be trained based on the target neural network to obtain a model training set;
acquiring a defect detection model to be trained, and training the defect detection model to be trained based on the model training set to obtain a target defect detection model;
and acquiring an image to be detected, and detecting the defects based on the target defect detection model and the image to be detected.
The embodiment of the present invention further provides a computer-readable storage medium, where a defect detection program is stored on the computer-readable storage medium, and when the defect detection program is executed by a processor, the steps of any one of the defect detection methods provided in the embodiments of the present invention are implemented.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned functions may be distributed as different functional units and modules according to needs, that is, the internal structure of the apparatus may be divided into different functional units or modules to implement all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art would appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the above modules or units is only one logical division, and the actual implementation may be implemented by another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed.
The integrated modules/units described above, if implemented as software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the contents contained in the computer-readable storage medium may be increased or decreased as required by legislation and patent practice in the jurisdiction.
The above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the embodiments of the present invention and should be construed as being included therein.