CN114202489B - PCB board mark point reflective spot segmentation method based on deep learning - Google Patents
- Publication number
- CN114202489B · CN202111273505.1A
- Authority
- CN
- China
- Prior art keywords
- image
- pcb board
- picture
- pixel
- pcb
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4023—Scaling of whole images or parts thereof, e.g. expanding or contracting based on decimating pixels or lines of pixels; based on inserting pixels or lines of pixels
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/136—Segmentation; Edge detection involving thresholding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30141—Printed circuit board [PCB]
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The invention discloses a deep-learning-based method for segmenting the reflective light spots of Mark points on a PCB board, comprising the following steps: mark all reflective pixels in a gray-scale image of a PCB board Mark point at the same coordinate positions of another gray-scale image to obtain a labeled picture; preprocess the PCB board pictures and the corresponding labeled pictures to obtain a training set and a test set; filter and denoise the PCB board pictures, and scale the PCB board pictures and labeled pictures by the same ratio; meanwhile, normalize the gray values of all pixels of the PCB board pictures; generate a spatial weight array from the preprocessed PCB board pictures and labeled pictures; segment the PCB board pictures using the spatial weight array, the labeled pictures and a neural network; and transfer the segmentation result onto the PCB board picture to cover the reflective areas. The invention eliminates the reflections produced when a ring light source illuminates the photosensitive film on a PCB board, and improves the recognition accuracy of Mark points.
Description
Technical Field
The invention belongs to the technical field of printed circuit board manufacturing, and in particular relates to a deep-learning-based method for segmenting the reflective light spots of Mark points on a PCB board.
Background
Fiducial (Mark) points are positioning marks on a printed circuit board (PCB); common shapes include circles, diamonds, triangles and crosses. During PCB production, a layer of copper foil is usually already attached to the board before the photosensitive film is applied. Under illumination by a ring light source, the unevenness of the photosensitive film and the specular reflection of the copper foil cause the industrial camera to capture light spots of various shapes inside the Mark points of the PCB picture, degrading recognition. As the variety and volume of PCB production grow year by year, reducing the influence of these reflections is extremely important to the PCB manufacturing process.
Disclosure of Invention
Aiming at the problem that Mark points in PCB board pictures contain light spots of various shapes, the invention provides a deep-learning-based method for segmenting the reflective light spots of PCB board Mark points, which aims to eliminate the reflections produced when a ring light source illuminates the photosensitive film on the PCB board and to improve Mark point recognition accuracy.

The method provided by the invention comprises the following steps:
S1, acquiring a gray-scale image of a PCB board Mark point, and marking all reflective pixels of that image at the same coordinate positions of another gray-scale image to obtain a labeled picture;

S2, preprocessing the gray-scale images of the PCB board Mark points and the corresponding labeled pictures to obtain a training set and a test set;

filtering and denoising the PCB board pictures, and scaling the PCB board pictures and labeled pictures by the same ratio, so that their size equals the input size defined by the neural network;

meanwhile, normalizing the gray values of all pixels of the PCB board pictures to the interval [0,1], by dividing the gray value of every pixel by 255;

S3, generating a spatial weight array from the preprocessed PCB board pictures and labeled pictures;

S4, segmenting the PCB board pictures using the spatial weight array, the labeled pictures and the neural network;

S5, transferring the segmentation result onto the PCB board picture to cover the reflective areas.
Further, in step S1, the labeled pixel points in the labeled picture form several continuous closed regions, where background pixels have gray value 0 and reflective pixels have gray value 1.
Further, in step S2, the training set and the test set are obtained as follows:

remove PCB board pictures that are blurred or contain artifacts, together with their labeled pictures; then randomly take one part of the remaining PCB board pictures and their labeled pictures as the training set, and the other part as the test set.
Further, in step S2, the filtering and denoising of the PCB board pictures proceeds as follows: scan all pixels of the image with an n×n filter mask (n a positive integer), replacing each central pixel with the weighted average of the pixels in the neighborhood covered by the mask, the weights being the corresponding mask values.
Further, in step S2, when scaling the PCB board picture and the labeled picture by the same ratio, if the scaled length or width is smaller than the input size defined by the neural network, the missing part is filled with the background color, so that the final picture size equals the input size defined by the neural network.
Further, in step S3, generating the spatial weight array from the preprocessed PCB board picture and labeled picture specifically includes:
S31, preprocessing the labeled picture: find the arc edge of the circular hole region in the labeled picture and fit a circle to it, giving a circular contour circle1 of radius r; draw a second circular contour circle2 of radius (1+μ%)·r, where μ is a threshold; select the region enclosed by circle2 as the circular ROI;
S32, first computing the maximum distance l_max from the edge of the circular ROI to its center point and dividing it into n segments of length l_max/n each; a pixel at distance d from the center point then lies in segment i when

$$\frac{(i-1)\,l_{max}}{n} \le d < \frac{i\,l_{max}}{n}, \qquad i = 1, 2, \dots, n$$
computing the distance from every reflective pixel in the circular ROI to the ROI center point, and classifying the reflective pixels by that distance: pixels whose distance falls in the i-th segment form the i-th class; then counting the number of pixels in each class;
S33, defining a spatial weight array A of length n whose element at index i equals the pixel count of class i+1 (indices starting at 0), and finally normalizing the weight array so that all values lie between 0 and 1;

with X a weight value in the array, and Xmax, Xmin, Xnorm the maximum, the minimum and the normalized value respectively, the normalization is

$$X_{norm} = \frac{X - X_{min}}{X_{max} - X_{min}}$$
S34, replacing the original weight values of the array with the values Xnorm to generate the new spatial weight array.
Further, the value of μ is 10–20.
Further, step S34 is followed by:

S35, if the last weight value A_n of the new weight array is 0, removing it and reducing the array dimension by 1, so that the number of classes becomes n−1; this operation is repeated until the last weight value is nonzero.
Further, in step S4, segmenting the PCB board picture using the spatial weight array, the labeled picture and the neural network comprises the following steps:

S41, constructing a U-shaped neural network:
The U-shaped neural network comprises a left-side unit, a right-side unit, and a convolution unit connecting the two;

the left-side unit comprises four compression path units; between two adjacent compression path units, two convolution operations and one pooling operation are applied in sequence;

the right-side unit comprises four expansion path units; between two adjacent expansion path units, an upsampling operation, a concatenation operation, two convolution operations and the squeeze-excitation operation of a squeeze-and-excitation module are applied in sequence;

the convolution unit applies two convolution operations to the output of the left-side unit and passes the result to the right-side unit;

a feature map X is input to the squeeze-and-excitation module; X is convolved to obtain a feature map U, and U is convolved to obtain a feature map V; meanwhile U passes through global average pooling (the squeeze operation), a fully connected operation and a sigmoid activation function to obtain a 1×1×C feature vector; finally this feature vector is multiplied with the feature map V, exciting important channels and suppressing unimportant ones;
S42, inputting the PCB board picture into the U-shaped neural network of S41 and computing a prediction picture whose size equals that of the labeled picture, then comparing the predicted value of each pixel with the gray value at the same position of the labeled picture;

wherever they differ, a loss is incurred; the degree of difference between the prediction picture and the labeled picture, computed with a cross-entropy loss function, is taken as loss L1, and the degree of difference between the prediction picture combined with the spatial weight array and the labeled picture is taken as loss L2, computed as follows:

(α11, α21), (α12, α22), (α13, α23), …, (α1k, α2k) are taken as the loss-function weight pairs of the k network models; no pair is repeated, so each learner has a different preference. The loss L of the i-th of the k models is

$$L = \alpha_{1i} \cdot L_1 + \alpha_{2i} \cdot L_2, \qquad i = 1, 2, 3, \dots, k$$

L2 is computed as follows:

if a pixel is predicted non-reflective (predicted value 0) while the pixel at the same position of the actual labeled picture is reflective (gray value 1), the pixel is a false negative FN; if a pixel is predicted reflective (predicted value 1) while the labeled picture shows no reflection (gray value 0), the pixel is a false positive FP; a loss arises only at false negatives and false positives. Let y_pred be the predicted value of a pixel, y_true the gray value at the same position of the actual labeled picture, j the segment the pixel belongs to (computed as described in S32), and A_j the spatial weight of that segment; then

$$L_2 = (y_{true} - y_{pred})^2 \cdot \ln(1 + A_j)$$
S43, updating the parameters of the U-shaped neural network by back-propagation, and repeating the above steps in a loop to obtain the optimized U-shaped neural network model.
Further, in S5, transferring the segmentation result onto the PCB board picture to cover the reflective area specifically includes:

at segmentation time, feeding the picture into the k models to obtain k prediction pictures, so that each pixel has k predicted values; each pixel is then decided by majority vote, the minority yielding to the majority, to determine whether it is a reflective pixel;

transferring the coordinate positions identified as reflective pixels in the segmentation result onto the PCB board picture, and setting the gray value at those coordinates to 0 to cover the reflective light spots inside the Mark points of the PCB board picture.
Compared with the prior art, the scheme of the invention has the following beneficial effects:

The method eliminates the reflections produced when a ring light source illuminates the photosensitive film on a PCB board, and improves Mark point recognition accuracy. In addition, image acquisition is simple, preprocessing and network construction can rely on existing open-source functions, and the design is modular, making the method easy to use. Segmentation runs automatically on an industrial computer without manual intervention, reducing operation time, so it is both fast and accurate. Because the neural network has a strong learning ability, given enough PCB board pictures it learns to segment even regions that are otherwise hard to recognize. The method suits a variety of reflection-segmentation applications: the units and modules in the network model are broadly applicable and the structure needs no major adjustment, so the range of application is wide.
Drawings
Fig. 1 is a PCB board picture;

FIG. 2 is a labeled picture;

FIG. 3 is a pixel segment diagram of a PCB board picture;

FIG. 4 is a diagram of the spatial weight array;

FIG. 5 is a diagram of the squeeze-and-excitation module;

FIG. 6 is a diagram of a compression path unit;

FIG. 7 is a diagram of an expansion path unit;

FIG. 8 is a diagram of the convolution unit;

FIG. 9 is the complete network architecture diagram;

Fig. 10 is a result picture.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more readily understood, a more particular description of the application will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
This embodiment provides a deep-learning-based method for segmenting the reflective light spots of PCB board Mark points, comprising the following steps:
S1, acquire a gray-scale image of a PCB board Mark point, and mark all reflective pixels of that image at the same coordinate positions of another gray-scale image to obtain the labeled picture, as follows:

First, obtain a gray-scale image of the PCB board Mark point using a camera or other image-capturing equipment, as shown in Fig. 1. Using a labeling tool (such as Photoshop or labelme), mark all reflective pixels inside the Mark point at the same coordinate positions of another gray-scale image; this gray-scale image is the labeled picture. The labeled pixel points form several continuous closed regions, as shown in Fig. 2; background pixels have gray value 0 and reflective pixels have gray value 1.
S2, preprocess the gray-scale images of the PCB board Mark points and their corresponding labeled pictures to obtain the training set and the test set, as follows:

Remove blurred PCB board pictures and those containing artifacts, together with their labeled pictures, since they harm neural-network training; then randomly take 80% of the PCB board pictures and their labeled pictures as the training set and the remaining 20% as the test set;

Filter and denoise the PCB board pictures: scan all pixels of the image with an m×m filter (m a positive integer, e.g. m = 5), replacing each central pixel with the weighted average of the pixels in the neighborhood covered by the mask, the weights being the corresponding mask values;

Scale the PCB board picture and the labeled picture by the same ratio (usually downscaling, to reduce the computation of the neural network) to facilitate learning by the deep network. If the scaled length or width is smaller than the input size defined by the neural network, fill the missing part with the background color, so that the scaled PCB board picture and labeled picture finally have exactly the input size defined by the neural network;

Normalize the gray values of all pixels of the PCB board picture to values between 0 and 1, by dividing the gray value of every pixel by 255.
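By way of illustration, this preprocessing could be sketched in Python as follows (a minimal sketch using OpenCV/NumPy; the Gaussian mask as the weighted-average filter, m = 5, the 256×256 input size NET_SIZE and the function name preprocess are assumptions for the example, not prescribed by the patent):

```python
import cv2
import numpy as np

NET_SIZE = 256  # assumed network input size; the patent leaves this to the network definition

def preprocess(pcb_gray: np.ndarray, label: np.ndarray):
    # m x m weighted-average mask filtering (here a Gaussian mask, m = 5): each
    # central pixel is replaced by the mask-weighted average of its neighborhood.
    pcb_gray = cv2.GaussianBlur(pcb_gray, (5, 5), 0)

    # Scale both pictures by the same ratio so the longer side fits the network input.
    scale = NET_SIZE / max(pcb_gray.shape[:2])
    new_wh = (round(pcb_gray.shape[1] * scale), round(pcb_gray.shape[0] * scale))
    pcb_gray = cv2.resize(pcb_gray, new_wh, interpolation=cv2.INTER_AREA)
    label = cv2.resize(label, new_wh, interpolation=cv2.INTER_NEAREST)

    # Pad any side shorter than the network input with the background color (0).
    pad_b = NET_SIZE - pcb_gray.shape[0]
    pad_r = NET_SIZE - pcb_gray.shape[1]
    pcb_gray = cv2.copyMakeBorder(pcb_gray, 0, pad_b, 0, pad_r, cv2.BORDER_CONSTANT, value=0)
    label = cv2.copyMakeBorder(label, 0, pad_b, 0, pad_r, cv2.BORDER_CONSTANT, value=0)

    # Normalize gray values to [0, 1] by dividing every pixel by 255.
    return pcb_gray.astype(np.float32) / 255.0, label
```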
S3, generate the spatial weight array from the preprocessed PCB board picture and labeled picture, as follows:

S31, preprocess the labeled picture: using OpenCV or another image-processing library, find the arc edge of the circular hole region in the labeled picture and fit a circle to it, giving a circular contour circle1 of radius r. Draw a second circular contour circle2 of radius (1+μ%)·r, where μ is a threshold that may be taken as 10–20. Select the region enclosed by circle2 as the circular ROI.
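A possible OpenCV sketch of S31 (the minimum-enclosing-circle fit over the largest labeled contour is one simple way to recover circle1 from the arc edge, and μ = 15 is an illustrative default; neither is prescribed by the patent):

```python
import cv2
import numpy as np

def circular_roi(label: np.ndarray, mu: float = 15.0):
    # Fit circle1 to the arc edge of the circular hole region. A simple proxy
    # used here: the minimum enclosing circle of the largest labeled contour.
    contours, _ = cv2.findContours(label.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    (x0, y0), r = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))

    # circle2 has radius (1 + mu%) * r; the region it encloses is the circular ROI.
    r2 = (1.0 + mu / 100.0) * r
    roi_mask = np.zeros(label.shape[:2], dtype=np.uint8)
    cv2.circle(roi_mask, (int(round(x0)), int(round(y0))), int(round(r2)), 1, -1)
    return roi_mask, (x0, y0), r2
```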
S32, first compute the radius l_max of the circular ROI and divide it into n segments of length l_max/n each. A pixel at distance d from the center point lies in segment i when

$$\frac{(i-1)\,l_{max}}{n} \le d < \frac{i\,l_{max}}{n}, \qquad i = 1, 2, \dots, n$$

The pixel segments are illustrated in Fig. 3.

Compute the distance from every reflective pixel in the circular ROI to the ROI center point, classify the reflective pixels by that distance (distances falling in the i-th segment form the i-th class), and count the number of pixels in each class. With (x, y) the coordinates of a reflective pixel in the circular ROI and (x_0, y_0) the coordinates of the ROI center point, the distance D from the reflective pixel to the center point is

$$D = \sqrt{(x - x_0)^2 + (y - y_0)^2}$$
S33, define a spatial weight array A of length n whose element at index i equals the pixel count of class i+1 (indices starting at 0), and finally normalize the weight array so that all values lie between 0 and 1. With X a weight value in the array, and Xmax, Xmin, Xnorm the maximum, the minimum and the normalized value respectively, the normalization is

$$X_{norm} = \frac{X - X_{min}}{X_{max} - X_{min}}$$

S34, replace the original weight values of the array with Xnorm; the resulting spatial weight array is shown in Fig. 4, where Ai denotes the value of the i-th element (i.e. its Xnorm), the spatial weight shared by all pixels of the PCB board picture lying in the i-th segment.

S35, if the last weight value of the array, A_n, is 0, remove it and reduce the array dimension by 1, so that the number of classes becomes n−1. Repeat this operation until the last weight value is nonzero.
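Putting S32–S35 together, a NumPy sketch of the spatial weight array construction (function and argument names are illustrative assumptions):

```python
import numpy as np

def spatial_weight_array(label, roi_mask, center, l_max, n):
    # S32: distance from every reflective pixel inside the ROI to the ROI center,
    # D = sqrt((x - x0)^2 + (y - y0)^2).
    ys, xs = np.nonzero((label == 1) & (roi_mask == 1))
    d = np.hypot(xs - center[0], ys - center[1])

    # A pixel with (i-1)*l_max/n <= d < i*l_max/n belongs to class i; store
    # classes 1..n at 0-based indices 0..n-1 and count the members of each class.
    cls = np.minimum((d * n / l_max).astype(int), n - 1)
    counts = np.bincount(cls, minlength=n).astype(np.float64)

    # S33/S34: min-max normalize, Xnorm = (X - Xmin) / (Xmax - Xmin).
    rng = counts.max() - counts.min()
    A = (counts - counts.min()) / (rng if rng else 1.0)

    # S35: strip trailing zero weights, shrinking the number of classes,
    # until the last weight is nonzero.
    while A.size and A[-1] == 0:
        A = A[:-1]
    return A
```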
S4, segment the PCB board picture using the spatial weight array, the labeled picture and the neural network, as follows:

S41, construct a U-shaped neural network:
a U-shaped neural network (U-Net) comprising a left-side unit, a right-side unit, and a convolution unit connecting the two, as shown in Fig. 9;

The left-side unit comprises four compression path units, each containing one Max-Pooling operation and two Conv (convolution) operations; see Fig. 6 for the compression path unit. The right-side unit comprises four expansion path units, each containing an Upsampling operation, a Concat (concatenation) operation, two Conv operations and an SE (squeeze-and-excitation) module; see Fig. 7.

The squeeze-and-excitation module is shown in Fig. 5: the input feature map X is convolved to obtain a feature map U, and U is convolved to obtain a feature map V; meanwhile U yields a 1×1×C feature vector through a GAP (global average pooling; the squeeze) operation, a fully connected operation and a sigmoid activation function. Finally the 1×1×C feature vector is multiplied with the feature map V, which excites important channels and suppresses unimportant ones. The convolution unit consists of the two convolution operations connecting the compression path and the expansion path; see Fig. 8.
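A minimal PyTorch sketch of this squeeze-and-excitation module (the 3×3 kernels, the reduction ratio and the class name are assumptions for illustration; the patent fixes only the X → U → V flow and the 1×1×C gating vector):

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """X --conv--> U --conv--> V; a 1x1xC vector from U gates the channels of V."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.conv_u = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv_v = nn.Conv2d(channels, channels, 3, padding=1)
        self.gap = nn.AdaptiveAvgPool2d(1)        # squeeze: global average pooling
        self.fc = nn.Sequential(                  # excitation: fully connected layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                         # 1x1xC gating vector in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u = self.conv_u(x)
        v = self.conv_v(u)
        b, c, _, _ = v.shape
        s = self.fc(self.gap(u).view(b, c)).view(b, c, 1, 1)
        # Multiply the gating vector with V: excite important channels,
        # suppress unimportant ones.
        return v * s
```

In the network of Fig. 9, one such module would sit at the end of each expansion path unit, after its two convolutions.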
S42, input the PCB board pictures into the U-shaped neural network of S41 and compute a prediction picture whose size equals that of the labeled picture. Training uses ensemble learning: k network models are trained, each as follows. Input the training-set PCB board pictures into the neural network and output a prediction picture; a loss is incurred wherever a predicted value differs from the gray value at the same position of the labeled picture. The loss has two parts: the degree of difference between the prediction picture and the labeled picture, computed with a cross-entropy loss function, is loss L1, and the degree of difference between the prediction picture combined with the spatial weights and the labeled picture is loss L2.

(α11, α21), (α12, α22), (α13, α23), …, (α1k, α2k) are the loss-function weight pairs of the k network models; no pair is repeated, so each learner has a different preference. The loss L of the i-th model is

$$L = \alpha_{1i} \cdot L_1 + \alpha_{2i} \cdot L_2, \qquad i = 1, 2, 3, \dots, k \tag{4}$$

Within each weight pair, the α values determine the relative contribution of L1 and L2; each lies in [0,1] and α1 + α2 = 1.

L2 is computed as follows:

if a pixel is predicted non-reflective (predicted value 0) while the pixel at the same position of the actual labeled picture is reflective (gray value 1), the pixel is a false negative FN; if a pixel is predicted reflective (predicted value 1) while the labeled picture shows no reflection (gray value 0), the pixel is a false positive FP. Losses arise only at false negatives and false positives. Let y_pred be the predicted value of a pixel, y_true the gray value at the same position of the actual labeled picture, j the segment the pixel belongs to (computed as in S32), and A_j the spatial weight of that segment; then

$$L_2 = (y_{true} - y_{pred})^2 \cdot \ln(1 + A_j) \tag{5}$$
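By way of illustration, the combined loss might look like the following PyTorch sketch (a minimal sketch under assumptions: predictions are probabilities in [0,1], the per-pixel loss is averaged over the image, and seg_index maps each pixel to its segment j from S32; all names are illustrative):

```python
import torch
import torch.nn.functional as F

def model_loss(y_pred, y_true, seg_index, A, alpha1, alpha2):
    """Loss of one ensemble member: L = alpha1 * L1 + alpha2 * L2 (eq. 4)."""
    # L1: cross-entropy between the prediction picture and the labeled picture.
    l1 = F.binary_cross_entropy(y_pred, y_true)

    # L2 = (y_true - y_pred)^2 * ln(1 + A_j) (eq. 5): only false positives and
    # false negatives contribute, weighted by the spatial weight of the segment
    # each pixel falls in (averaged over pixels here).
    a_j = A[seg_index]  # per-pixel spatial weight A_j
    l2 = ((y_true - y_pred) ** 2 * torch.log1p(a_j)).mean()

    return alpha1 * l1 + alpha2 * l2
```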
S43, finally update the parameters of the U-shaped neural network by back-propagation. Repeat the above steps in a loop to obtain the optimized U-shaped neural network model.
S5, transfer the segmentation result onto the PCB board picture to cover the reflective areas, as follows:

At segmentation time, feed the picture into the k models to obtain k prediction pictures, so that each pixel has k predicted values; the final value is decided by majority vote, the minority yielding to the majority. For example, with k = 3, if two or three models predict a pixel as reflective, the pixel is taken as reflective; otherwise it is taken as background. Transfer the coordinate positions of the pixels identified as reflective in the segmentation result onto the PCB board picture, and set the gray value at those coordinates to 0 (black, the background color) to cover the reflective light spots inside the Mark points of the PCB board picture. As shown in Fig. 10, the reflective light spots inside the Mark points of the PCB board picture are covered with black, which benefits the identification of the positioning coordinates.
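A sketch of this vote-and-cover step (thresholding each model's prediction at 0.5 is an assumption; with k = 3, two "reflective" votes suffice):

```python
import numpy as np

def cover_reflections(pcb_img, pred_maps, threshold=0.5):
    # Each of the k models votes per pixel; strict majority rules
    # (for k = 3, two or more "reflective" votes win).
    votes = np.stack([p > threshold for p in pred_maps]).sum(axis=0)
    reflective = votes * 2 > len(pred_maps)

    # Set the gray value at every reflective coordinate to 0 (black background)
    # to cover the light spots inside the Mark point.
    out = pcb_img.copy()
    out[reflective] = 0
    return out
```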
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111273505.1A CN114202489B (en) | 2021-10-29 | 2021-10-29 | PCB board mark point reflective spot segmentation method based on deep learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111273505.1A CN114202489B (en) | 2021-10-29 | 2021-10-29 | PCB board mark point reflective spot segmentation method based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114202489A (en) | 2022-03-18 |
CN114202489B (en) | 2024-11-26 |
Family
ID=80646488
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111273505.1A Active CN114202489B (en) | 2021-10-29 | 2021-10-29 | PCB board mark point reflective spot segmentation method based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114202489B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103162192A (en) * | 2013-03-18 | 2013-06-19 | 晶科电子(广州)有限公司 | Direct down type backlight module |
AU2020103901A4 (en) * | 2020-12-04 | 2021-02-11 | Chongqing Normal University | Image Semantic Segmentation Method Based on Deep Full Convolutional Network and Conditional Random Field |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107345789B (en) * | 2017-07-06 | 2023-06-30 | 深圳市强华科技发展有限公司 | PCB hole position detection device and method |
CN206944930U (en) * | 2017-07-06 | 2018-01-30 | 深圳市强华科技发展有限公司 | A kind of pcb board hole location detecting device |
CN109711413B (en) * | 2018-12-30 | 2023-04-07 | 陕西师范大学 | Image semantic segmentation method based on deep learning |
CN112347950B (en) * | 2020-11-11 | 2024-04-05 | 湖北大学 | Deep learning-based PCB laser target identification method and system |
- 2021-10-29: CN application CN202111273505.1A, patent CN114202489B (en), status Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103162192A (en) * | 2013-03-18 | 2013-06-19 | 晶科电子(广州)有限公司 | Direct down type backlight module |
AU2020103901A4 (en) * | 2020-12-04 | 2021-02-11 | Chongqing Normal University | Image Semantic Segmentation Method Based on Deep Full Convolutional Network and Conditional Random Field |
Also Published As
Publication number | Publication date |
---|---|
CN114202489A (en) | 2022-03-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200133182A1 (en) | Defect classification in an image or printed output | |
CN106875373B (en) | Mobile phone screen MURA defect detection method based on convolutional neural network pruning algorithm | |
CN104463209B (en) | Method for recognizing digital code on PCB based on BP neural network | |
CN108918536B (en) | Tire mold surface character defect detection method, device, equipment and storage medium | |
CN114663346A (en) | Strip steel surface defect detection method based on improved YOLOv5 network | |
CN113592911B (en) | Apparent enhanced depth target tracking method | |
CN108921916B (en) | Method, device and equipment for coloring multi-target area in picture and storage medium | |
CN116563237B (en) | A hyperspectral image detection method for chicken carcass defects based on deep learning | |
CN106096610A (en) | A kind of file and picture binary coding method based on support vector machine | |
CN106846011A (en) | Business license recognition methods and device | |
CN112036454B (en) | An Image Classification Method Based on Multi-core Densely Connected Networks | |
CN111310746B (en) | Text line detection method, model training method, device, server and medium | |
CN111460947B (en) | BP neural network-based method and system for identifying metal minerals under microscope | |
CN113205507A (en) | Visual question answering method, system and server | |
CN113627302A (en) | Method and system for detecting compliance of ascending construction | |
CN113870342A (en) | Appearance defect detection method, intelligent terminal and storage device | |
JP2006048370A (en) | Pattern recognition method, teaching data generation method used therefor, and pattern recognition apparatus | |
CN118446987A (en) | A visual inspection method for corrosion on the inner surface of cabin sections in narrow and confined spaces | |
CN117423040A (en) | Visual garbage identification method for unmanned garbage sweeper based on improved YOLOv8 | |
CN116342536A (en) | Aluminum strip surface defect detection method, system and equipment based on lightweight model | |
CN114202489B (en) | PCB board mark point reflective spot segmentation method based on deep learning | |
CN114219757A (en) | An Intelligent Vehicle Loss Determination Method Based on Improved Mask R-CNN | |
CN110956623B (en) | Wrinkle detection method, wrinkle detection device, wrinkle detection equipment and computer-readable storage medium | |
CN116883313A (en) | Method for rapidly detecting vehicle body paint surface defects, image processing equipment and readable medium | |
CN116452965A (en) | An underwater target detection and recognition method based on acousto-optic fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |