
CN114202489B - PCB board mark point reflective spot segmentation method based on deep learning - Google Patents

PCB board mark point reflective spot segmentation method based on deep learning

Info

Publication number
CN114202489B
CN114202489B (application CN202111273505.1A; publication CN114202489A)
Authority
CN
China
Prior art keywords
image
pcb board
picture
pixel
pcb
Prior art date
Legal status
Active
Application number
CN202111273505.1A
Other languages
Chinese (zh)
Other versions
CN114202489A (en)
Inventor
宋建华
余佳文
王业率
王时绘
张龑
杨超
何鹏
马传香
吕顺营
朱荣钊
Current Assignee
Hubei University
Original Assignee
Hubei University
Priority date
Filing date
Publication date
Application filed by Hubei University filed Critical Hubei University
Priority to CN202111273505.1A priority Critical patent/CN114202489B/en
Publication of CN114202489A publication Critical patent/CN114202489A/en
Application granted granted Critical
Publication of CN114202489B publication Critical patent/CN114202489B/en


Classifications

    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N 3/04: Neural networks; architecture, e.g. interconnection topology
    • G06N 3/08: Neural networks; learning methods
    • G06T 3/4023: Scaling of whole images or parts thereof based on decimating pixels or lines of pixels, or on inserting pixels or lines of pixels
    • G06T 5/70: Denoising; smoothing
    • G06T 7/136: Segmentation; edge detection involving thresholding
    • G06T 7/187: Segmentation; edge detection involving region growing, region merging, or connected component labelling
    • G06T 2207/20221: Image fusion; image merging
    • G06T 2207/30141: Industrial image inspection; printed circuit board [PCB]
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep-learning-based method for segmenting reflective light spots at Mark points on a PCB, comprising the following steps: mark all reflective pixels in a grayscale image of a PCB Mark point at the same coordinate positions of another grayscale image to obtain an annotated picture; preprocess the PCB pictures and the corresponding annotated pictures to obtain a training set and a test set; filter and denoise the PCB pictures, and scale the PCB pictures and annotated pictures by the same ratio; meanwhile, normalize the gray values of all pixels of the PCB pictures; generate a spatial weight array from the preprocessed PCB pictures and annotated pictures; segment the PCB pictures using the spatial weight array, the annotated pictures, and a neural network; and transfer the segmentation result onto the PCB picture to cover the reflective regions. The invention eliminates the reflections produced when a ring light source illuminates the photosensitive film on a PCB, and improves Mark point recognition accuracy.

Description

PCB Mark point reflection light spot segmentation method based on deep learning
Technical Field
The invention belongs to the technical field of printed circuit board manufacturing, and particularly relates to a deep-learning-based method for segmenting reflective light spots at Mark points on a PCB.
Background
Fiducial (Mark) points are positioning marks on a printed circuit board (PCB); common shapes include circles, diamonds, triangles, and crosses. During PCB production, a layer of copper foil is usually already attached to the board before the photosensitive film is applied. Under ring-light illumination, the unevenness of the photosensitive film and the specular reflection of the copper foil cause the industrial camera to capture light spots of various shapes inside the Mark points of the PCB picture, degrading recognition. As the variety and volume of PCBs produced increase year by year, reducing the influence of reflections is extremely important to the PCB production process.
Disclosure of Invention
Aiming at the problem that Mark points in PCB images contain light spots of various shapes, the invention provides a deep-learning-based method for segmenting the reflective light spots at PCB Mark points, with the goal of eliminating the reflections produced when a ring light source illuminates the photosensitive film on the PCB and improving Mark point recognition accuracy.
The invention provides a deep-learning-based method for segmenting reflective light spots at PCB Mark points, comprising the following steps:
S1, acquiring a grayscale image of a PCB Mark point, and marking all reflective pixels of the grayscale image at the same coordinate positions of another grayscale image to obtain an annotated picture;
S2, preprocessing the grayscale images of the PCB Mark points and the corresponding annotated pictures to obtain a training set and a test set;
filtering and denoising the PCB pictures, and scaling the PCB pictures and annotated pictures by the same ratio so that the picture size equals the input size defined by the neural network;
meanwhile, normalizing the gray values of all pixels of the PCB pictures to the [0,1] interval, computed by dividing every pixel's gray value by 255;
S3, generating a spatial weight array from the preprocessed PCB pictures and annotated pictures;
S4, segmenting the PCB pictures using the spatial weight array, the annotated pictures, and the neural network;
S5, transferring the segmentation result onto the PCB picture to cover the reflective regions.
Further, in step S1, all marked pixels in the annotated picture are connected into several continuous closed regions, where background pixels have a gray value of 0 and reflective pixels have a gray value of 1.
Further, in step S2, the training set and test set are obtained as follows:
remove PCB pictures containing artifacts or blur together with their annotated pictures, then randomly draw one portion of the PCB pictures and corresponding annotated pictures as the training set and use the remaining portion as the test set.
Further, in step S2, the PCB pictures are filtered and denoised by scanning all pixels of the image with an n×n filter and replacing each center pixel with the weighted average of the pixels in the neighborhood determined by the mask and the mask values at the corresponding positions, where n is a positive integer.
Further, in step S2, during the same-ratio scaling of the PCB picture and the annotated picture, if the scaled picture's length or width is smaller than that of the picture defined by the neural network, so that the picture area falls short of the defined area, the deficient part is filled with the background color, making the final picture the same size as the input picture defined by the neural network.
Further, in step S3, generating the spatial weight array from the preprocessed PCB picture and annotated picture specifically comprises:
S31, preprocessing the annotated picture: find the arc edge of the circular hole region in the annotated picture, fit a circle to form a circular contour circle1 of radius r, then draw a circular contour circle2 of radius (1+μ%)·r, where μ is a threshold; select the region enclosed by circle2 as the circular ROI;
S32, first compute the maximum distance lmax from the edge of the circular ROI to its center point and divide this distance into n segments of length lmax/n each, so that the distance d between a pixel in the i-th segment and the center point satisfies:
(i-1)·lmax/n ≤ d < i·lmax/n
then compute the distance from every reflective pixel inside the circular ROI to the ROI center, classify the reflective pixels by this distance, assigning those whose distance falls in the i-th segment to the i-th class, and count the number of pixels in each class;
S33, define a spatial weight array A of length n, in which the element at array index i equals the pixel count of class i+1, and finally normalize the weight array so that all values lie between 0 and 1;
with X a weight value in the array, and Xmax, Xmin, Xnorm the maximum, minimum, and normalized values of the spatial weight array, the normalization is:
Xnorm = (X - Xmin) / (Xmax - Xmin)
S34, replace the original weight values of the array with Xnorm to generate the new spatial weight array.
Further, the value of μ ranges from 10 to 20.
Further, step S34 is followed by:
S35, if the last weight value An of the new weight array is 0, remove An, reduce the array dimension by 1 so that the number of classes becomes n-1, and repeat this operation until the last weight value is nonzero.
Further, in step S4, segmenting the PCB picture using the spatial weight array, the annotated picture, and the neural network comprises the following steps:
S41, constructing a U-shaped neural network:
the U-shaped neural network comprises a left-side unit, a right-side unit, and a convolution unit connecting them;
the left-side unit comprises four compression-path units; between two adjacent compression-path units, two convolution operations and a pooling operation are performed in sequence;
the right-side unit comprises four expansion-path units; between two adjacent expansion-path units, an upsampling operation, a concatenation operation, two convolution operations, and the squeeze-and-excitation operation of the squeeze-and-excitation (SE) module are performed in sequence;
the convolution unit applies two convolution operations to the output of the left-side unit and passes the result to the right-side unit;
a feature map X is input to the SE module; X is convolved to obtain feature map U; U is convolved again to obtain feature map V; U also passes through global average pooling (the squeeze operation), a fully connected operation, and a sigmoid activation function to produce a 1×1×C feature vector; finally, the 1×1×C feature vector is multiplied with feature map V, exciting important channels and suppressing unimportant ones;
S42, inputting the PCB picture into the U-shaped neural network of S41, which computes a prediction map whose size equals that of the annotated picture, and determining whether each predicted value equals the gray value at the same position of the annotated picture;
if they differ, a loss arises: the cross-entropy loss function measures the discrepancy between the prediction map and the annotated map as loss L1, and the discrepancy between the annotated map and the prediction map combined with the spatial weight array serves as loss L2, computed as follows:
with (α1_1, α2_1), (α1_2, α2_2), (α1_3, α2_3), …, (α1_k, α2_k) as the loss-function weights of the k network models, the weight pairs all distinct so that each learner has a different preference, the loss L of the k models is:
L = α1_i·L1 + α2_i·L2 (where i = 1, 2, 3, …, k)
L2 is computed as follows:
if a pixel is predicted to have no reflection (predicted value 0) while the pixel at the same position in the annotated picture is reflective (gray value 1), the pixel is a false negative FN; if a pixel is predicted reflective (predicted value 1) while the annotated picture shows no reflection there (gray value 0), it is a false positive FP; loss arises only for false negatives and false positives. Let y_pred be the predicted value of a pixel and y_true the gray value of the pixel at the same position in the annotated picture, and let the pixel belong, by the method of S32, to the j-th segment with spatial weight value A_j; then L2 is:
L2 = (y_true - y_pred)² · ln(1 + A_j)
S43, updating the parameters of the U-shaped neural network by back-propagation, and repeating the above steps to obtain the optimized U-shaped neural network model.
Further, in S5, transferring the segmentation result onto the PCB picture to cover the reflective regions specifically comprises:
during segmentation, the picture is fed into the k models to obtain k prediction maps, so each pixel has k predicted values; majority voting over these values determines whether the pixel is a reflective pixel;
the coordinates identified as reflective pixels in the segmentation result are transferred to the PCB picture, and the gray values of the pixels at those coordinates are set to 0, covering the reflective light spots inside the Mark points on the PCB picture.
Compared with the prior art, the scheme of the invention has the following advantages:
The method eliminates the reflections produced when a ring light source illuminates the photosensitive film on the PCB, and improves Mark point recognition accuracy. In addition, image acquisition is simple, open-source functions exist for the preprocessing and for building the neural network, the design process is modular, and the method is simple and easy to use. Segmentation runs automatically on an industrial computer without manual involvement, reducing operation time, so segmentation is both fast and accurate. Because the neural network has strong learning ability, given enough PCB pictures it can learn to segment various hard-to-recognize regions. The method suits a variety of reflection-segmentation applications; the units and modules in the network model are broadly applicable and need no major structural adjustment, giving the method a wide scope of application.
Drawings
FIG. 1 is a PCB board picture;
FIG. 2 is an annotated picture;
FIG. 3 is a pixel-segment diagram of a PCB picture;
FIG. 4 is a diagram of the spatial weight array;
FIG. 5 is a diagram of the squeeze-and-excitation module;
FIG. 6 is a diagram of a compression-path unit;
FIG. 7 is a diagram of an expansion-path unit;
FIG. 8 is a diagram of the convolution unit;
FIG. 9 is the complete network architecture diagram;
FIG. 10 is a result diagram.
Detailed Description
In order that the above-recited objects, features and advantages of the present application will be more readily understood, a more particular description of the application will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. It should be noted that, without conflict, the embodiments of the present application and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways than those described herein, and therefore the scope of the present invention is not limited to the specific embodiments disclosed below.
The embodiment provides a deep-learning-based method for segmenting reflective light spots at PCB Mark points, comprising the following steps:
S1, acquire a grayscale image of a PCB Mark point and mark all reflective pixels of the grayscale image at the same coordinate positions of another grayscale image to obtain an annotated picture, as follows:
First, a grayscale image of the PCB Mark point is captured with a camera or other image-capturing device, as shown in FIG. 1. Using an annotation tool (such as Photoshop or labelme), all reflective pixels inside the Mark point of the PCB picture are marked at the same coordinate positions of another grayscale image; this grayscale image is the annotated picture. All marked pixels connect into several continuous closed regions, as shown in FIG. 2; background pixels have a gray value of 0 and reflective pixels a gray value of 1.
S2, preprocess the grayscale images of the PCB Mark points and their corresponding annotated pictures to obtain a training set and a test set, as follows:
Remove PCB pictures that contain artifacts or blur, which would impair neural network training, together with their annotated pictures; then randomly take 80% of the PCB pictures and corresponding annotated pictures as the training set and the remaining 20% as the test set;
Filter and denoise the PCB pictures: scan all pixels of the image with an m×m filter (m a positive integer, e.g. m = 5) and replace the value of each center pixel with the weighted average of the pixels in the neighborhood determined by the mask and the mask values at the corresponding positions;
Scale the PCB pictures and annotated pictures by the same ratio (usually downscaling, to reduce the neural network's computation) to facilitate learning by the deep network. If the scaled picture's length or width is smaller than that defined by the neural network, so that the picture area falls short of the defined area, the deficient part is filled with the background color; the scaled PCB picture and annotated picture then match the picture size defined by the neural network;
Normalize the gray values of all pixels of the PCB pictures to values between 0 and 1 by dividing each pixel's gray value by 255.
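The preprocessing of S2 can be sketched as follows in Python with OpenCV and NumPy; the 512×512 network input size, the mask size m = 5, and the function names are illustrative assumptions rather than part of the embodiment:

```python
import cv2
import numpy as np

NET_SIZE = 512  # assumed network-defined input size (hypothetical)

def preprocess(pcb_gray, label_gray, m=5):
    # An m x m weighted-average (Gaussian) mask scans the image and replaces
    # each center pixel with the weighted mean of its neighborhood.
    denoised = cv2.GaussianBlur(pcb_gray, (m, m), 0)

    # Scale the PCB picture and the annotated picture by the same ratio.
    scale = NET_SIZE / max(denoised.shape[:2])
    h, w = [int(round(s * scale)) for s in denoised.shape[:2]]
    pcb = cv2.resize(denoised, (w, h), interpolation=cv2.INTER_AREA)
    lab = cv2.resize(label_gray, (w, h), interpolation=cv2.INTER_NEAREST)

    # Fill any deficit with the background color (black) to reach NET_SIZE.
    pcb = cv2.copyMakeBorder(pcb, 0, NET_SIZE - h, 0, NET_SIZE - w,
                             cv2.BORDER_CONSTANT, value=0)
    lab = cv2.copyMakeBorder(lab, 0, NET_SIZE - h, 0, NET_SIZE - w,
                             cv2.BORDER_CONSTANT, value=0)

    # Normalize gray values to [0, 1] by dividing by 255.
    return pcb.astype(np.float32) / 255.0, lab
```

Nearest-neighbor interpolation is used for the annotated picture so that its 0/1 gray values are preserved during scaling.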
S3, generate the spatial weight array from the preprocessed PCB pictures and annotated pictures, as follows:
S31, preprocess the annotated picture: find the arc edge of the circular hole region in the annotated picture and fit a circle, via OpenCV or another image-processing library, to form a circular contour circle1 of radius r. Then draw circle2 with radius (1+μ%)·r, where μ is a threshold, typically 10 to 20. Select the region enclosed by circle2 as the circular ROI.
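As an illustrative sketch of S31 (not part of the embodiment), the circular ROI can be obtained with OpenCV by fitting the dominant contour and enlarging it by μ%; the contour-selection heuristic and the function names are assumptions:

```python
import cv2
import numpy as np

def circular_roi(annot, mu=15):
    # Find the arc edges of the circular hole region in the annotated picture
    # and take the largest contour as the hole boundary (a simplification).
    contours, _ = cv2.findContours(annot.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hole = max(contours, key=cv2.contourArea)

    # Fit circle1 of radius r, then draw circle2 of radius (1 + mu%) * r.
    (x0, y0), r = cv2.minEnclosingCircle(hole)
    r2 = (1.0 + mu / 100.0) * r

    # The region enclosed by circle2 is the circular ROI.
    roi = np.zeros_like(annot, dtype=np.uint8)
    cv2.circle(roi, (int(x0), int(y0)), int(round(r2)), 1, thickness=-1)
    return (x0, y0), r2, roi
```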
S32, first compute the maximum distance lmax from the edge of the circular ROI to its center point (i.e., the ROI radius) and divide this distance into n segments of length lmax/n each; a pixel lies in the i-th segment when its distance d from the center point satisfies:
(i-1)·lmax/n ≤ d < i·lmax/n
The segments covering all pixels are shown in FIG. 3.
Compute the distance from every reflective pixel in the circular ROI to the ROI center. Classify the reflective pixels by this distance, assigning those whose distance falls in the i-th segment to the i-th class, and count the pixels in each class. With (x, y) the coordinates of a reflective pixel in the circular ROI and (x_0, y_0) the coordinates of the ROI center, the distance D from the reflective pixel to the ROI center is:
D = √((x - x_0)² + (y - y_0)²)
S33, define a spatial weight array A of length n, in which the element at array index i equals the pixel count of class i+1; finally normalize the weight array so that all values lie between 0 and 1. With X a weight value in the array, and Xmax, Xmin, Xnorm the maximum, minimum, and normalized values of the spatial weight array, the normalization is:
Xnorm = (X - Xmin) / (Xmax - Xmin)
S34, Xnorm replaces the original weight values of the array; the resulting spatial weight array is shown in FIG. 4, where Ai denotes the value of the i-th element (i.e., Xnorm), the spatial weight of all pixels located in the i-th segment of the PCB picture.
S35, if the last weight value of the weight array, An, is 0, remove An, reduce the array dimension by 1, and reduce the number of classes to n-1. Repeat this operation until the last weight value is nonzero.
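Steps S32 to S35 can be sketched as follows, assuming the ROI center, the maximum distance lmax, and a hypothetical segment count n = 10 are given; a small epsilon guards the min-max division:

```python
import numpy as np

def spatial_weights(annot, roi, center, l_max, n=10):
    # Distances D = sqrt((x - x0)^2 + (y - y0)^2) of all reflective pixels
    # (gray value 1) inside the circular ROI to the ROI center point.
    ys, xs = np.nonzero((annot == 1) & (roi == 1))
    x0, y0 = center
    d = np.sqrt((xs - x0) ** 2 + (ys - y0) ** 2)

    # A pixel lies in segment i when (i-1)*l_max/n <= d < i*l_max/n;
    # count the reflective pixels falling into each of the n classes.
    seg = np.minimum((d * n / l_max).astype(int), n - 1)
    counts = np.bincount(seg, minlength=n).astype(np.float64)

    # Min-max normalization: Xnorm = (X - Xmin) / (Xmax - Xmin).
    spread = max(counts.max() - counts.min(), 1e-12)  # guard division by zero
    a = (counts - counts.min()) / spread

    # S35: drop trailing zero weights, shrinking the number of classes.
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a
```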
S4, segment the PCB pictures using the spatial weight array, the annotated pictures, and the neural network, as follows:
S41, construct a U-shaped neural network:
The U-shaped neural network (U-Net) includes left-side and right-side units, and a convolution unit connecting them, as shown in FIG. 9.
The left-side unit includes four compression-path units, each containing one Max-Pooling operation and two Conv (convolution) operations (see FIG. 6); the right-side unit consists of four expansion-path units, each containing an upsampling operation, a Concat (concatenation) operation, two Conv operations, and an SE (squeeze-and-excitation) module (see FIG. 7).
The squeeze-and-excitation module is shown in FIG. 5: the input feature map X is convolved to obtain feature map U; U is convolved again to obtain feature map V; U also passes through GAP (global average pooling, the squeeze operation), an FC (fully connected) operation, and a sigmoid activation function to yield a 1×1×C feature vector. Finally, the 1×1×C feature vector is multiplied with feature map V, exciting important channels and suppressing unimportant ones. The convolution unit consists of two convolution operations connecting the compression path and the expansion path (see FIG. 8).
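A sketch of the squeeze-and-excitation module in PyTorch follows the description above; the reduction ratio of 16 is an assumed hyperparameter, and deriving the channel vector from U while applying it to V follows the patent's variant (standard SE pools and reweights the same map):

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Channel attention: squeeze (GAP) -> excite (FC + sigmoid) -> reweight."""

    def __init__(self, channels, reduction=16):  # reduction ratio is assumed
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling to 1x1xC
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # 1x1xC vector of channel weights
        )

    def forward(self, u, v):
        # The patent derives the channel vector from feature map U and
        # multiplies it with feature map V (standard SE uses one map for both).
        b, c, _, _ = u.shape
        w = self.fc(self.gap(u).view(b, c)).view(b, c, 1, 1)
        return v * w  # excite important channels, suppress unimportant ones
```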
S42, input the PCB pictures into the U-shaped neural network of S41; the network computes a prediction map of the same size as the annotated picture. Training uses ensemble learning: k network models are trained, each as follows. A training-set PCB picture is input to the network, which outputs a prediction map; wherever a predicted value differs from the gray value at the same position of the annotated map, a loss arises. The loss has two parts: the cross-entropy loss function measures the discrepancy between the prediction map and the annotated map as loss L1, and the discrepancy between the annotated map and the prediction map combined with the spatial weight array serves as loss L2.
(α1_1, α2_1), (α1_2, α2_2), (α1_3, α2_3), …, (α1_k, α2_k) are the loss-function weights of the k network models; the weight pairs are all distinct, so each learner has a different preference. The loss L of the k models is:
L = α1_i·L1 + α2_i·L2 (where i = 1, 2, 3, …, k)   (4)
In each group of loss-function weights, α1 and α2 determine the relative contributions of L1 and L2; each lies in [0,1], and α1 + α2 = 1 within each group.
L2 is computed as follows:
If a pixel is predicted to have no reflection (predicted value 0) while the pixel at the same position in the annotated picture is reflective (gray value 1), the pixel is a false negative (FN); if a pixel is predicted reflective (predicted value 1) while the annotated picture shows no reflection there (gray value 0), it is a false positive (FP). Loss arises only for false negatives and false positives. Let y_pred be the predicted value of a pixel and y_true the gray value of the pixel at the same position in the annotated picture; by the method of S32 the pixel belongs to the j-th segment, whose spatial weight value is A_j. Then L2 is given by equation (5):
L2 = (y_true - y_pred)² · ln(1 + A_j)   (5)
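The combined loss of one of the k learners can be sketched as follows, assuming y_pred is a per-pixel probability map, seg_idx holds each pixel's precomputed segment index j, and A is the spatial weight array as a tensor; averaging L2 over pixels is an assumption:

```python
import torch
import torch.nn.functional as F

def combined_loss(y_pred, y_true, seg_idx, A, alpha1, alpha2):
    # L1: cross-entropy between the prediction map and the annotated map.
    l1 = F.binary_cross_entropy(y_pred, y_true)

    # L2 = (y_true - y_pred)^2 * ln(1 + Aj): only false positives and false
    # negatives contribute, weighted by the spatial weight A[j] of the
    # segment each pixel falls into.
    a_j = A[seg_idx]  # per-pixel spatial weight, same shape as y_pred
    l2 = ((y_true - y_pred) ** 2 * torch.log1p(a_j)).mean()

    # L = alpha1_i * L1 + alpha2_i * L2, with alpha1 + alpha2 = 1 per learner.
    return alpha1 * l1 + alpha2 * l2
```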
S43, finally, update the U-shaped neural network parameters by back-propagation. Repeat the above steps to obtain the optimized U-shaped neural network model.
S5, transfer the segmentation result onto the PCB picture to cover the reflective regions, as follows:
During segmentation, the picture is fed into the k models to obtain k prediction maps, so each pixel has k predicted values; majority voting determines the final value. For example, with k = 3, if two or three models predict a pixel as reflective, the pixel is taken as reflective; otherwise it is taken as background. The coordinates of pixels identified as reflective in the segmentation result are transferred to the PCB picture, and the gray values at those coordinates are set to 0 (black, the background color), covering the reflective light spots inside the Mark points on the PCB picture. As shown in FIG. 10, the reflective light spots inside the Mark points of the PCB picture are covered with black, which benefits the recognition of the positioning coordinates.
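The inference step of S5 can be sketched as follows, assuming k trained models that each return a binarized (0/1) prediction map aligned with the PCB picture; names are illustrative:

```python
import numpy as np

def cover_reflections(pcb_img, binary_preds):
    # binary_preds: list of k prediction maps (0/1 arrays) from the k models.
    k = len(binary_preds)

    # Majority rule: a pixel is reflective if more than half the models agree
    # (for k = 3, this means two or three votes).
    votes = np.sum(binary_preds, axis=0)
    reflective = votes > k / 2

    # Transfer the reflective coordinates to the PCB picture and set their
    # gray value to 0 (black, the background color) to cover the light spots.
    covered = pcb_img.copy()
    covered[reflective] = 0
    return covered
```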
The present invention is not limited to the above-mentioned embodiments, and any changes or substitutions that can be easily understood by those skilled in the art within the technical scope of the present invention are intended to be included in the scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.

Claims (10)

1. A deep-learning-based method for segmenting reflective light spots at Mark points on a PCB, characterized in that the method comprises the following steps:
S1, acquiring a grayscale image of a PCB Mark point, and marking all reflective pixels of the grayscale image at the same coordinate positions of another grayscale image to obtain an annotated picture;
S2, preprocessing the grayscale images of the PCB Mark points and the corresponding annotated pictures to obtain a training set and a test set;
filtering and denoising the PCB pictures, then scaling the PCB pictures and annotated pictures by the same ratio so that the picture size equals the input size defined by the neural network;
meanwhile, normalizing the gray values of all pixels of the PCB pictures to the [0,1] interval, computed by dividing every pixel's gray value by 255;
S3, generating a spatial weight array from the preprocessed PCB pictures and annotated pictures;
S4, segmenting the PCB pictures using the spatial weight array, the annotated pictures, and the neural network;
S5, transferring the segmentation result onto the PCB picture to cover the reflective regions.
2. The method of claim 1, characterized in that in S1, all marked pixels in the annotated picture are connected into several continuous closed regions, where background pixels have a gray value of 0 and reflective pixels have a gray value of 1.
3. The method of claim 1, characterized in that in S2, the training set and test set are obtained as follows: remove PCB pictures containing artifacts or blur together with their annotated pictures, then randomly draw one portion of the PCB pictures and corresponding annotated pictures as the training set and use the remaining portion as the test set.
4. The method of claim 1, characterized in that in S2, the PCB pictures are filtered and denoised by scanning all pixels of the image with an n×n filter and replacing each center pixel with the weighted average of the pixels in the neighborhood determined by the mask and the mask values at the corresponding positions, where n is a positive integer.
5. The method of claim 1, characterized in that in S2, during the same-ratio scaling of the PCB picture and the annotated picture, if the scaled picture's length or width is smaller than that of the picture defined by the neural network, so that the picture area falls short of the defined area, the deficient part is filled with the background color, making the final picture the same size as the input picture defined by the neural network.
6. The method of claim 1, characterized in that in S3, generating the spatial weight array from the preprocessed PCB picture and annotated picture specifically comprises:
S31, preprocessing the annotated picture: find the arc edge of the circular hole region in the annotated picture, fit a circle to form a circular contour circle1 of radius r, then draw a circular contour circle2 of radius (1+μ%)·r, where μ is a threshold; select the region enclosed by circle2 as the circular ROI;
S32, first compute the maximum distance lmax from the edge of the circular ROI to its center point and divide this distance into n segments of length lmax/n each, so that the distance d between a pixel in the i-th segment and the center point satisfies:
(i-1)·lmax/n ≤ d < i·lmax/n
then compute the distance from every reflective pixel in the circular ROI to the ROI center, classify the reflective pixels by this distance, assigning those whose distance falls in the i-th segment to the i-th class, and count the pixels in each class;
S33, define a spatial weight array A of length n, in which the element at array index i equals the pixel count of class i+1, and finally normalize the weight array so that all values lie between 0 and 1;
with X a weight value in the array, and Xmax, Xmin, Xnorm the maximum, minimum, and normalized values of the spatial weight array, the normalization is:
Xnorm = (X - Xmin) / (Xmax - Xmin)
S34, replace the original weight values of the array with Xnorm to generate the new spatial weight array.
7. The method of claim 6, characterized in that μ ranges from 10 to 20.
8. The method of claim 6, characterized in that S34 is followed by:
S35, if the last weight value An of the new weight array is 0, remove An, reduce the array dimension by 1 so that the number of classes becomes n-1, and repeat this operation until the last weight value is nonzero.
9. The method of claim 6, characterized in that in S4, segmenting the PCB picture using the spatial weight array, the annotated picture, and the neural network comprises the following steps:
S41, construct a U-shaped neural network:
the U-shaped neural network comprises a left-side unit, a right-side unit, and a convolution unit connecting them;
the left-side unit comprises four compression-path units; between two adjacent compression-path units, two convolution operations and a pooling operation are performed in sequence;
the right-side unit comprises four expansion-path units; between two adjacent expansion-path units, an upsampling operation, a concatenation operation, two convolution operations, and the squeeze-and-excitation operation of the squeeze-and-excitation module are performed in sequence;
the convolution unit applies two convolution operations to the output of the left-side unit and passes the result to the right-side unit;
a feature map X is input to the squeeze-and-excitation module; X is convolved to obtain feature map U; U is convolved again to obtain feature map V; U also passes through global average pooling (the squeeze operation), a fully connected operation, and a sigmoid activation function to produce a 1×1×C feature vector; finally, the 1×1×C feature vector is multiplied with feature map V, exciting important channels and suppressing unimportant ones;
S42, input the PCB picture into the U-shaped neural network of S41; the network computes a prediction map whose size equals that of the annotated picture, and each predicted value is compared with the gray value at the same position of the annotated picture;
if they differ, a loss arises: the cross-entropy loss function measures the discrepancy between the prediction map and the annotated map as loss L1, and the discrepancy between the annotated map and the prediction map combined with the spatial weight array serves as loss L2, computed as follows:
with (α1_1, α2_1), (α1_2, α2_2), (α1_3, α2_3), …, (α1_k, α2_k) as the loss-function weights of the k network models, the weight pairs all distinct so that each learner has a different preference, the loss L of the k models is:
L = α1_i·L1 + α2_i·L2 (where i = 1, 2, 3, …, k)
L2 is computed as follows:
if a pixel is predicted to have no reflection (predicted value 0) while the pixel at the same position in the annotated picture is reflective (gray value 1), the pixel is a false negative FN; if a pixel is predicted reflective (predicted value 1) while the annotated picture shows no reflection there (gray value 0), it is a false positive FP; loss arises only for false negatives and false positives; let y_pred be the predicted value of a pixel and y_true the gray value of the pixel at the same position in the annotated picture, and let the pixel belong, by the method of S32, to the j-th segment with spatial weight value A_j; then:
L2 = (y_true - y_pred)² · ln(1 + A_j)
S43, update the U-shaped neural network parameters by back-propagation, and repeat the above steps to obtain the optimized U-shaped neural network model.
10. The method of claim 9, characterized in that in S5, transferring the segmentation result onto the PCB picture to cover the reflective regions specifically comprises:
during segmentation, the picture is fed into the k models to obtain k prediction maps, so each pixel has k predicted values; majority voting determines whether the pixel is a reflective pixel;
the coordinates identified as reflective pixels in the segmentation result are transferred to the PCB picture, and the gray values of the pixels at those coordinates on the PCB picture are set to 0, covering the reflective light spots inside the Mark points on the PCB picture.
CN202111273505.1A 2021-10-29 2021-10-29 PCB board mark point reflective spot segmentation method based on deep learning Active CN114202489B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111273505.1A CN114202489B (en) 2021-10-29 2021-10-29 PCB board mark point reflective spot segmentation method based on deep learning


Publications (2)

Publication Number Publication Date
CN114202489A (en) 2022-03-18
CN114202489B (en) 2024-11-26

Family

ID=80646488


Country Status (1)

Country Link
CN (1) CN114202489B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103162192A (en) * 2013-03-18 2013-06-19 晶科电子(广州)有限公司 Direct down type backlight module
AU2020103901A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Image Semantic Segmentation Method Based on Deep Full Convolutional Network and Conditional Random Field

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107345789B (en) * 2017-07-06 2023-06-30 深圳市强华科技发展有限公司 PCB hole position detection device and method
CN206944930U (en) * 2017-07-06 2018-01-30 深圳市强华科技发展有限公司 A kind of pcb board hole location detecting device
CN109711413B (en) * 2018-12-30 2023-04-07 陕西师范大学 Image semantic segmentation method based on deep learning
CN112347950B (en) * 2020-11-11 2024-04-05 湖北大学 Deep learning-based PCB laser target identification method and system


Also Published As

Publication number Publication date
CN114202489A (en) 2022-03-18


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant