
CN117726579A - Defect detection method, defect detection device, computer equipment and computer readable storage medium - Google Patents

Defect detection method, defect detection device, computer equipment and computer readable storage medium

Info

Publication number
CN117726579A
CN117726579A
Authority
CN
China
Prior art keywords
image
defect
target
area
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311520045.7A
Other languages
Chinese (zh)
Inventor
张建书
何水源
贺继石
刘枢
沈小勇
吕江波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Smartmore Technology Co Ltd
Original Assignee
Shenzhen Smartmore Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Smartmore Technology Co Ltd filed Critical Shenzhen Smartmore Technology Co Ltd
Priority to CN202311520045.7A
Publication of CN117726579A

Classifications

    • Y: General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02: Technologies or applications for mitigation or adaptation against climate change
    • Y02P: Climate change mitigation technologies in the production or processing of goods
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Landscapes

  • Image Analysis (AREA)

Abstract

The application relates to a defect detection method, a defect detection device, computer equipment and a computer readable storage medium. The method comprises the following steps: determining a current tag area from a target detection image comprising at least two tag areas; matching the to-be-detected binarized image of the current tag area's image with the standard binarized image of the area image in the standard image whose content matches the current tag area, to obtain offset information; transforming the to-be-detected binarized image into a target binarized image based on the offset information and the pose information of the current tag area, and differencing the target binarized image against the standard binarized image to obtain a difference image for the current tag area; returning to the step of determining the current tag area until each tag area has a corresponding difference image; combining the difference images to obtain a target difference image corresponding to the target detection image; and determining a defect detection result of the target detection image based on the defect detection information obtained by inputting the target difference image into a tag defect detection model. The method can reduce the defect omission rate.

Description

Defect detection method, defect detection device, computer equipment and computer readable storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a defect detection method, a defect detection device, a computer device, and a computer readable storage medium.
Background
With the development of computer technology, defect detection technology is increasingly applied to product defect detection in multiple fields. For printed-label defect detection on product packaging, the prior art cannot accurately detect defects across multiple label types, such as complex labels or labels of larger size, so the missed detection rate of defects is high.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a defect detection method, apparatus, computer device, and computer-readable storage medium that can accurately detect defects and reduce the missed detection rate of defects.
In a first aspect, the present application provides a defect detection method. The method comprises the following steps:
acquiring a target detection image, wherein the target detection image comprises at least two tag areas, and the tag areas are obtained by dividing based on the image content of the target detection image;
determining a current tag area from each tag area in the target detection image, determining a to-be-detected binarized image corresponding to the image of the current tag area, acquiring a standard binarized image, matching the to-be-detected binarized image with the standard binarized image to obtain offset information, and transforming the to-be-detected binarized image based on the offset information and pose information of the current tag area to obtain the target binarized image; the standard binarization image is a binarization image corresponding to an area image matched with the content of the current label area in the standard image corresponding to the target detection image;
The target binarization image and the standard binarization image are differentiated to obtain a differential image corresponding to the current label area, and the step of determining the current label area from each label area in the target detection image is returned until each label area in the target detection image has a corresponding differential image;
combining the differential images corresponding to the tag areas to obtain a target differential image corresponding to the target detection image, inputting the target differential image into a trained tag defect detection model to obtain defect detection information of the target detection image, and determining a defect detection result of the target detection image based on the defect detection information.
In a second aspect, the present application further provides a defect detection apparatus. The device comprises:
the acquisition module is used for acquiring a target detection image, wherein the target detection image comprises at least two tag areas, and the tag areas are obtained based on image content division of the target detection image;
the matching module is used for determining a current tag area from each tag area in the target detection image, determining a to-be-detected binarized image corresponding to the image of the current tag area, acquiring a standard binarized image, matching the to-be-detected binarized image with the standard binarized image to obtain offset information, and transforming the to-be-detected binarized image based on the offset information and pose information of the current tag area to obtain the target binarized image; the standard binarization image is a binarization image corresponding to an area image matched with the content of the current label area in the standard image corresponding to the target detection image;
The difference module is used for carrying out difference on the target binarization image and the standard binarization image to obtain a difference image corresponding to the current tag area, and returning to the step of determining the current tag area from each tag area in the target detection image until each tag area in the target detection image has the corresponding difference image;
the detection module is used for combining the differential images corresponding to the tag areas to obtain a target differential image corresponding to the target detection image, inputting the target differential image into the trained tag defect detection model to obtain defect detection information of the target detection image, and determining a defect detection result of the target detection image based on the defect detection information.
In a third aspect, the present application provides a computer device comprising a memory storing a computer program and a processor implementing the steps of the method described above when the processor executes the computer program.
In a fourth aspect, the present application provides a computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of the method described above.
In a fifth aspect, the present application provides a computer program product comprising a computer program which, when executed by a processor, implements the steps of the method described above.
According to the defect detection method, the defect detection device, the computer equipment, the computer readable storage medium and the computer program product, at least two tag areas are divided in the target detection image based on its image content, which facilitates more refined detection of tags of various types; dividing the tag areas also distinguishes them from the blank areas of the target detection image, avoiding false detection of defects between blank areas and tag areas. A current tag area is determined from the tag areas in the target detection image, the to-be-detected binarized image corresponding to the image of the current tag area is determined, and a standard binarized image is acquired, the standard binarized image being the binarized image of the area image in the standard image whose content matches the current tag area. The to-be-detected binarized image is matched with the standard binarized image to obtain offset information, and is transformed into the target binarized image based on the offset information and the pose information of the current tag area, so that the position of each piece of tag content in the target binarized image is as close as possible to its position in the standard image, providing higher-precision image data for the subsequent differencing. The target binarized image and the standard binarized image are differenced to obtain the differential image corresponding to the current tag area, and the step of determining the current tag area from the tag areas in the target detection image is repeated until each tag area has a corresponding differential image; differencing the target binarized image against the standard binarized image yields differential images that reflect hidden defects in the tag content of each tag area more accurately and conspicuously. The differential images corresponding to the tag areas are combined to obtain a target differential image that reflects potential defect information for all tag content in the target detection image; the target differential image is input into the trained tag defect detection model, which further identifies and detects defects in the target differential image and outputs defect detection information reflecting the defects of the tag content, and the defect detection result of the target detection image is determined from that defect detection information.
Drawings
Fig. 1 is an application environment diagram of a defect detection method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a defect detection method according to an embodiment of the present disclosure;
FIG. 3 is a block diagram of a defect detecting device according to an embodiment of the present disclosure;
FIG. 4 is an internal block diagram of a computer device according to an embodiment of the present application;
FIG. 5 is an internal block diagram of another computer device according to an embodiment of the present application;
fig. 6 is an internal structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The defect detection method provided by the embodiment of the application can be applied to an application environment shown in fig. 1. Wherein the terminal 102 communicates with the server 104 via a network. The data storage system may store data that the server 104 needs to process. The data storage system may be integrated on the server 104 or may be located on a cloud or other network server. The terminal 102 may be, but not limited to, various personal computers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, where the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart vehicle devices, and the like. The portable wearable device may be a smart watch, smart bracelet, headset, or the like. The server 104 may be implemented as a stand-alone server or as a server cluster of multiple servers.
As shown in fig. 2, an embodiment of the present application provides a defect detection method; for illustration, the method is described as applied to the server 104 in fig. 1. It is understood that the computer device may include at least one of a terminal and a server. The method comprises the following steps:
s200, acquiring a target detection image, wherein the target detection image comprises at least two label areas, and the label areas are obtained based on image content division of the target detection image.
The target detection image is an image used to detect whether a defect exists; it contains the printed label content of the product to be detected, and a defect may exist in a label, around it, or in other blank areas of the product, with no limitation on the specific area. A tag area refers to an area containing label content. A tag area may be a rectangular frame, represented by the coordinates of the frame's upper-left corner together with the frame's length and width. One tag area may include one character or logo, or several; the number of characters or logos divided into a tag area is variable, and the label content of different tag areas does not overlap. The union of the label content of all tag areas covers all the label content of the detected product. One label may include one or more characters and logos, and label content is not limited to characters and logos; it may be any content printed on the detected product.
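As a small illustration of the rectangular tag-area representation described above (the function name and array layout are assumptions for this sketch, not from the patent), a region given by its upper-left corner plus width and height can be cropped from an image array like this:

```python
import numpy as np

def crop_tag_region(image, x, y, w, h):
    """Return the sub-image covered by a tag area given as upper-left
    corner (x, y) plus width w and height h, as described in the text."""
    return image[y:y + h, x:x + w]

# toy 10x10 "image" whose pixel value encodes its position
img = np.arange(100).reshape(10, 10)
patch = crop_tag_region(img, 2, 3, 4, 5)  # region at (2, 3), 4 wide, 5 tall
```

The same (x, y, w, h) tuple can then serve as the pose information (position plus area) referred to later in the method.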
Specifically, in order to detect defects of printed content such as characters or logos in each label more specifically and accurately, tag areas for each piece of label content can be divided over a smaller range based on the image content of the target detection image, giving tag areas that contain different label content. When dividing, the union of the label content of the resulting tag areas must contain all content of the target detection image corresponding to the detected product, so that defect detection covers every label of the detected product and the missed detection rate is reduced. In addition, because labels, and the characters or logos within them, differ in many respects, performing defect detection on the target detection image as a single area makes it difficult to ensure that defects in every character or logo are detected; dividing the image content of the target detection image into different tag areas and detecting each in a targeted way according to its label content therefore reduces the missed detection rate.
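The patent does not spell out how the content-based division is performed; one common way to obtain rectangular regions from image content, shown here purely as an assumed sketch, is to take the bounding box of each connected component of foreground pixels in a binarized image:

```python
from collections import deque

def divide_tag_regions(mask):
    """Divide a binary image (list of 0/1 rows) into rectangular regions:
    one (x, y, w, h) bounding box per 4-connected foreground component."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # BFS over the connected component starting at (sx, sy)
                q = deque([(sy, sx)])
                seen[sy][sx] = True
                y0 = y1 = sy
                x0 = x1 = sx
                while q:
                    y, x = q.popleft()
                    y0, y1 = min(y0, y), max(y1, y)
                    x0, x1 = min(x0, x), max(x1, x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                # region as (upper-left x, upper-left y, width, height)
                regions.append((x0, y0, x1 - x0 + 1, y1 - y0 + 1))
    return regions
```

In practice, nearby components (characters of one word, say) would likely be merged into one tag area; that grouping policy is not specified by the text.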
S202, determining a current tag area from each tag area in a target detection image, determining a to-be-detected binarized image corresponding to the image of the current tag area, acquiring a standard binarized image, matching the to-be-detected binarized image with the standard binarized image to obtain offset information, and transforming the to-be-detected binarized image based on the offset information and pose information of the current tag area to obtain the target binarized image; the standard binarized image is the binarized image corresponding to the area image in the standard image, corresponding to the target detection image, whose content matches the current tag area.
The current tag area refers to the tag area processed in the current traversal cycle. The to-be-detected binarized image is the binarized image of the image corresponding to the current tag area in the target detection image. The standard image is an image of a defect-free product of the same type as the detected product corresponding to the target detection image. The labels in the standard image are free of defects; defects here include bright spots, dark spots, stains, missing strokes or shapes in fonts and logos, redundant strokes or shapes in fonts and logos, and other label quality defects, without being limited to this list. The standard binarized image is the binarized image of the area image in the standard image whose content matches the current tag area: since the standard image corresponds to the same product type as the target detection image, an area image matching the image content of the current tag area can be determined in the standard image, and its binarized image is the standard binarized image of the current tag area. In more detail, the printed content at the same reference position in the product of the standard image and the product of the target detection image matches but is not necessarily identical, because the product in the target detection image may have defects. The tag area corresponding to the standard binarized image is set in the standard image according to the content and size of the current tag area of the target detection image: its label content matches the content of the current tag area, and its length and width equal those of the current tag area. The offset information is the deviation between the position of the tag area corresponding to the to-be-detected binarized image and the position of the tag area corresponding to the standard binarized image. The pose information comprises the position information and the area information of the current tag area, the area information being represented by the current tag area's length and width. The target binarized image is a binarized image whose position and label content correspond to those in the standard image.
Specifically, in order to highlight defects more intuitively, the standard binarized image, corresponding to the area image of the standard image that matches the content of the current tag area, and the to-be-detected binarized image, corresponding to the image of the current tag area, can each be computed based on Otsu's algorithm. To ensure that, during the subsequent differencing, pixels at the same reference position in the to-be-detected binarized image and the standard binarized image correspond to each other, the two images can be matched with a preset template matching algorithm to obtain the offset information of the to-be-detected binarized image relative to the standard binarized image. The position information obtained by fusing the offset information with the position information of the current tag area is used as the position information of the transformed to-be-detected binarized image, and the area information of the current tag area is used as its area information, which ensures that the label content in the transformed to-be-detected binarized image is consistent in position and size with the label content in the standard binarized image. In addition, to make small gray-value differences between the transformed to-be-detected binarized image and the standard binarized image easier to observe, the gray value of each pixel can be amplified by a preset multiple; for example, a gray value of 1 can be mapped to 80. The specific preset multiple can be set according to requirements, but every pixel must have its gray value changed by the same preset multiple.
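The two operations named above (Otsu binarization and template matching for an offset) can be sketched in plain numpy; this is an assumed minimal version, not the patent's exact algorithm, and the exhaustive sum-of-absolute-differences search stands in for whatever preset template matching algorithm an implementation would use:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method for a uint8 image: pick the threshold that maximizes
    the between-class variance of the gray-level histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum = np.cumsum(hist)                      # cumulative pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))  # cumulative gray sums
    best_t, best_var = 0, -1.0
    for t in range(255):
        w0 = cum[t] / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t] / cum[t]
        m1 = (cum_mean[-1] - cum_mean[t]) / (total - cum[t])
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def match_offset(patch, standard):
    """Exhaustively slide `patch` over `standard` and return the (dy, dx)
    offset with the smallest sum of absolute pixel differences."""
    ph, pw = patch.shape
    sh, sw = standard.shape
    best, best_cost = (0, 0), None
    for dy in range(sh - ph + 1):
        for dx in range(sw - pw + 1):
            window = standard[dy:dy + ph, dx:dx + pw].astype(int)
            cost = np.abs(window - patch.astype(int)).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (dy, dx)
    return best
```

The returned (dy, dx) plays the role of the offset information that is fused with the current tag area's position before differencing.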
S204, differentiating the target binarized image and the standard binarized image to obtain a differential image corresponding to the current tag area, and returning to the step of determining the current tag area from each tag area in the target detection image until each tag area in the target detection image has the corresponding differential image.
The differencing means subtracting the gray values of pixels at the same position in the target binarized image and the standard binarized image, and is used to extract bright areas in the binarized images. The differential image is an image reflecting the defects present in the label content of the current label area; obvious defects in label content such as characters or logos can be seen intuitively in the differential image, while finer defects need to be detected later by the label defect detection model.
Specifically, when differencing the target binarized image and the standard binarized image, the gray value of each pixel in the target binarized image and the gray value of the pixel at the same position in the standard binarized image can be subtracted in turn to obtain a gray difference value, and pixels of the target binarized image whose gray difference value is smaller than a preset gray threshold are set to a fixed gray value, yielding a first difference map. Then, in the same way, the gray value of each pixel in the target binarized image is subtracted from the gray value of the pixel at the same position in the standard binarized image, and pixels of the standard binarized image whose gray difference value is smaller than the preset gray threshold are set to the fixed gray value, yielding a second difference map. The first difference map and the second difference map are additively fused to obtain the differential image of the current label area, which reflects defects such as bright points, bright spots, dark spots and scratches in the label content of the target detection image; the fixed gray value may be 0.
Further, the additive fusion of the first difference map and the second difference map includes: cycling through the pixels of the first and second difference maps in turn and comparing the gray values at the same position; when the gray values at the same position differ, the gray value at that position is set to whichever of the two values is not the fixed gray value. For example, if pixel A in the first difference map has gray value 100 and pixel A' at the same position in the second difference map has gray value 0, the gray value at that position is set to 100. Further, in order to detect defects in the target detection image more specifically and accurately, the differential images corresponding to the label areas can be computed in a loop; since the label content differs between label areas, differencing the image of each label area separately is more targeted and avoids missing labels during detection.
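The bidirectional differencing and fusion just described can be sketched as follows; this is one assumed reading of the text (threshold value, function name and the vectorized formulation are choices of this sketch, not the patent's):

```python
import numpy as np

def bidirectional_difference(target, standard, gray_threshold=128, fixed_value=0):
    """Difference two binarized images in both directions, suppress small
    differences to `fixed_value`, and fuse the two maps by keeping, at
    each position, whichever value is not the fixed gray value."""
    t = target.astype(int)
    s = standard.astype(int)
    # first difference map: target minus standard, small differences zeroed
    first = np.where(t - s < gray_threshold, fixed_value, t)
    # second difference map: standard minus target, small differences zeroed
    second = np.where(s - t < gray_threshold, fixed_value, s)
    # additive fusion: where the maps differ, take the non-fixed value
    fused = np.where(first != fixed_value, first, second)
    return fused.astype(np.uint8)
```

On binarized inputs this leaves a bright pixel exactly where the two images disagree, which is what lets the differential image surface both missing and redundant strokes.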
S206, combining the differential images corresponding to the tag areas to obtain a target differential image corresponding to the target detection image, inputting the target differential image into the trained tag defect detection model to obtain defect detection information of the target detection image, and determining a defect detection result of the target detection image based on the defect detection information.
The target differential image is an image reflecting the defects existing inside or outside each label in the target detection image. The label defect detection model is a model for detecting whether a label has a defect, and may be, but is not limited to, a YOLOv5-based convolutional neural network model trained with samples containing label defects. The defect detection information is the defect information detected in the label, including but not limited to the defect type, the defect external contour area, the defect confidence, and the defect external rectangular frame information. The defect detection result is the result of whether the detected product corresponding to the target detection image has defects.
Specifically, the differential image of each label area further highlights the defects in each label in the target detection image. To obtain more accurate defect information for the target detection image, the target differential image corresponding to the target detection image can be input into the trained label defect detection model to obtain defect detection information covering the various types of defects actually present in the target detection image, including the defect type, defect confidence, defect external contour area, defect external rectangular frame and other information, from which the detection result of the target detection image can be determined.
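The combining step can be illustrated as pasting each label area's differential image back at its rectangle on a blank canvas; the names below are assumptions for this sketch, and the call to the trained detection model itself is omitted since it needs a trained network:

```python
import numpy as np

def combine_differential_images(canvas_shape, regions, diffs):
    """Assemble the target differential image: paste each region's
    differential patch at its (x, y, w, h) rectangle on a zero canvas."""
    canvas = np.zeros(canvas_shape, dtype=np.uint8)
    for (x, y, w, h), diff in zip(regions, diffs):
        canvas[y:y + h, x:x + w] = diff
    return canvas
```

The resulting canvas is what would then be fed to the label defect detection model (for example a YOLOv5-based detector) to obtain the defect detection information.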
According to the defect detection method above, at least two label areas are divided in the target detection image based on its image content, which facilitates more refined detection of labels of various types; dividing the label areas also distinguishes them from the blank areas of the target detection image, avoiding false detection of defects between blank areas and label areas. A current label area is determined from the label areas in the target detection image, the to-be-detected binarized image corresponding to the image of the current label area is determined, and a standard binarized image is acquired, the standard binarized image being the binarized image of the area image in the standard image whose content matches the current label area. The to-be-detected binarized image is matched with the standard binarized image to obtain offset information, and is transformed into the target binarized image based on the offset information and the pose information of the current label area, so that the position of each piece of label content in the target binarized image is as close as possible to its position in the standard image, providing higher-precision image data for the subsequent differencing. The target binarized image and the standard binarized image are differenced to obtain the differential image corresponding to the current label area, and the step of determining the current label area from the label areas in the target detection image is repeated until each label area has a corresponding differential image; this differencing yields differential images that reflect hidden defects in the label content of each label area more accurately and conspicuously. The differential images corresponding to the label areas are combined to obtain a target differential image that reflects potential defect information for all label content in the target detection image; the target differential image is input into the trained label defect detection model, which further identifies and detects defects in the target differential image and outputs defect detection information reflecting the defects of the label content, and the defect detection result of the target detection image is determined from that defect detection information.
In some embodiments, S200 comprises:
s300, dividing a plurality of complete tag areas in the standard image, setting mask areas in each complete tag area, and carrying out feature extraction on areas except the mask areas in the complete tag areas based on a preset feature extraction algorithm to obtain standard image features corresponding to the standard image.
S302, transforming the standard image features based on a plurality of groups of rotation angles and scaling factors to obtain a plurality of groups of transformed standard image features, and using the standard image features together with the groups of transformed standard image features as a standard image feature set.
S304, acquiring an image to be detected, extracting the to-be-detected image features corresponding to the image to be detected based on the preset feature extraction algorithm, matching the to-be-detected image features with the image features in the standard image feature set to obtain target matching information and a target matching score, and, when the target matching score is greater than or equal to a matching score threshold, performing a pose transformation operation on the image to be detected based on the target matching information to obtain the target detection image.
A complete label area is an area divided for one complete label, where a complete label comprises a plurality of patterns such as characters or logos; a complete label area covers more label content than a label area, a label area being a finer subdivision of the complete label area. The complete label area is a regular rectangular frame, which can be represented by the upper-left corner coordinates of the rectangular frame together with its length and width. The mask region refers to a shielded region from which no features are extracted; the shape of the mask region may be irregular and can take a wide variety of forms. The preset feature extraction algorithm refers to an algorithm for extracting image features, and can be, but is not limited to, a combination of Gaussian blur, gradient calculation with the Sobel operator, and gradient quantization. The standard image features refer to the image features in the standard image, including but not limited to feature information on each pixel position and the gray value variation in the standard image. The rotation angle refers to a rotation angle used to change the angle information of the image features in the standard image; the plurality of groups of rotation angles are set within a certain rotation angle range and are not set arbitrarily. The scaling factor refers to scaling data used to change the scale information of the image features in the standard image; the plurality of groups of scaling factors are set within a certain scaling factor range and are not set arbitrarily. Transforming the standard image features refers to rotating the standard image features by the predetermined angles and scaling them by the scaling factors.
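The combination of Gaussian blur, Sobel gradients and gradient quantization named above as one possible preset feature extraction algorithm can be sketched as follows. This is a minimal numpy-only illustration: the function name, bin count and feature layout are assumptions, and the Gaussian blur step is presumed to have been applied beforehand.

```python
import numpy as np

def quantized_gradient_features(img, mask=None, n_bins=8):
    """Sobel gradients on a (pre-blurred) grayscale image, with the gradient
    direction quantized into n_bins discrete orientations. Pixels inside
    `mask` (True = masked) are excluded, mirroring the mask-region shielding
    described in the text. Names and layout are illustrative only."""
    img = img.astype(np.float32)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32)  # Sobel x
    ky = kx.T                                                        # Sobel y
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):                      # correlate with both kernels
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                # direction in [-pi, pi]
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    valid = mag > 0
    if mask is not None:
        valid &= ~mask                      # drop features in the mask region
    ys, xs = np.nonzero(valid)
    return [(int(x), int(y), int(bins[y, x])) for y, x in zip(ys, xs)]
```

A vertical edge, for instance, yields only horizontally oriented gradient features, and a full mask yields no features at all.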
The standard image feature set refers to the set comprising the image features without rotation and scaling together with the image features after rotation and scaling. The image to be detected refers to an image obtained by photographing the detected product, without any processing applied. The image features to be detected refer to feature information of the image to be detected, including but not limited to the positions, angles and scaling factors of the pixel points in the image to be detected. The target matching information refers to the matching relation between the image features to be detected and the best-matching image features in the standard image feature set; the matching information comprises the position difference information, angle difference information and scaling difference information between the image features to be detected and those image features in the standard image feature set. The target matching score refers to the matching score of the image to be detected against the best-matching image features in the standard image feature set. The matching score threshold refers to a score threshold for measuring how well the image to be detected matches the standard image.
Specifically, different products to be detected have corresponding detection requirements; not all products require finding defects in all labels, and for some products the detection requirements focus on defects of certain printed labels. Therefore, a plurality of complete label areas corresponding to the complete labels in the standard image are divided, irregular mask areas are set in each complete label area according to the detection requirements, label content such as characters or logos that the detection requirements allow to be ignored is enclosed in the mask areas, the image content in the mask areas is masked, and interference from the mask areas is shielded. Feature extraction is then performed on the areas other than the mask area in each complete label area based on the preset feature extraction algorithm to obtain the standard image features corresponding to the standard image. Since the label printed on a product is produced on a fixed production line, it is rare that the size and angle of the printed label fail to meet the requirements; however, deviations in shooting angle, position and scale may occur when photographing the product to be detected. The standard image features can therefore be transformed based on a plurality of groups of rotation angles and scaling factors to obtain transformed standard image features corresponding to the rotation angles and scaling factors, and the set of the original standard image features and the obtained groups of transformed standard image features is used as the standard image feature set.
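The expansion of the base features into a rotation- and scale-indexed feature set can be sketched as follows, treating each feature as an (x, y) point rotated about a reference center and scaled. This is a hedged illustration: the dict-keyed structure, function name and parameters are assumptions, not the patent's actual representation.

```python
import itertools
import numpy as np

def build_feature_set(points, center, angles_deg, scales):
    """Expand base feature points into a feature set covering several
    rotation angles and scale factors, keyed by (angle, scale). The
    untransformed features are stored under key (0.0, 1.0)."""
    cx, cy = center
    base = np.asarray(points, float)
    feature_set = {(0.0, 1.0): base}
    for ang, s in itertools.product(angles_deg, scales):
        t = np.deg2rad(ang)
        # 2D rotation matrix about the reference center, then uniform scaling
        R = np.array([[np.cos(t), -np.sin(t)],
                      [np.sin(t),  np.cos(t)]])
        feature_set[(ang, s)] = (base - (cx, cy)) @ R.T * s + (cx, cy)
    return feature_set
```

Keeping the angles and scales within bounded ranges, as the text requires, keeps the feature set small enough to match against exhaustively.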
Further, the image features to be detected corresponding to the image to be detected are extracted based on the preset feature extraction algorithm, and are matched in turn with the standard image features corresponding to the different rotation angles and scaling factors in the standard image feature set to obtain matching information and matching scores for the different rotation angles and scaling factors. The largest matching score is selected as the target matching score, the matching information corresponding to the target matching score is taken as the target matching information, and a pose transformation operation is performed on the image to be detected according to the rotation angle difference information, position difference information and scaling difference information in the target matching information to obtain the target detection image. Matching the image features to be detected with the standard image features comprises: calculating the position difference information, rotation angle difference information and scaling difference information between each pixel point in the image features to be detected and in the standard image features, counting the number of pixel points whose positions, rotation angles and scaling factors in the two sets of features satisfy the same-judgment conditions, and calculating the matching score based on that count.
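A toy version of the score computation just described might look like this, counting feature points that satisfy a position tolerance and normalizing by the total count. The function and parameter names are illustrative, and the patent's actual same-judgment conditions also involve rotation angle and scaling factor, which are omitted here for brevity.

```python
def match_score(feats_a, feats_b, pos_tol=1.5):
    """Count feature points of A that have a counterpart in B within
    pos_tol pixels; the score is the matched fraction in [0, 1]."""
    matched = 0
    for ax, ay in feats_a:
        if any((ax - bx) ** 2 + (ay - by) ** 2 <= pos_tol ** 2
               for bx, by in feats_b):
            matched += 1
    return matched / max(len(feats_a), 1)  # guard against empty feature list
```

The best (angle, scale) entry of the feature set is then the one maximizing this score, and its difference information drives the pose transformation.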
In the above embodiment, matching the image to be detected against standard image features corresponding to a plurality of groups of rotation angles and scaling factors avoids inaccurate matching between the image to be detected and the standard image caused by differences in shooting angle or scale. The image to be detected is first matched using the features extracted from the complete label areas, and the pose transformation is performed only after the matching score reaches the matching score threshold, which prepares more accurate data for the subsequent, finer defect detection of the label areas. While ensuring detection accuracy for the label content over a large range, this reduces the required computation to a certain extent and improves detection efficiency.
In some embodiments, S304 further comprises: if the target matching score is smaller than the matching score threshold, the detection result of the target detection image is that a defect exists.
Specifically, if the target matching score is smaller than the matching score threshold, there is a large mismatch between the image to be detected and the standard image, meaning the product corresponding to the image to be detected has a severe label printing defect; the detection result of the target detection image can therefore be directly judged as defective.
In some embodiments, the pose information includes location information of the current tag region, and S202 includes:
S400, fusing the position information and the offset information to obtain fused position information, and converting the binarized image to be detected into a binarized image to be processed based on the fused position information.
S402, setting the gray values of the binarized image to be processed to a preset multiple of their original values to obtain the target binarized image.
The position information refers to information for representing the position of the current tag area, and can be represented by the upper left corner coordinate information of the rectangular frame of the current tag area and the length and width of the current tag area. The fused position information refers to position information obtained by adding coordinate information and offset information in the position information of the current tag region. The binarized image to be processed refers to a binarized image obtained after the position of the binarized image to be detected is converted.
Specifically, in order to ensure the accuracy of the data in the subsequent differencing and to further highlight the defect information in the target detection image, the fused position information, obtained by adding the coordinate information in the position information corresponding to the current tag region to the offset information, together with the length and width in that position information, is used as the position information of the binarized image to be processed formed after converting the binarized image to be detected. So that some label content of the target detection image can be retained in a light color in the differential image obtained by the subsequent differencing, which helps determine the position of a defect from the label content, the gray values of the binarized image to be processed can be set to a preset multiple of their original values to obtain the target binarized image; for example, if the gray value of a pixel point in the binarized image to be processed is A, the gray value in the target binarized image is 80% of A.
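Under the stated example (retained gray value = 80% of A), the shift-then-attenuate step of S400-S402 might be sketched as follows. This is a numpy-only sketch: the function name, the (dy, dx) offset convention and the default ratio are assumptions.

```python
import numpy as np

def to_target_binarized(binarized, offset, preset_ratio=0.8):
    """Shift the to-be-detected binarized image by the fused offset, then
    scale its gray values by a preset multiple (here 80%, matching the
    text's example of retaining 80% of gray value A)."""
    dy, dx = offset
    h, w = binarized.shape
    shifted = np.zeros_like(binarized)
    # destination and source windows for the (dy, dx) translation
    dst_ys = slice(max(dy, 0), min(h + dy, h))
    dst_xs = slice(max(dx, 0), min(w + dx, w))
    src_ys = slice(max(-dy, 0), min(h - dy, h))
    src_xs = slice(max(-dx, 0), min(w - dx, w))
    shifted[dst_ys, dst_xs] = binarized[src_ys, src_xs]
    # attenuate so retained label content appears light in the later diff
    return (shifted.astype(np.float32) * preset_ratio).astype(np.uint8)
```

A bright pixel at (0, 0) shifted by (1, 1) ends up at (1, 1) with 80% of its original gray value.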
In the above embodiment, adjusting the position of the binarized image to be detected by the offset information ensures positional accuracy when the subsequent target binarized image and standard binarized image are differenced. Setting the gray values to a preset multiple allows the differential image obtained in the subsequent step to retain some label content information; the retained label content is lighter than defects such as bright spots, forming a clear contrast that helps locate defects, thereby reducing missed defects and improving the accuracy of defect detection to a certain extent.
In some embodiments, S204 comprises:
S500, respectively fusing the gray value of each pixel point in the target binarized image with the gray value of the pixel point corresponding to the same position in the standard binarized image to obtain a gray fusion value corresponding to each pixel point, and setting the gray value of the pixel point with the gray fusion value smaller than a preset gray threshold in the target binarized image as a fixed gray value to obtain a first difference map.
S502, respectively fusing the gray value of each pixel point in the standard binarized image with the gray value of the pixel point corresponding to the same position in the target binarized image to obtain a gray fusion value corresponding to each pixel point, and setting the gray value of the pixel point with the gray fusion value smaller than a preset gray threshold in the standard binarized image as a fixed gray value to obtain a second difference map.
S504, fusing the first differential graph and the second differential graph to obtain a differential image corresponding to the current label area.
The gray fusion value refers to the gray difference obtained by subtracting the gray values of the pixel points at the same position in the target binarized image and the standard binarized image. The preset gray threshold refers to a threshold for measuring the brightness difference between pixel points at the same position in the two images. The fixed gray value is a preset constant gray value, which may be 0. The first difference map reflects the regions where the target binarized image is brighter than the standard binarized image. The second difference map reflects the regions where the standard binarized image is brighter than the target binarized image.
Specifically, differencing can extract the regions of the target binarized image that are brighter than the standard binarized image. When differencing, the difference between the gray value of each pixel point in the target binarized image and the gray value of the pixel point at the same position in the standard binarized image is taken as the gray fusion value of that pixel point, and the gray values of the pixel points whose gray fusion value is smaller than the preset gray threshold are set to the fixed gray value, giving the first difference map. Likewise, the difference between the gray value of each pixel point in the standard binarized image and that of the pixel point at the same position in the target binarized image is taken as the gray fusion value, and the pixel points whose gray fusion value is smaller than the preset gray threshold are set to the fixed gray value, giving the second difference map. The first difference map and the second difference map are then added, so that the resulting differential image highlights both bright and dark defects of the label content in the current label area, which is more favorable for the subsequent defect detection.
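The two one-sided difference maps and their fusion (S500-S504) can be sketched in a few lines of numpy. The threshold value and the choice of 0 as the fixed gray value are illustrative.

```python
import numpy as np

def bidirectional_diff(target, standard, gray_thresh=30):
    """Fuse two one-sided difference maps: pixels where one image is
    brighter than the other by at least gray_thresh keep their difference;
    all other pixels are set to the fixed gray value 0. Because the two
    maps are nonzero on disjoint pixels, fusing by addition is safe."""
    t = target.astype(np.int16)    # widen to avoid uint8 wraparound
    s = standard.astype(np.int16)
    first = np.where(t - s >= gray_thresh, t - s, 0)   # target brighter
    second = np.where(s - t >= gray_thresh, s - t, 0)  # standard brighter
    return np.clip(first + second, 0, 255).astype(np.uint8)
```

Bright-spot defects show up in the first map, missing print (dark) defects in the second, and the fused image carries both.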
In the above embodiment, by differencing the target binarized image and the standard binarized image in both directions, a differential image that more accurately and prominently reflects the defects in the current tag area is obtained, which improves to a certain extent the accuracy of defect detection for label content such as characters or logos in each current tag area.
In some embodiments, S206 comprises:
S600, creating a blank image with the same size as the standard image, and setting the gray value of each pixel in the blank image to be a fixed gray value.
S602, copying the differential image corresponding to each label area into a blank image based on pose information of the target binarized image corresponding to each label area, and obtaining a target differential image corresponding to the target detection image.
Wherein, the blank image refers to a single-channel 8-bit blank image with the same size as the standard image.
Specifically, after the differential image corresponding to each label area in the target detection image is obtained, the copy area of the differential image corresponding to each label area on the blank image can be determined according to the pose information of the target binarized image corresponding to each label area, the differential image corresponding to each label area is copied into the corresponding copy area, the target differential image corresponding to the target detection image is formed, and the target differential image can more obviously reflect the defects existing in the label content in the target detection image.
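S600-S602 amount to pasting each region's differential image onto a zero-initialized canvas at its pose. A minimal sketch follows; the ((x, y), image) pair structure for regions is an assumption.

```python
import numpy as np

def assemble_target_diff(shape, regions):
    """Create a single-channel 8-bit blank image the size of the standard
    image (fixed gray value 0), then copy each label region's differential
    image to its pose. `regions` is a list of ((x, y), diff_image) pairs,
    where (x, y) is the region's upper-left corner on the canvas."""
    canvas = np.zeros(shape, np.uint8)
    for (x, y), diff in regions:
        h, w = diff.shape
        canvas[y:y + h, x:x + w] = diff  # copy region to its pose
    return canvas
```

Because non-label pixels keep the fixed gray value, defects in the pasted regions stand out against a uniform background for the label defect detection model.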
In the above embodiment, by copying the differential image corresponding to each label area into a blank image of the same size as the standard image, a target differential image is obtained that more obviously and accurately reflects the defects in the label content of the target detection image, providing more accurate input for the subsequent label defect detection model; this improves the accuracy of defect detection to a certain extent and reduces the rate of missed defects.
In some embodiments, the defect detection method further comprises:
S700, acquiring a trained defect detection model, wherein the defect detection model is used for detecting defects of blank areas in the image.
S702, inputting the target detection image into a defect detection model to obtain blank area defect information of the target detection image, wherein the blank area defect information comprises a defect external contour area, defect external rectangular frame information and defect confidence corresponding to each defect.
S704, if the defect circumscribing outline area of any defect in the blank area defect information is larger than the circumscribing outline area threshold, or the length of the defect circumscribing rectangular frame information of any defect is larger than the length threshold or the width of the defect circumscribing rectangular frame information is larger than the width threshold, or the defect confidence of any defect is larger than the confidence threshold, the detection result of the target detection image is that the defect exists.
The defect detection model is used for detecting defects in the blank area of an image; the blank area may contain defects such as bright spots, scratches and dark spots. The model may be, but is not limited to, one obtained by training a YOLOv5 convolutional neural network model with sample data in which defects in the blank areas of sample images are labeled. The blank area defect information refers to defect information of the blank area in the target detection image, including but not limited to the defect type, defect circumscribing outline area, defect circumscribed rectangular frame information and defect confidence corresponding to each defect. The length threshold refers to a threshold for judging that the length of the defect circumscribed rectangular frame does not meet the standard product requirements. The width threshold refers to a threshold for judging that the width of the defect circumscribed rectangular frame does not meet the standard product requirements. The defect circumscribing outline area refers to the specific area of the defect. The defect circumscribed rectangular frame information refers to the length and width of the defect circumscribed rectangular frame. The defect confidence refers to the degree to which the defect deviates from the standard requirements of the detected product.
Specifically, if the blank area in the target detection image were detected together with the label areas, it would be difficult to distinguish defects in the blank area from the labels, and defects contained within labels, such as spots on or dots separated from printed character strokes, would be hard to identify. Detecting the defects of the blank area and the defects of the label areas separately therefore reduces the false detection rate and the missed detection rate of defects to a certain extent and makes the detection more targeted. When detecting the blank area, the target detection image with the label areas divided is directly input into the trained defect detection model to obtain the blank area defect information corresponding to the blank area in the target detection image, and it is then judged whether the blank area defect information fails to meet the standard product requirements: if the defect circumscribing outline area of any defect in the blank area defect information is larger than the circumscribing outline area threshold, or the length in the defect circumscribed rectangular frame information of any defect is larger than the length threshold or the width is larger than the width threshold, or the defect confidence of any defect is larger than the confidence threshold, the detection result of the target detection image is that a defect exists. Only when no such defect exists in the blank area and no defect is detected in the label areas is the product corresponding to the target detection image judged non-defective, i.e., a good product.
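The four-way threshold rule of S704 reduces to a short predicate. The dict keys and the threshold values here are assumptions chosen for illustration, not values from the patent.

```python
def is_defective(defects, area_th=50.0, len_th=12.0, wid_th=12.0, conf_th=0.5):
    """A target image is judged defective if ANY detected defect exceeds
    the contour-area, bounding-box length/width, or confidence threshold.
    Each defect is assumed to be a dict with keys 'area', 'length',
    'width' and 'conf'."""
    return any(
        d["area"] > area_th
        or d["length"] > len_th
        or d["width"] > wid_th
        or d["conf"] > conf_th
        for d in defects
    )
```

The same predicate applies to both the blank-area defect information and the label defect detection information, since the text uses identical threshold conditions for both.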
In the above embodiment, the trained defect detection model is used to detect the blank region in the target detection image, so that the defect of the blank region can be identified and detected in a targeted manner, and the false detection rate and the omission rate of the defect of the blank region can be reduced to a certain extent.
In some embodiments, the defect detection information includes a defect circumscribing outline area, defect circumscribing rectangular frame information, and defect confidence corresponding to each defect, and the defect detection method further includes:
if the defect circumscribing outline area of any defect in the defect detection information is larger than the circumscribing outline area threshold, or the length in the defect circumscribed rectangular frame information of any defect is larger than the length threshold or the width is larger than the width threshold, or the defect confidence of any defect is larger than the confidence threshold, the detection result of the target detection image is that a defect exists.
Specifically, the defect detection information obtained through the trained tag defect detection model includes the defect circumscribing outline area, defect circumscribed rectangular frame information, defect confidence and defect type of each tag content in the target detection image. If the defect circumscribing outline area of any defect in the defect detection information is larger than the circumscribing outline area threshold, or the length in the defect circumscribed rectangular frame information of any defect is larger than the length threshold or the width is larger than the width threshold, or the defect confidence of any defect is larger than the confidence threshold, this indicates that the tag content in the tag area has defects that do not meet the standard product requirements, and the detection result of the target detection image is that a defect exists. Only when neither the tag areas nor the blank area has defects can the detection result of the target detection image be determined as defect-free.
In one embodiment, defect detection of a printed product is described as an example. An image of a non-defective standard printed product of the same type as the printed product to be detected is taken as the standard image; the label of the printed product to be detected is flattened through a transparent glass pressing plate, and the flattened product is photographed with a camera to obtain the image to be detected. The complete label areas corresponding to all complete labels in the standard image are divided, mask areas of arbitrary shape are defined in the complete label areas according to the detection requirements corresponding to the printed product to be detected, the label content in the mask areas is shielded, and the image features of the complete label areas outside the mask areas are extracted as the standard image features. In order to obtain standard image features corresponding to more shooting conditions and thus provide more reference feature information for matching the subsequent image to be detected, the extracted standard image features are transformed based on a plurality of groups of rotation angles and scaling factors, and the set of the transformed standard image features and the original untransformed standard image features is taken as the standard image feature set. The image features to be detected are then extracted from the image to be detected and matched with the image features corresponding to the different rotation angles and scaling factors in the standard image feature set to obtain matching information and matching scores for the different rotation angles and scaling factors; the largest matching score is taken as the target matching score and the matching information corresponding to it as the target matching information. If the target matching score is smaller than the matching score threshold, the printed product to be detected is directly judged to have a printing defect.
Further, if the target matching score is greater than or equal to the matching score threshold, pose transformation is performed on the image to be detected based on the target matching information to obtain the target detection image. Corresponding label areas are set in the target detection image; the label content of the label areas does not overlap, the union of the label content of all label areas is consistent with the image content of the target detection image, and all image content in the target detection image is divided into label areas. The binarized image to be detected corresponding to the image of each label area is then determined in turn, the standard binarized image corresponding to the image in the standard image whose content matches each label area is obtained, and the binarized image to be detected is matched with the standard binarized image by invoking the template matching algorithm of the OpenCV vision library to obtain the offset information. To ensure the accuracy of the image data in the subsequent differencing and avoid defect false detection or missed detection caused by insufficiently accurate data, offset transformation is performed on the binarized image to be detected based on the offset information and the pose information of the current label area, and the gray values in the transformed image are set to a preset multiple of the original gray values, obtaining the target binarized image whose retained label content will appear lighter in the subsequent differential image.
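The offset-matching step above invokes OpenCV's template matching; the same idea is shown here as a brute-force numpy equivalent over a small search window, with the (dy, dx) convention, overlap score and window size as assumptions.

```python
import numpy as np

def find_offset(standard_bin, to_detect_bin, search=5):
    """Return the (dy, dx) shift that best aligns the to-be-detected
    binarized image to the standard one, scored by the number of
    overlapping foreground pixels. np.roll wraps at the borders, which is
    acceptable for this illustration with a small search window."""
    best, best_score = (0, 0), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(to_detect_bin, dy, 0), dx, 1)
            score = float(np.sum((shifted > 0) & (standard_bin > 0)))
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

In practice `cv2.matchTemplate` with `cv2.minMaxLoc` would replace this loop, but the returned offset plays the same role in the subsequent offset transformation.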
Then the target binarized image and the standard binarized image are differenced to obtain the first difference map, the standard binarized image and the target binarized image are differenced to obtain the second difference map, and the first and second difference maps are fused to obtain a differential image that more obviously reflects the defects in the label content of the current label area. This is repeated until each label area in the target detection image has a corresponding differential image, and the differential images are copied into a blank image of the same size as the standard image based on the pose information of the target binarized image corresponding to each label area to obtain the target differential image of the target detection image. The target differential image is input into the trained label defect detection model for detection to obtain the defect detection information corresponding to each label content in the target detection image. If the defect circumscribing outline area of any defect in the defect detection information is larger than the circumscribing outline area threshold, or the length in the defect circumscribed rectangular frame information of any defect is larger than the length threshold or the width is larger than the width threshold, or the defect confidence of any defect is larger than the confidence threshold, it is judged that the label content in the target detection image has a printing defect, and the detection result of the target detection image is that a defect exists.
Further, since detecting the blank area together with the label areas in the target detection image leads to a high defect false detection rate, the target detection image can be input into a defect detection model for detecting whether the blank area of the image has defects, obtaining the blank area defect information corresponding to the blank area. If the defect circumscribing outline area of any defect in the blank area defect information is larger than the circumscribing outline area threshold, or the length in the defect circumscribed rectangular frame information of any defect is larger than the length threshold or the width is larger than the width threshold, or the defect confidence of any defect is larger than the confidence threshold, the detection result of the target detection image is that a defect exists, indicating that the blank area contains defects that do not meet the standard printing requirements; this can directly serve as the defect detection result corresponding to the target detection image. Only if no defects are detected in either the label areas or the blank area can the printed product to be detected be judged defect-free, i.e., a good product. By detecting the blank area separately from the label areas, dividing the label areas more finely, and performing targeted differencing on each label area before identifying defects with the label defect detection model, more accurate and more obvious defect information is obtained, better reducing the false detection rate of defects in both the blank area and the label areas and better reducing the missed detection rate for the various labels of a printed label product.
It should be understood that, although the steps in the flowcharts related to the embodiments described above are sequentially shown as indicated by arrows, these steps are not necessarily sequentially performed in the order indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of steps or a plurality of stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily performed sequentially, but may be performed alternately or alternately with at least some of the other steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a defect detection device. The implementation of the solution provided by the device is similar to that described in the above method, so the specific limitation of one or more embodiments of the defect detection device provided below may be referred to above for limitation of the defect detection method, and will not be repeated here.
As shown in fig. 3, an embodiment of the present application provides a defect detection apparatus 300, including:
the acquisition module 302 is configured to acquire a target detection image, wherein the target detection image comprises at least two tag areas, and the tag areas are obtained based on image content division of the target detection image;
the matching module 304 is configured to determine a current tag area from each tag area in the target detection image, determine a to-be-detected binarized image corresponding to the image of the current tag area, obtain a standard binarized image, match the to-be-detected binarized image with the standard binarized image, obtain offset information, and transform the to-be-detected binarized image based on the offset information and pose information of the current tag area to obtain the target binarized image; the standard binarization image is a binarization image corresponding to an area image matched with the content of the current label area in the standard image corresponding to the target detection image;
the difference module 306 is configured to difference the target binarized image and the standard binarized image to obtain a differential image corresponding to the current tag area, and return to the step of determining the current tag area from each tag area in the target detection image until each tag area in the target detection image has a corresponding differential image;
the detection module 308 is configured to combine the differential images corresponding to the tag areas to obtain a target differential image corresponding to the target detection image, input the target differential image into the trained tag defect detection model to obtain defect detection information of the target detection image, and determine a defect detection result of the target detection image based on the defect detection information.
In some embodiments, in acquiring the target detection image, the acquiring module 302 is specifically configured to:
dividing a plurality of complete tag areas in a standard image, setting mask areas in each complete tag area, and carrying out feature extraction on areas except the mask areas in the complete tag areas based on a preset feature extraction algorithm to obtain standard image features corresponding to the standard image;
transforming the standard image features based on a plurality of groups of rotation angles and scaling factors to obtain a plurality of groups of transformed standard image features, and taking the standard image features and the plurality of groups of transformed standard image features as a standard image feature set;
and acquiring an image to be detected, extracting image features to be detected corresponding to the image to be detected based on the preset feature extraction algorithm, matching the image features to be detected with the image features in the standard image feature set to obtain target matching information and a target matching score, and performing a pose transformation operation on the image to be detected based on the target matching information when the target matching score is greater than or equal to a matching score threshold, to obtain the target detection image.
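The feature-set construction and matching flow above can be sketched as follows. This is a minimal illustration under simplifying assumptions, not the patented algorithm itself: the feature representation (raw pixel arrays), the 90-degree rotation steps, nearest-neighbour scaling, and the equal-pixel matching score are all stand-ins for whatever feature extraction and matching the preset algorithm actually uses.

```python
import numpy as np

def build_feature_set(template, angles, scales):
    """Build a standard image feature set: the raw template plus one
    entry per (rotation angle, scaling factor) pair. Rotations are
    restricted to 90-degree steps to keep the sketch dependency-free."""
    feature_set = [(0, 1.0, template)]
    for angle in angles:                      # multiples of 90 degrees
        for scale in scales:
            rotated = np.rot90(template, k=angle // 90)
            h, w = rotated.shape
            # nearest-neighbour scaling via index sampling
            ys = (np.arange(int(h * scale)) / scale).astype(int)
            xs = (np.arange(int(w * scale)) / scale).astype(int)
            feature_set.append((angle, scale, rotated[np.ix_(ys, xs)]))
    return feature_set

def match_best(image, feature_set):
    """Return (angle, scale, score) of the best-matching entry; the score
    is the fraction of equal pixels against each same-sized feature."""
    best = (0, 1.0, -1.0)
    for angle, scale, feat in feature_set:
        if feat.shape != image.shape:
            continue
        score = float((feat == image).mean())
        if score > best[2]:
            best = (angle, scale, score)
    return best
```

If the best score clears the matching score threshold, the recovered angle and scale give the pose transformation to apply to the image to be detected.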
In some embodiments, the pose information includes location information of the current tag region; in the aspect of transforming the binarized image to be detected based on the offset information and the pose information of the current tag region to obtain a target binarized image, the matching module 304 is specifically configured to:
fusing the position information and the offset information to obtain fused position information, and converting the to-be-detected binarized image into a to-be-processed binarized image based on the fused position information;
and setting the gray values of the to-be-processed binarized image to a preset multiple of their original values to obtain the target binarized image.
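A minimal sketch of this transform step follows, under stated assumptions: the pose is taken as a pure translation, "fusing" position and offset is taken as addition, and the preset multiple is 255 (mapping a {0, 1} binarized image onto {0, 255} gray values).

```python
import numpy as np

def transform_to_target(binary, position, offset, multiple=255):
    """Fuse the tag area's position with the matching offset (addition is
    assumed), translate the to-be-detected binarized image by the fused
    (dy, dx), and scale its {0, 1} gray values by a preset multiple so it
    is directly comparable with the standard binarized image."""
    dy = position[0] + offset[0]
    dx = position[1] + offset[1]
    h, w = binary.shape
    shifted = np.zeros_like(binary)
    ys, xs = np.nonzero(binary)
    ys2, xs2 = ys + dy, xs + dx
    # drop foreground pixels that the translation pushes out of frame
    keep = (ys2 >= 0) & (ys2 < h) & (xs2 >= 0) & (xs2 < w)
    shifted[ys2[keep], xs2[keep]] = 1
    return shifted * multiple
```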
In some embodiments, in differencing the target binarized image and the standard binarized image to obtain a differential image corresponding to the current tag area, the difference module 306 is specifically configured to:
fusing the gray value of each pixel point in the target binarized image with the gray value of the pixel point at the same position in the standard binarized image to obtain a gray fusion value corresponding to each pixel point, and setting the gray value of each pixel point in the target binarized image whose gray fusion value is smaller than a preset gray threshold to a fixed gray value to obtain a first differential image;
fusing the gray value of each pixel point in the standard binarized image with the gray value of the pixel point at the same position in the target binarized image to obtain a gray fusion value corresponding to each pixel point, and setting the gray value of each pixel point in the standard binarized image whose gray fusion value is smaller than the preset gray threshold to the fixed gray value to obtain a second differential image;
and fusing the first differential image and the second differential image to obtain the differential image corresponding to the current tag area.
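The bidirectional differencing above can be sketched as follows. The concrete operators are assumptions, since the text does not fix them: per-pixel fusion is taken as the absolute gray difference (so agreeing pixels fall below the threshold and are suppressed), the fixed gray value is 0, and the final fusion of the two differential images is a pixel-wise maximum.

```python
import numpy as np

def directional_diff(a, b, gray_threshold=128, fixed=0):
    """One direction of the difference: fuse each pixel of `a` with the
    same-position pixel of `b` (absolute difference is assumed as the
    fusion), then set pixels whose fusion value is below the preset gray
    threshold to the fixed gray value."""
    fused = np.abs(a.astype(int) - b.astype(int))
    out = a.copy()
    out[fused < gray_threshold] = fixed
    return out

def tag_area_diff(target, standard, gray_threshold=128, fixed=0):
    """Bidirectional difference for one tag area: compute both directional
    differential images and fuse them (pixel-wise maximum is assumed)."""
    first = directional_diff(target, standard, gray_threshold, fixed)
    second = directional_diff(standard, target, gray_threshold, fixed)
    return np.maximum(first, second)
```

Under these assumptions, pixels where target and standard agree are zeroed out in both directions, and only the mismatching pixels survive into the fused differential image.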
In some embodiments, in combining the differential images corresponding to the tag areas to obtain the target differential image corresponding to the target detection image, the detection module 308 is specifically configured to:
creating a blank image with the same size as the standard image, and setting the gray value of each pixel in the blank image as a fixed gray value;
and copying the differential image corresponding to each tag area into the blank image based on the pose information of the target binarized image corresponding to each tag area to obtain the target differential image corresponding to the target detection image.
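A minimal sketch of this combining step follows; the pose information of each tag area is interpreted here as the top-left corner of the region in the standard image, which is an assumption for illustration.

```python
import numpy as np

def combine_diffs(diffs_with_pose, standard_shape, fixed=0):
    """Create a blank image the same size as the standard image, fill it
    with the fixed gray value, then copy each tag area's differential
    image to the location given by its pose (top-left corner assumed)."""
    canvas = np.full(standard_shape, fixed, dtype=np.uint8)
    for (top, left), diff in diffs_with_pose:
        h, w = diff.shape
        canvas[top:top + h, left:left + w] = diff
    return canvas
```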
In some embodiments, the defect detection apparatus 300 further comprises a further defect detection module 310, the further defect detection module 310 being specifically configured to:
acquiring a trained defect detection model, wherein the defect detection model is used for detecting defects of blank areas in an image;
inputting the target detection image into the defect detection model to obtain blank area defect information of the target detection image, wherein the blank area defect information comprises a defect circumscribed contour area, defect circumscribed rectangular frame information, and a defect confidence corresponding to each defect;
if the defect circumscribed contour area of any defect in the blank area defect information is larger than a contour area threshold, or the length in the defect circumscribed rectangular frame information of any defect is larger than a length threshold, or the width in the defect circumscribed rectangular frame information of any defect is larger than a width threshold, or the defect confidence of any defect is larger than a confidence threshold, the detection result of the target detection image is that a defect exists.
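The decision rule above reduces to a per-defect threshold check, sketched below. The dictionary keys are illustrative names, not identifiers from the original disclosure.

```python
def has_defect(defects, area_thr, length_thr, width_thr, conf_thr):
    """Judge the image defective if any defect's circumscribed contour
    area, bounding-box length, bounding-box width, or confidence exceeds
    its corresponding threshold."""
    return any(
        d["area"] > area_thr
        or d["length"] > length_thr
        or d["width"] > width_thr
        or d["confidence"] > conf_thr
        for d in defects
    )
```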
In some embodiments, the defect detection apparatus 300 further includes a determination module 312, where the determination module 312 is specifically configured to:
if the defect circumscribed contour area of any defect in the defect detection information is larger than a contour area threshold, or the length in the defect circumscribed rectangular frame information of any defect is larger than a length threshold, or the width in the defect circumscribed rectangular frame information of any defect is larger than a width threshold, or the defect confidence of any defect is larger than a confidence threshold, the detection result of the target detection image is that a defect exists;
if the target matching score is smaller than the matching score threshold, the detection result of the target detection image is that a defect exists.
Each of the modules in the above defect detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory of the computer device, so that the processor can invoke and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 4. The computer device includes a processor, a memory, an input/output (I/O) interface, and a communication interface. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface is connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store data related to the defect detection process. The input/output interface of the computer device is used to exchange information between the processor and an external device. The communication interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a defect detection method.
In one embodiment, a computer device is provided, which may be a terminal, the internal structure of which may be as shown in fig. 5. The computer device includes a processor, a memory, an input/output interface, a communication interface, a display unit, and an input device. The processor, the memory, and the input/output interface are connected through a system bus, and the communication interface, the display unit, and the input device are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The input/output interface of the computer device is used to exchange information between the processor and an external device. The communication interface of the computer device is used for wired or wireless communication with an external terminal; the wireless mode may be implemented through Wi-Fi, a mobile cellular network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements a defect detection method. The display unit of the computer device is used to form a visual picture and may be a display screen, a projection device, or a virtual reality imaging device; the display screen may be a liquid crystal display screen or an electronic ink display screen. The input device of the computer device may be a touch layer covering the display screen, a key, a trackball, or a touchpad provided on the housing of the computer device, or an external keyboard, touchpad, mouse, or the like.
Those skilled in the art will appreciate that the structures shown in figs. 4 and 5 are merely block diagrams of partial structures related to the solution of the present application and do not limit the computer device to which the solution of the present application is applied; a particular computer device may include more or fewer components than shown in the figures, may combine certain components, or may have a different arrangement of components.
In some embodiments, a computer device is provided, comprising a memory storing a computer program and a processor implementing the steps of the method embodiments described above when the computer program is executed.
In some embodiments, a computer-readable storage medium is provided, the internal structure of which may be as shown in fig. 6. The computer-readable storage medium stores a computer program that, when executed by a processor, implements the steps of the method embodiments described above.
In some embodiments, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
It should be noted that the user information (including, but not limited to, user equipment information, user personal information, and the like) and the data (including, but not limited to, data for analysis, stored data, presented data, and the like) referred to in the present application are information and data authorized by the user or fully authorized by all parties, and the collection, use, and processing of the related data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
Those skilled in the art will appreciate that all or part of the methods described above may be implemented by instructing relevant hardware through a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may include the flows of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the embodiments provided herein may include at least one of non-volatile and volatile memory. The non-volatile memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric memory (Ferroelectric Random Access Memory, FRAM), phase change memory (Phase Change Memory, PCM), graphene memory, and the like. Volatile memory may include random access memory (Random Access Memory, RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM). The databases referred to in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases and the like. The processors referred to in the embodiments provided herein may be, without limitation, general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum-computing-based data processing logic units, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The foregoing embodiments represent only a few implementations of the present application, and their description is specific and detailed, but they are not to be construed as limiting the scope of the present application. It should be noted that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of defect detection, the method comprising:
acquiring a target detection image, wherein the target detection image comprises at least two tag areas, and the tag areas are obtained based on image content division of the target detection image;
determining a current tag area from each tag area in the target detection image, determining a to-be-detected binarized image corresponding to the image of the current tag area, acquiring a standard binarized image, matching the to-be-detected binarized image with the standard binarized image to obtain offset information, and transforming the to-be-detected binarized image based on the offset information and pose information of the current tag area to obtain a target binarized image; wherein the standard binarized image is a binarized image corresponding to an area image that matches the content of the current tag area in a standard image corresponding to the target detection image;
differencing the target binarized image and the standard binarized image to obtain a differential image corresponding to the current tag area, and returning to the step of determining the current tag area from each tag area in the target detection image until each tag area in the target detection image has a corresponding differential image;
combining the differential images corresponding to the tag areas to obtain a target differential image corresponding to the target detection image, inputting the target differential image into a trained tag defect detection model to obtain defect detection information of the target detection image, and determining a defect detection result of the target detection image based on the defect detection information.
2. The method of claim 1, wherein the acquiring the target detection image comprises:
dividing a plurality of complete tag areas in the standard image, setting mask areas in each complete tag area, and carrying out feature extraction on areas except the mask areas in the complete tag areas based on a preset feature extraction algorithm to obtain standard image features corresponding to the standard image;
transforming the standard image features based on a plurality of groups of rotation angles and scaling factors to obtain a plurality of groups of transformed standard image features, and taking the standard image features and the plurality of groups of transformed standard image features as a standard image feature set;
and acquiring an image to be detected, extracting image features to be detected corresponding to the image to be detected based on the preset feature extraction algorithm, matching the image features to be detected with the image features in the standard image feature set to obtain target matching information and a target matching score, and performing a pose transformation operation on the image to be detected based on the target matching information when the target matching score is greater than or equal to a matching score threshold, to obtain the target detection image.
3. The method of claim 1, wherein the pose information comprises position information of the current tag region; the transforming the to-be-detected binarized image based on the offset information and the pose information of the current tag region to obtain a target binarized image includes:
fusing the position information and the offset information to obtain fused position information, and converting the to-be-detected binarized image into a to-be-processed binarized image based on the fused position information;
and setting the gray values of the to-be-processed binarized image to a preset multiple of their original values to obtain the target binarized image.
4. The method of claim 1, wherein differencing the target binarized image and the standard binarized image to obtain a differential image corresponding to the current tag area comprises:
fusing the gray value of each pixel point in the target binarized image with the gray value of the pixel point at the same position in the standard binarized image to obtain a gray fusion value corresponding to each pixel point, and setting the gray value of each pixel point in the target binarized image whose gray fusion value is smaller than a preset gray threshold to a fixed gray value to obtain a first differential image;
fusing the gray value of each pixel point in the standard binarized image with the gray value of the pixel point at the same position in the target binarized image to obtain a gray fusion value corresponding to each pixel point, and setting the gray value of each pixel point in the standard binarized image whose gray fusion value is smaller than the preset gray threshold to the fixed gray value to obtain a second differential image;
and fusing the first differential image and the second differential image to obtain the differential image corresponding to the current tag area.
5. The method of claim 1, wherein combining the differential images corresponding to the tag areas to obtain the target differential image corresponding to the target detection image comprises:
creating a blank image with the same size as the standard image, and setting the gray value of each pixel in the blank image as a fixed gray value;
and copying the differential image corresponding to each tag area into the blank image based on the pose information of the target binarized image corresponding to each tag area to obtain the target differential image corresponding to the target detection image.
6. The method according to claim 1, wherein the method further comprises:
acquiring a trained defect detection model, wherein the defect detection model is used for detecting defects of blank areas in an image;
inputting the target detection image into the defect detection model to obtain blank area defect information of the target detection image, wherein the blank area defect information comprises a defect circumscribed contour area, defect circumscribed rectangular frame information, and a defect confidence corresponding to each defect;
if the defect circumscribed contour area of any defect in the blank area defect information is larger than a contour area threshold, or the length in the defect circumscribed rectangular frame information of any defect is larger than a length threshold, or the width in the defect circumscribed rectangular frame information of any defect is larger than a width threshold, or the defect confidence of any defect is larger than a confidence threshold, the detection result of the target detection image is that a defect exists.
7. The method according to claim 2, wherein the defect detection information comprises a defect circumscribed contour area, defect circumscribed rectangular frame information, and a defect confidence corresponding to each defect; the method further comprises:
if the defect circumscribed contour area of any defect in the defect detection information is larger than a contour area threshold, or the length in the defect circumscribed rectangular frame information of any defect is larger than a length threshold, or the width in the defect circumscribed rectangular frame information of any defect is larger than a width threshold, or the defect confidence of any defect is larger than a confidence threshold, the detection result of the target detection image is that a defect exists;
the method further comprises the steps of:
and if the target matching score is smaller than the matching score threshold, the detection result of the target detection image is that a defect exists.
8. A defect detection apparatus, the apparatus comprising:
the acquisition module is used for acquiring a target detection image, wherein the target detection image comprises at least two tag areas, and the tag areas are obtained based on image content division of the target detection image;
the matching module is configured to determine a current tag area from each tag area in the target detection image, determine a to-be-detected binarized image corresponding to the image of the current tag area, acquire a standard binarized image, match the to-be-detected binarized image with the standard binarized image to obtain offset information, and transform the to-be-detected binarized image based on the offset information and the pose information of the current tag area to obtain a target binarized image; wherein the standard binarized image is a binarized image corresponding to an area image that matches the content of the current tag area in a standard image corresponding to the target detection image;
the difference module is configured to difference the target binarized image and the standard binarized image to obtain a differential image corresponding to the current tag area, and return to the step of determining the current tag area from each tag area in the target detection image until each tag area in the target detection image has a corresponding differential image;
the detection module is used for combining the differential images corresponding to the tag areas to obtain a target differential image corresponding to the target detection image, inputting the target differential image into a trained tag defect detection model to obtain defect detection information of the target detection image, and determining a defect detection result of the target detection image based on the defect detection information.
9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202311520045.7A 2023-11-14 2023-11-14 Defect detection method, defect detection device, computer equipment and computer readable storage medium Pending CN117726579A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311520045.7A CN117726579A (en) 2023-11-14 2023-11-14 Defect detection method, defect detection device, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311520045.7A CN117726579A (en) 2023-11-14 2023-11-14 Defect detection method, defect detection device, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN117726579A true CN117726579A (en) 2024-03-19

Family

ID=90200679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311520045.7A Pending CN117726579A (en) 2023-11-14 2023-11-14 Defect detection method, defect detection device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN117726579A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118506030A (en) * 2024-07-12 2024-08-16 中科方寸知微(南京)科技有限公司 Image defect target matching method and system based on geometric distance measurement
CN118506030B (en) * 2024-07-12 2024-10-11 中科方寸知微(南京)科技有限公司 Image defect target matching method and system based on geometric distance measurement
CN119417836A (en) * 2025-01-08 2025-02-11 贵州大学 A mobile phone glass cover defect detection method, system, device and storage medium
CN120013928A (en) * 2025-04-15 2025-05-16 中国电建集团华东勘测设计研究院有限公司 Pipeline image defect identification method and device based on convolutional neural network

Similar Documents

Publication Publication Date Title
CN110060237B (en) Fault detection method, device, equipment and system
CN117726579A (en) Defect detection method, defect detection device, computer equipment and computer readable storage medium
CN103745104B (en) A kind of method of marking examination papers based on augmented reality
CN111027554B (en) Commodity price tag text accurate detection positioning system and positioning method
CN114549390A (en) Circuit board detection method, electronic device and storage medium
CN110766027A (en) Image area positioning method and training method of target area positioning model
CN117132540A (en) A PCB circuit board defect post-processing method based on segmentation model
CN113609984A (en) A kind of pointer meter reading identification method, device and electronic equipment
CN113840135A (en) Color cast detection method, device, equipment and storage medium
CN118071719A (en) Defect detection method, defect detection device, computer equipment and computer readable storage medium
CN116051575A (en) Image segmentation method, apparatus, computer device, and storage medium program product
CN118691801B (en) Target area identification method, solder paste defect detection method, device, equipment and medium
CN118968480A (en) Ovulation test paper detection method and device, electronic device and storage medium
CN105894068B (en) FPAR card design and rapid identification and positioning method
CN117557786A (en) Material quality detection method, device, computer equipment and storage medium
CN112084364A (en) Object analysis method, local image search method, device, and storage medium
CN117392698A (en) Method, device, equipment and storage medium for identifying hand-drawn circuit diagram
CN117392079A (en) Appearance defect detection method and device, visual detection system and electronic equipment
CN113888522B (en) Target detection method and system based on digital image and electronic equipment
CN112465904A (en) Image target positioning method and device, computer equipment and storage medium
CN118447001A (en) Defect detection method, defect detection device, computer equipment and computer readable storage medium
CN114820547B (en) Lane line detection method, device, computer equipment and storage medium
CN119007224B (en) Chip identifier identification method, device, computer equipment, medium and product
CN119992159A (en) Product defect detection method, device, computer equipment and storage medium
CN119887707A (en) Defect detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination