CN119205674A - Parts offset angle detection method - Google Patents
- Publication number
- CN119205674A (application CN202411277457.7A)
- Authority
- CN
- China
- Prior art keywords
- contour
- component
- feature map
- detection
- offset angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/26—Measuring arrangements characterised by the use of optical techniques for measuring angles or tapers; for testing the alignment of axes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/762—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The application discloses a part offset angle detection method. A camera captures a part image from directly above the part, and edge detection is performed on the part image to obtain a part contour image; meanwhile, a part standard contour image is extracted from a database. Deep-learning-based image processing is then used to perform contour feature extraction and contour saliency processing on the part contour image and the part standard contour image, and the part offset angle is intelligently estimated from the feature differences between them. In this way, the precision and efficiency of part offset angle detection can be effectively improved, production cost can be reduced, and product quality can be further improved.
Description
Technical Field
The application relates to the field of intelligent detection, and in particular relates to a part offset angle detection method.
Background
In the modern manufacturing industry, especially in high-end fields such as automobile and aviation manufacturing, accurate mounting and positioning of parts is one of the key factors in ensuring product quality and production efficiency. In actual production, however, parts may shift during assembly, welding or mounting for a variety of reasons, such as equipment wear, environmental factors and improper operation, increasing the product reject ratio and reducing production efficiency.
To ensure accurate mounting of parts, conventional part offset angle detection methods typically rely on manual inspection or mechanical measurement equipment. Manual inspection, however, is time-consuming and labor-intensive, is strongly affected by human factors, and struggles to guarantee accurate and consistent results. Existing detection equipment, such as X-ray detectors, coordinate measuring machines and optical projectors, is expensive, complex to operate and costly to maintain, and is therefore difficult to deploy widely on large-scale production lines.
Therefore, an optimized part offset angle detection method is desired.
Disclosure of Invention
The present application has been made to solve the above-mentioned technical problems. The embodiment of the application provides a part offset angle detection method in which a camera captures a part image from directly above the part, edge detection is performed on the part image to obtain a part contour image, and a part standard contour image is extracted from a database. Deep-learning-based image processing is then used to perform contour feature extraction and contour saliency processing on the part contour image and the part standard contour image, and the part offset angle is intelligently estimated from the feature differences between them. In this way, the precision and efficiency of part offset angle detection can be effectively improved, production cost can be reduced, and product quality can be further improved.
According to an aspect of the present application, there is provided a part offset angle detection method including:
capturing a part image from directly above the part;
performing edge detection on the part image to obtain a part contour image;
extracting a part standard contour image from a database;
performing contour feature extraction and feature gradient enhancement on the part contour image and the part standard contour image to obtain a part contour saliency detection feature map and a part contour saliency standard reference feature map; and
generating a part offset angle decoding estimate based on a feature comparison of the part contour saliency detection feature map and the part contour saliency standard reference feature map.
Compared with the prior art, in the part offset angle detection method provided by the application, a camera captures a part image from directly above the part, edge detection is performed on the part image to obtain a part contour image, and a part standard contour image is extracted from a database; deep-learning-based image processing then performs contour feature extraction and contour saliency processing on the two images, and the part offset angle is intelligently estimated from the feature differences between them. In this way, the precision and efficiency of part offset angle detection can be effectively improved, production cost can be reduced, and product quality can be further improved.
Drawings
The above and other objects, features and advantages of the present application will become more apparent from the following detailed description of embodiments of the present application with reference to the accompanying drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the application and are incorporated in and constitute a part of this specification; they illustrate the application together with its embodiments and do not limit the application. In the drawings, like reference numerals generally refer to like parts or steps.
FIG. 1 is a flow chart of a method of part offset angle detection according to an embodiment of the present application;
FIG. 2 is a schematic data flow diagram of a method for detecting a part offset angle according to an embodiment of the present application;
FIG. 3 is a flowchart of sub-step S4 of the part offset angle detection method according to an embodiment of the present application.
Detailed Description
Hereinafter, exemplary embodiments according to the present application will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some embodiments of the present application and not all embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein.
As used in the specification and claims, the terms "a," "an," and/or "the" are not limited to the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may include other steps or elements.
Although the present application makes various references to certain modules in a system according to embodiments of the present application, any number of different modules may be used and run on a user terminal and/or server. The modules are merely illustrative, and different aspects of the systems and methods may use different modules.
A flowchart is used in the present application to describe the operations performed by a system according to embodiments of the present application. It should be understood that the preceding or following operations are not necessarily performed in order precisely. Rather, the various steps may be processed in reverse order or simultaneously, as desired. Also, other operations may be added to or removed from these processes.
To ensure accurate mounting of parts, conventional part offset angle detection methods typically rely on manual inspection or mechanical measurement equipment. Manual inspection, however, is time-consuming and labor-intensive, is strongly affected by human factors, and struggles to guarantee accurate and consistent results. Existing detection equipment, such as X-ray detectors, coordinate measuring machines and optical projectors, is expensive, complex to operate and costly to maintain, and is therefore difficult to deploy widely on large-scale production lines. Therefore, an optimized part offset angle detection method is desired.
In the technical scheme of the application, a part offset angle detection method is provided. Fig. 1 is a flowchart of a part offset angle detection method according to an embodiment of the present application. Fig. 2 is a data flow diagram of the method. As shown in figs. 1 and 2, the part offset angle detection method according to the embodiment of the application comprises the following steps: S1, capturing a part image from directly above the part; S2, performing edge detection on the part image to obtain a part contour image; S3, extracting a part standard contour image from a database; S4, performing contour feature extraction and feature gradient enhancement on the part contour image and the part standard contour image to obtain a part contour saliency detection feature map and a part contour saliency standard reference feature map; and S5, generating a part offset angle decoding estimate based on a feature comparison of the part contour saliency detection feature map and the part contour saliency standard reference feature map.
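As a rough illustration of the S1-S5 data flow, the method can be sketched as a pipeline in which each stage is a pluggable callable. All function names below are hypothetical stand-ins for the modules described in the text, not identifiers from the application:

```python
import numpy as np

def part_offset_angle_pipeline(part_image, standard_contour_image,
                               edge_detect, extract_features,
                               salify, decode_angle):
    """Sketch of the S1-S5 flow. The four callables stand in for the
    modules described in the text; their names are hypothetical."""
    contour = edge_detect(part_image)                        # S2: edge detection
    feat_det = extract_features(contour)                     # S4: shared feature extractor
    feat_ref = extract_features(standard_contour_image)      # S4: same extractor on the reference
    sal_det, sal_ref = salify(feat_det), salify(feat_ref)    # S4: gradient mask saliency
    return decode_angle(sal_det - sal_ref)                   # S5: compare features, decode angle
```

The standard contour image (S3) is assumed to be already loaded from the database, and S1, image capture, happens upstream of this function.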
In particular, in S1, a part image is captured from directly above the part. It will be appreciated that photographing from directly above aligns the camera's viewing axis with the vertical axis of the part, minimizing errors caused by view-angle tilt or offset. This helps the image reflect the actual shape and contour of the part more faithfully, enabling more accurate detection of the part's offset angle.
In particular, in S2, edge detection is performed on the part image to obtain a part contour image. In one specific example of the application, the part image is processed with the Canny edge detection operator to obtain the part contour image. It should be appreciated that in part offset angle detection, the contour information of the part is the basis for offset angle estimation. Because the part image may be affected by ambient light, the material of the part surface and other factors, the contour of the part may appear indistinct. To this end, the application further processes the part image with the Canny edge detection operator. Those skilled in the art will appreciate that the Canny operator offers good edge detection performance and noise immunity: through Gaussian filtering, gradient calculation, non-maximum suppression and hysteresis thresholding, it effectively detects edge information in an image while suppressing noise interference, yielding a part contour image with clear contour information and providing accurate contour data for subsequent offset angle detection.
Accordingly, in one possible implementation, the part image may be processed with the Canny edge detection operator to obtain the part contour image as follows: convert the part image to a grayscale image; smooth the image with a Gaussian filter to remove noise; compute the horizontal and vertical gradients of the image with the Sobel operator; for each pixel, retain only the gradient value along the direction of maximum gradient magnitude (non-maximum suppression); then threshold the gradient magnitudes with two thresholds. Pixels below the lower threshold are set to zero, and pixels above the higher threshold are set to the maximum value; pixels between the two thresholds are set to zero unless they are adjacent to pixels above the higher threshold. Finally, edge pixels are connected into contours by connected-component analysis to obtain the part contour image.
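The Sobel-gradient and double-threshold steps above can be sketched in plain NumPy. This is a simplified illustration (no Gaussian smoothing or full non-maximum suppression), not the exact implementation used by the method:

```python
import numpy as np

def sobel_gradients(gray):
    """Horizontal and vertical Sobel gradients via explicit 3x3 correlation."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    pad = np.pad(gray, 1, mode="edge")  # replicate borders
    h, w = gray.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return gx, gy

def double_threshold(mag, low, high):
    """Canny-style hysteresis, simplified: keep strong edges, and weak
    edges only when they are 8-adjacent to a strong edge."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    padded = np.pad(strong, 1)
    near_strong = np.zeros_like(strong)
    h, w = mag.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            near_strong |= padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return strong | (weak & near_strong)
```

On a vertical step edge, the gradient magnitude peaks at the step and the double threshold keeps only those peak pixels, which connected-component analysis would then link into a contour.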
In particular, in S3, the part standard contour image is extracted from the database. It should be appreciated that the standard contour image is a part contour image obtained under ideal conditions and therefore has high accuracy and clarity; using it as the reference standard for offset angle detection enables fast and accurate detection.
In particular, in S4, contour feature extraction and feature gradient enhancement are performed on the part contour image and the part standard contour image to obtain a part contour saliency detection feature map and a part contour saliency standard reference feature map. Specifically, in one specific example of the present application, as shown in FIG. 3, S4 comprises: S41, extracting image features of the part contour image and the part standard contour image to obtain a part contour detection feature map and a part contour standard reference feature map, respectively; and S42, inputting the part contour detection feature map and the part contour standard reference feature map into a feature distribution gradient mask saliency module to obtain the part contour saliency detection feature map and the part contour saliency standard reference feature map.
Specifically, in S41, image features of the part contour image and the part standard contour image are extracted to obtain a part contour detection feature map and a part contour standard reference feature map, respectively. In a specific example of the present application, the part contour image and the part standard contour image are input into a contour feature extractor based on a hole convolutional neural network model to obtain the part contour detection feature map and the part contour standard reference feature map. To ensure that the two images have the same representation and dimensionality in a high-dimensional feature space, facilitating subsequent feature comparison and difference analysis, the application inputs both images into a shared contour feature extractor based on the hole convolutional neural network model. The large receptive field of the hole convolutional neural network captures rich part contour information in the images and mines key detail features, such as edges, corner points and curves, to obtain the part contour detection feature map and the part contour standard reference feature map, providing a richer information basis for subsequent offset angle estimation.
Notably, the hole convolutional neural network is a special convolutional neural network (CNN) that enlarges the receptive field by using hole (also called dilated or atrous) convolution operations. Here, the receptive field refers to the area covered by the convolution kernel on the input feature map. Hole convolution is a modified convolution operation that introduces "holes", i.e., spacing between the elements of the convolution kernel. Its first advantage is that it expands the receptive field without increasing the number of parameters, which is very useful for tasks that require extensive context information, such as semantic segmentation and object detection. Its second advantage is that it maintains spatial resolution: unlike a pooling layer, hole convolution does not reduce the spatial resolution of the feature map, which is important for tasks requiring accurate localization, such as boundary detection.
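A minimal NumPy sketch of a single-channel hole (dilated) convolution makes the receptive-field property concrete: a k x k kernel at dilation rate r covers (k-1)*r + 1 pixels per axis while keeping the same number of kernel parameters:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate=1):
    """Single-channel 2-D hole (dilated/atrous) convolution, 'valid' padding.
    The kernel taps are spread over the input with `rate - 1` holes between them."""
    kh, kw = kernel.shape
    eh = (kh - 1) * rate + 1  # effective receptive-field extent (rows)
    ew = (kw - 1) * rate + 1  # effective receptive-field extent (cols)
    h, w = x.shape
    out = np.zeros((h - eh + 1, w - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + eh:rate, j:j + ew:rate]  # strided sampling = holes
            out[i, j] = float(np.sum(patch * kernel))
    return out
```

With a 3x3 kernel, rate=1 reproduces an ordinary convolution (receptive field 3 per axis), while rate=2 widens the receptive field to 5 per axis with no additional parameters.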
Specifically, in S42, the part contour detection feature map and the part contour standard reference feature map are input into a feature distribution gradient mask saliency module to obtain the part contour saliency detection feature map and the part contour saliency standard reference feature map. To further improve the contrast between the part contour detection features and the part contour standard reference features, and thereby the accuracy of offset angle detection, the application introduces a feature distribution gradient mask saliency module that processes the two feature maps and enhances the feature expression at edge contour positions by analyzing feature gradient information. Specifically, the module first computes the gradient magnitude at each position in the input feature map to quantify the local variation intensity of the pixels at that position. It then computes a gradient magnitude local description operator by measuring how salient each position's gradient magnitude is relative to the gradient magnitudes of the other positions in its local neighborhood, thereby estimating the feature saliency of each position. Next, the gradient magnitude local description operators at all positions are passed through a GELU nonlinear activation to generate a gating mask that guides the enhancement of contour information in the feature map. Finally, the feature map is weighted element by element with the generated gating mask, focusing on the contour-salient regions and suppressing background noise, to obtain clearer and more discriminative part contour saliency detection and standard reference feature maps.
In a specific example of the application, inputting the part contour detection feature map and the part contour standard reference feature map into the feature distribution gradient mask saliency module to obtain the part contour saliency detection feature map and the part contour saliency standard reference feature map comprises: calculating the multi-directional gradient value distribution at each position in the part contour detection feature map; determining the gradient magnitude at each position based on its multi-directional gradient value distribution to obtain a part contour detection feature gradient magnitude map; calculating the gradient magnitude local description operator at each position in the gradient magnitude map to obtain a part contour detection feature gradient magnitude local saliency map; inputting the local saliency map into a GELU-based gating mask unit to obtain a gradient magnitude local saliency gating mask map; and computing the position-wise product of the gating mask map and the part contour detection feature map to obtain the part contour saliency detection feature map.
Here, calculating the gradient magnitude local description operator at each position in the part contour detection feature gradient magnitude map to obtain the part contour detection feature gradient magnitude local saliency map comprises: determining the scale of a local neighborhood, and computing the mean of the differences between the gradient magnitude at a given position in the gradient magnitude map and the gradient magnitudes at the other positions in its local neighborhood, to obtain the gradient magnitude local description operator for that position.
To sum up, in the above embodiment, inputting the part contour detection feature map and the part contour standard reference feature map into the feature distribution gradient mask saliency module to obtain the part contour saliency detection feature map and the part contour saliency standard reference feature map includes processing the part contour detection feature map $F$ with the feature distribution gradient mask formulas to obtain the part contour saliency detection feature map $F'$:

$$g_x(x,y,c)=F(x+1,y,c)-F(x-1,y,c)$$
$$g_y(x,y,c)=F(x,y+1,c)-F(x,y-1,c)$$
$$g_c(x,y,c)=F(x,y,c+1)-F(x,y,c-1)$$
$$G(x,y,c)=\sqrt{g_x(x,y,c)^2+g_y(x,y,c)^2+g_c(x,y,c)^2}$$
$$D(x,y,c)=\frac{1}{N}\sum_{(\Delta x,\,\Delta y,\,\Delta c)\in\Omega(x,y,c)}\bigl(G(x,y,c)-G(x+\Delta x,\,y+\Delta y,\,c+\Delta c)\bigr)$$
$$M(x,y,c)=\mathrm{GELU}\bigl(D(x,y,c)\bigr)$$
$$F'=M\odot F$$

wherein $F(x\pm 1,y,c)$, $F(x,y\pm 1,c)$ and $F(x,y,c\pm 1)$ represent the gray values at the corresponding positions of the part contour detection feature map; $g_x(x,y,c)$, $g_y(x,y,c)$ and $g_c(x,y,c)$ represent the abscissa-direction, ordinate-direction and channel-direction gradient values of the pixel at position $(x,y,c)$; $G(x,y,c)$ represents the gradient magnitude at position $(x,y,c)$; $\Omega(x,y,c)$ represents the set of offsets defining the local neighborhood centered at position $(x,y,c)$ in the gradient magnitude map, and $N$ the number of gradient magnitude values in that neighborhood; $\Delta x$, $\Delta y$ and $\Delta c$ indicate the offsets in the abscissa, ordinate and channel directions, respectively; $D(x,y,c)$ represents the gradient magnitude local description operator at position $(x,y,c)$; $\mathrm{GELU}(\cdot)$ represents the GELU activation function; $M$ represents the gradient magnitude local saliency gating mask map; $\odot$ represents position-wise multiplication; and $F'$ represents the part contour saliency detection feature map.
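Assuming a single-channel feature map (dropping the channel axis for brevity), the gradient-magnitude masking steps can be sketched as follows; `radius`, the neighborhood scale, is a free parameter not fixed by the text:

```python
import numpy as np

def gelu(x):
    """tanh approximation of the GELU activation."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))

def gradient_mask_salify(feat, radius=1):
    """Feature distribution gradient mask saliency sketch for a 2-D map."""
    gy, gx = np.gradient(feat)      # per-position gradient components
    mag = np.hypot(gx, gy)          # gradient magnitude map G
    h, w = feat.shape
    desc = np.zeros_like(mag)       # local description operator D
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            neighborhood = mag[i0:i1, j0:j1]
            desc[i, j] = np.mean(mag[i, j] - neighborhood)  # mean difference to neighbors
    mask = gelu(desc)               # gating mask M
    return mask * feat              # position-wise weighting F' = M * F
```

On a flat region the gradient magnitude, and hence the mask, is zero, so background is suppressed; positions whose gradient magnitude exceeds that of their neighbors receive a positive gate and are amplified.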
It should be noted that in other specific examples of the present application, the contour feature extraction and feature gradient enhancement may be performed in other ways to obtain the part contour saliency detection feature map and the part contour saliency standard reference feature map. For example: take the part contour image and the part standard contour image as input; extract their features with a contour feature extractor based on a hole convolutional neural network to obtain the part contour detection feature map and the part contour standard reference feature map; compute the gradients of both feature maps and enhance the saliency information in them with a gradient enhancement technique; and combine the enhanced gradients with the two feature maps to obtain the part contour saliency detection feature map and the part contour saliency standard reference feature map.
In particular, in S5, a part offset angle decoding estimate is generated based on a feature comparison of the part contour saliency detection feature map and the part contour saliency standard reference feature map. In a specific example of the present application, the difference features between the two feature maps are first extracted to obtain a part offset angle differential representation feature map. To visually display and quantitatively represent how far the part contour deviates from the standard contour image, the application computes, position by position, the feature difference between the part contour saliency detection feature map and the part contour saliency standard reference feature map to obtain the part offset angle differential representation feature map, providing intuitive, quantified data support for estimating the part offset angle. Further, the part offset angle differential representation feature map is input into a decoder-based part offset angle estimation module to obtain the part offset angle decoding estimate. It should be appreciated that the decoder is primarily used to decode the encoded features back into the original space or a particular form.
In the technical scheme of the application, the decoder-based part offset angle estimation module applies multi-layer nonlinear transformations to the part offset angle differential representation feature map, so that the part contour offset information it carries is fully exploited and mapped into the numerical space of the part offset angle, yielding the part offset angle decoding estimate and thus an accurate estimation of the part offset angle.
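A toy illustration of this step follows, with a randomly initialized two-layer MLP standing in for the trained decoder; the architecture, layer sizes and function names are assumptions for illustration, not details from the application:

```python
import numpy as np

def differential_map(f_det, f_ref):
    """Position-wise difference of the two saliency feature maps."""
    return f_det - f_ref

class AngleDecoder:
    """Two-layer MLP sketch mapping the flattened differential
    representation feature map to a scalar offset-angle estimate.
    Weights are random here; in the described method they are learned."""
    def __init__(self, in_dim, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, diff_map):
        x = np.asarray(diff_map).ravel()
        h = np.maximum(x @ self.w1 + self.b1, 0.0)  # multi-layer nonlinear transform
        return float(h @ self.w2 + self.b2)         # decoded scalar angle value
```

With zero biases, a zero differential map (detection features identical to the reference) decodes to zero, matching the intuition that no contour deviation means no offset angle.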
In a preferred example, inputting the part offset angle differential representation feature map into a decoder-based part offset angle estimation module to obtain a part offset angle decoding estimate comprises the steps of:
Clustering all feature values of the part offset angle differential representation feature map based on the distances between feature values, and arranging the clustered feature values into a part offset angle differential representation cluster vector;
Determining a clustering proportion value of the number of the characteristic values of the part offset angle differential representation clustering vector and the number of the characteristic values of the part offset angle differential representation characteristic map;
Dividing the two norms of the part offset angle differential representation clustering vector by the two norms of the part offset angle differential representation feature vector obtained after the part offset angle differential representation feature map is unfolded to obtain a part offset angle differential representation conflict representation value;
Dividing a first power value of a norm of the part offset angle differential representation clustering vector with the clustering proportion value as an index by a second power value of a norm of the part offset angle differential representation feature vector with the clustering proportion value as an index to obtain a part offset angle differential representation countermeasure representation value;
for each eigenvalue of the part offset angle differential representation cluster vector, multiplying it by the inverse of the difference between the part offset angle differential representation conflict representation value and the part offset angle differential representation countermeasure representation value to obtain an optimized eigenvalue of the part offset angle differential representation cluster vector;
for each feature value outside the cluster in the part offset angle differential representation feature map, multiplying the feature value by the inverse of the sum of the part offset angle differential representation conflict representation value and the part offset angle differential representation countermeasure representation value to obtain an optimized out-of-class feature value of the part offset angle differential representation feature map;
Forming an optimized part offset angle differential representation feature map from the optimized feature values of the part offset angle differential representation cluster vector and the optimized extrageneric feature values of the part offset angle differential representation feature map, and
Inputting the optimized part offset angle differential representation feature map into a decoder-based part offset angle estimation module to obtain a part offset angle decoding estimated value.
Expressed as:

$$r = \frac{M}{N}, \qquad \alpha = \frac{\|C\|_2}{\|V\|_2}, \qquad \beta = \left(\frac{\|C\|_1}{\|V\|_1}\right)^{r}$$

$$v_i' = \begin{cases} \dfrac{v_i}{\alpha - \beta}, & v_i \in S \\[1ex] \dfrac{v_i}{\alpha + \beta}, & v_i \notin S \end{cases}$$

wherein $V$ is the part offset angle differential representation feature vector, $N$ is the number of feature values of $V$, $C$ is the part offset angle differential representation clustering vector, $M$ is the number of feature values of $C$, $S$ is the clustering feature set corresponding to the part offset angle differential representation clustering vector, $\|\cdot\|_2$ and $\|\cdot\|_1$ denote the two-norm and one-norm of a vector, respectively, and $(\cdot)^{r}$ denotes raising to the power of the clustering proportion value.
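The optimization steps above can be sketched as follows. This is a NumPy sketch in which the clustering rule (feature values whose magnitude exceeds the mean magnitude) is a stand-in, since the patent specifies only that the clustering is based on inter-feature-value distances:

```python
import numpy as np

def optimize_differential_fm(fm: np.ndarray) -> np.ndarray:
    """Rescale a part offset angle differential representation
    feature map using the conflict/countermeasure values described
    above. The cluster-membership rule below is a stand-in."""
    v = fm.ravel().astype(float)             # feature vector V (unfolded map)
    mask = np.abs(v) > np.abs(v).mean()      # stand-in cluster membership
    c = v[mask]                              # clustering vector C
    r = c.size / v.size                      # clustering proportion value
    alpha = np.linalg.norm(c, 2) / np.linalg.norm(v, 2)          # conflict value
    beta = (np.linalg.norm(c, 1) / np.linalg.norm(v, 1)) ** r    # countermeasure value
    out = v.copy()
    out[mask] = v[mask] / (alpha - beta)     # in-cluster rescaling
    out[~mask] = v[~mask] / (alpha + beta)   # out-of-cluster rescaling
    return out.reshape(fm.shape)
```

Note that when the conflict and countermeasure values are nearly equal, the in-cluster divisor approaches zero and the in-cluster features are strongly amplified; the patent does not describe any additional safeguard for that case.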
That is, since the part contour saliency detection feature map and the part contour saliency standard reference feature map represent the image semantic features of the part contour image and the part standard contour image, respectively, further computing the difference feature map between them in order to perform aggregation mapping onto the image-semantic difference representation domain can cause an image-semantic aggregation alignment conflict due to differences in image semantic feature enhancement. Key aggregation information is then lost, which degrades the expression effect of the part offset angle differential representation feature map.
Therefore, in order to avoid the loss of key semantic information relative to the whole original feature set caused by the aggregation conflict, the clustering proportion of the number of feature values of the part offset angle differential representation clustering vector to the number of feature values of the part offset angle differential representation feature map is used as a discriminant exponent to perform a countermeasure-type discrimination between the absolute (one-norm) set representations of the part offset angle differential representation clustering vector and the part offset angle differential representation feature vector. This term interacts, with opposite signs, with the inherent clustering conflict representation given by the two-norms of the part offset angle differential representation clustering vector and the part offset angle differential representation feature vector, respectively. A firm alignment guardrail between the aggregated features and the whole original feature set is thereby constructed for the optimized part offset angle differential representation feature map, avoiding the information loss caused by aggregation-risk mobility and improving the expression effect of the optimized part offset angle differential representation feature map, which in turn improves the accuracy of the part offset angle decoding estimated value obtained by inputting the optimized part offset angle differential representation feature map into the decoder-based part offset angle estimation module.
In summary, the method for detecting the offset angle of a part according to the embodiment of the present application has been explained. A camera collects a part image from directly above the part, and edge detection is performed on the part image to obtain a part contour image; meanwhile, a part standard contour image is extracted from a database. Deep-learning-based image processing is then used to perform contour feature extraction and contour saliency processing on the part contour image and the part standard contour image, so that the part offset angle is intelligently estimated from the feature difference between them. In this way, the precision and efficiency of part offset angle detection can be effectively improved, production costs reduced, and product quality further improved.
The foregoing description covers only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Persons skilled in the art will appreciate that the scope of the disclosure is not limited to the specific combinations of features described above, but also covers other embodiments formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example embodiments in which the above features are interchanged with (but not limited to) technical features having similar functions disclosed in the present disclosure.
Moreover, although operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are exemplary forms of implementing the claims. The specific manner in which the various modules perform their operations in the apparatus of the above embodiments has been described in detail in connection with the embodiments of the method and will not be repeated here.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411277457.7A CN119205674A (en) | 2024-09-12 | 2024-09-12 | Parts offset angle detection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411277457.7A CN119205674A (en) | 2024-09-12 | 2024-09-12 | Parts offset angle detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN119205674A true CN119205674A (en) | 2024-12-27 |
Family
ID=94077794
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202411277457.7A Pending CN119205674A (en) | 2024-09-12 | 2024-09-12 | Parts offset angle detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN119205674A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05141920A (en) * | 1991-11-20 | 1993-06-08 | Yamatake Honeywell Co Ltd | Parts status detection method |
US5233670A (en) * | 1990-07-31 | 1993-08-03 | Thomson Trt Defense | Method and device for the real-time localization of rectilinear contours in a digitized image, notably for shape recognition in scene analysis processing |
CN108447070A (en) * | 2018-03-15 | 2018-08-24 | 中国科学院沈阳自动化研究所 | A kind of industrial part defect detection algorithm based on pixel vectors invariant relation feature |
CN109829876A (en) * | 2018-05-30 | 2019-05-31 | 东南大学 | Carrier bar on-line detection device of defects and method based on machine vision |
CN116758067A (en) * | 2023-08-16 | 2023-09-15 | 梁山县成浩型钢有限公司 | Metal structural member detection method based on feature matching |
CN118298090A (en) * | 2024-03-20 | 2024-07-05 | 天津理工大学 | Object contour and texture enhanced SLAM method based on NeRF |
CN118608504A (en) * | 2024-06-19 | 2024-09-06 | 广州宏江智能装备有限公司 | A method and system for detecting component surface quality based on machine vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Maini et al. | Study and comparison of various image edge detection techniques | |
GB2569751A (en) | Static infrared thermal image processing-based underground pipe leakage detection method | |
CN111325721A (en) | Gas leakage detection method and system based on infrared thermal imaging | |
CN111259706B (en) | Lane line pressing judgment method and system for vehicle | |
CN111161222B (en) | A Visual Saliency Based Defect Detection Method for Printing Cylinders | |
Hussain et al. | A comparative analysis of edge detection techniques used in flame image processing | |
Mainali et al. | Robust low complexity corner detector | |
Krishnan et al. | A survey on different edge detection techniques for image segmentation | |
CN109816645B (en) | Automatic detection method for steel coil loosening | |
CN114627080B (en) | Vehicle stamping accessory defect detection method based on computer vision | |
CN113705564B (en) | Pointer type instrument identification reading method | |
CN108182704A (en) | Localization method based on Shape context feature | |
CN112991374A (en) | Canny algorithm-based edge enhancement method, device, equipment and storage medium | |
CN105787870A (en) | Graphic image splicing fusion system | |
CN108537815B (en) | Video image foreground segmentation method and device | |
CN108399614B (en) | A Fabric Defect Detection Method Based on Unsampled Wavelet and Gumbel Distribution | |
Zhang et al. | Reading various types of pointer meters under extreme motion blur | |
Liu et al. | A deep learning-based method for structural modal analysis using computer vision | |
Zheng et al. | Research on edge detection algorithm in digital image processing | |
Wang et al. | Fast blur detection algorithm for UAV crack image sets | |
CN119205674A (en) | Parts offset angle detection method | |
CN108389204B (en) | Degraded image fuzzy kernel double-parameter direct estimation method for high-speed online detection | |
CN106530292A (en) | Strip steel surface defect image rapid identification method based on line scanning camera | |
CN109359646A (en) | Identification method of liquid level instrument based on inspection robot | |
CN115908275A (en) | Hot ring rolling deformation geometric state online measurement method based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||