
CN116433700B - Visual positioning method for flange part contour - Google Patents

Visual positioning method for flange part contour

Info

Publication number
CN116433700B
Authority
CN
China
Prior art keywords
flange part
convolution kernel
determining
pixel points
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310693060.5A
Other languages
Chinese (zh)
Other versions
CN116433700A (en)
Inventor
王荣景
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Jinrunyuan Flange Machinery Co ltd
Original Assignee
Shandong Jinrunyuan Flange Machinery Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Jinrunyuan Flange Machinery Co ltd filed Critical Shandong Jinrunyuan Flange Machinery Co ltd
Priority to CN202310693060.5A priority Critical patent/CN116433700B/en
Publication of CN116433700A publication Critical patent/CN116433700A/en
Application granted granted Critical
Publication of CN116433700B publication Critical patent/CN116433700B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/143 Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20076 Probabilistic image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30108 Industrial image inspection
    • G06T 2207/30164 Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application provides a flange part contour visual positioning method, which relates to the field of image processing, and comprises the following steps: acquiring an image of the flange part, and converting the image into a gray image; determining a convolution kernel size based on the grayscale image; determining a weight modification value based on the distribution characteristics of the pixel points in the convolution kernel; and optimizing the initial sobel operator by using the weight modification value, and processing the flange part by using the optimized sobel operator so as to acquire the contour information corresponding to the flange part. The scheme can obtain complete contour information.

Description

Visual positioning method for flange part contour
Technical Field
The application relates to the field of image processing, in particular to a flange part contour visual positioning method.
Background
The visual positioning of flange part contours is an important task in the manufacture, assembly, and inspection of mechanical parts. Flanges can be classified by contour into right-angle flanges, circular flanges, tapered flanges, annular flanges, and so on. The flange contour is generally first extracted by edge detection; post-processing then yields a binary image containing only contour information, and basic contour information, including contour position and shape, is obtained by analyzing the binary image. Among edge detection operators the sobel operator is commonly chosen: because the flange image has a simple color structure, the sobel operator, whose calculation is simpler and clearer, is well suited.
In the process of extracting the flange contour with a sobel operator, the limited viewing angle of the acquisition camera causes the edge portion of the image obtained after operator detection to be incomplete, so the contour information of the flange part obtained in the subsequent post-processing is also incomplete.
Disclosure of Invention
The application provides a flange part contour visual positioning method which can obtain complete contour information.
The application provides a flange part contour visual positioning method, which comprises the following steps:
acquiring an image of the flange part, and converting the image into a gray image;
determining a convolution kernel size based on the grayscale image;
determining a weight modification value based on the distribution characteristics of the pixel points in the convolution kernel;
and optimizing the initial sobel operator by using the weight modification value, and processing the flange part by using the optimized sobel operator so as to acquire the contour information corresponding to the flange part.
In an alternative embodiment, determining a convolution kernel size based on the grayscale image includes:
detecting gray values of pixel points in a window in a sliding process by using two windows sliding in parallel in the gray image, and further determining abrupt pixel points, wherein the abrupt pixel points are pixel points corresponding to preset values when the gray values jump from 0 to the preset values;
connecting the abrupt pixel points, and determining a normal line from a perpendicular line of the connection line;
determining a localized area of the flange part based on the normal;
the convolution kernel size is determined based on the local region.
In an alternative embodiment, determining the convolution kernel size based on the local region includes:
calculating entropy of the local area based on probability of occurrence of different gray values in the local area;
determining a first pixel point when the entropy of the local area changes from 0 to not 0, and determining a second pixel point when the entropy of the local area changes from not 0 to 0;
and determining the distance between the first pixel point and the second pixel point, wherein the distance is the convolution kernel size.
In an alternative embodiment, calculating entropy of the local region based on probabilities of occurrence of different gray values in the local region includes:
the entropy of the local region is calculated using the following equation (1):
$H(x) = -\sum_{i=1}^{m} p_i \log_2 p_i$　(1);

wherein $H(x)$ represents the entropy of a local area $x$; $p_i$ represents the probability of occurrence of the $i$-th gray value in the local area; $m$ represents the number of different gray values.
In an alternative embodiment, determining the weight modification value based on the distribution characteristics of the pixel points within the convolution kernel includes:
judging the necessity of marking the central pixel point in the convolution kernel based on the entropy in the convolution kernel, the maximum value and the minimum value of the gray values of the pixel points in the convolution kernel;
if the marking necessity is in the first preset range, not marking, and if the marking necessity is in the second preset range, marking;
the marked pixel points are connected in pairs, and the shape of the flange part is determined;
the weight modification value is determined based on the shape.
In an alternative embodiment, the method for connecting the marked pixel points in pairs to determine the shape of the flange part comprises the following steps:
connecting the marked pixel points in pairs to obtain a connecting line;
determining a normal line of a connecting line, the normal line passing through a midpoint of the connecting line;
if all normals intersect at a point, the flange part is represented as a circular profile; if all normals are parallel to each other, it means that the flange part has a rectangular profile.
In an alternative embodiment, determining the weight modification value based on the shape includes:
if the flange part is of a circular outline, the weight in the direction of the connecting line composition is increased, the weight in the other directions is reduced, and then the weight modification value is determined.
In an alternative embodiment, the processing the flange part by using the optimized sobel operator to obtain the profile information corresponding to the flange part includes:
processing the flange part by using the optimized sobel operator to obtain a gradient output value;
and determining contour information corresponding to the flange part based on the gradient output value.
In an alternative embodiment, the processing the flange part by using the optimized sobel operator to obtain a gradient output value includes:
obtaining a gradient output value by using the following formula (2):
$G = \sum_{i=1}^{4} w_i \left| G_i \right|$　(2);

wherein $G$ represents the gradient output value of the sobel operator; $G_1, G_2, G_3, G_4$ respectively represent the components of $G$ in the 4 directions $0$, $\pi/4$, $\pi/2$, $3\pi/4$; $w_1, w_2, w_3, w_4$ respectively represent the weight modification values for the 4 directions. In the initial sobel operator the 4 weights are equal; $n$ represents the number of pixel points marked in the connection-line direction, and in the optimized sobel operator the weight in the connection-line direction is increased according to $n$ while the weights in the other directions are reduced.
in an alternative embodiment, determining the necessity of marking the center pixel in the convolution kernel based on the entropy in the convolution kernel, the maximum value and the minimum value of the gray values of the pixels in the convolution kernel includes:
the necessity of marking the center pixel point in the convolution kernel is calculated using the following equation (3):
$B = 1 - \exp\!\left( -\dfrac{H(x) \cdot \left( G_{\max} - G_{\min} \right)}{\bar{G}} \right)$　(3);

wherein $B$ represents the necessity of marking the central pixel point in the convolution kernel; $p_i$ represents the probability of occurrence of pixels of different gray values within the convolution kernel, from which the entropy $H(x)$ is computed as in equation (1); $G_{\max}$ and $G_{\min}$ respectively represent the maximum and minimum gray values of the pixel points in the convolution kernel; $\bar{G}$ represents their mean.
The beneficial effects of the application are as follows: compared with the prior art, the flange part contour visual positioning method provided by the application comprises the following steps: acquiring an image of the flange part, and converting the image into a gray image; determining a convolution kernel size based on the grayscale image; determining a weight modification value based on the distribution characteristics of the pixel points in the convolution kernel; and optimizing the initial sobel operator by using the weight modification value, and processing the flange part by using the optimized sobel operator so as to acquire the contour information corresponding to the flange part. The scheme can obtain complete contour information.
Drawings
FIG. 1 is a flow chart of a first embodiment of the flange part contour visual positioning method of the present application;
FIG. 2 is a flow chart of an embodiment of step S12 in FIG. 1;
FIG. 3 is a flowchart illustrating an embodiment of step S13 in FIG. 1;
FIG. 4 is a schematic diagram of a connection line of marked pixels.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The present application will be described in detail with reference to the accompanying drawings and examples.
Referring to fig. 1, a flow chart of an embodiment of a flange part contour visual positioning method according to the present application specifically includes:
step S11: and acquiring an image of the flange part, and converting the image into a gray image.
Specifically, an image of the flange part is collected. To increase the contrast between pixel points and facilitate the subsequent improvement of the operator, the collected image is grayed and then equalized to obtain a gray image; the subsequent processing and analysis are performed on the basis of this gray image.
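Step S11 can be sketched as follows. This is an illustrative implementation, not part of the patent disclosure: it assumes an 8-bit RGB input, the common luminance weights for graying, and standard histogram equalization for the equalization step.

```python
import numpy as np

def to_equalized_gray(rgb):
    """Convert an RGB image (H, W, 3, uint8) to an equalized gray image.

    Graying uses the common luminance weights; histogram equalization
    then stretches the contrast before edge detection.
    """
    gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / (gray.size - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)     # lookup table 0..255
    return lut[gray]
```

A dark/bright two-region test image is mapped to the full 0 to 255 range, which is the contrast stretch the description relies on.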
Step S12: a convolution kernel size is determined based on the grayscale image.
A preliminary analysis is carried out on the preprocessed image to obtain prior features at the flange edge. The purpose of obtaining the prior features is to obtain the width of the flange edge, so that the computed convolution kernel can cover the edge. The general sobel operator uses a 3×3 convolution kernel; to prevent the local edge width from being larger than this, the convolution kernel size must be determined here so that the kernel wraps around the edge. The purpose of making the kernel cover the edge is that, during the operator's operation, the pixel points within the convolution kernel can completely reflect the edge portion; it is therefore judged, according to the prior features, whether the distribution of the gray values of the pixel points in the convolution kernel satisfies the rule of pixel-point distribution at the flange edge. The prior features are that the difference between the gray values at the flange edge and at the background is large, and that the edge has a width within which the gray level changes gradually. The preliminary width of the edge is obtained by quantifying the gray-level difference and the gradual gray-level change. The edge obtained here is a blurry edge; the main purpose is to determine its width, so that the convolution kernel is wide enough to completely cover the edge and facilitate the analysis in the subsequent steps.
Specifically, referring to fig. 2, step S12 specifically includes:
step S21: and detecting gray values of pixel points in the window in a sliding process by using two windows sliding in parallel in the gray image, and further determining abrupt pixel points, wherein the abrupt pixel points are pixel points corresponding to preset values when the gray values jump from 0 to the preset values.
Specifically, in the gray image the background gray value is 0. Two sliding windows sliding in parallel are used, so that the gray values of the pixel points in each window are detected during sliding; if the gray value in a window jumps from 0 to a preset value, the pixel point corresponding to that preset value is determined to be an abrupt pixel point. That is, the first pixel point at which the gray value jumps from 0 to a preset value is determined to be the abrupt pixel point.
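The abrupt-pixel test along one scan row can be sketched as follows. This is an illustration only; the preset threshold of 50 is an assumed value, since the patent does not state the preset value.

```python
import numpy as np

def find_abrupt_pixel(row, preset=50):
    """Return the column index of the first abrupt pixel in a scan row.

    An abrupt pixel is the pixel whose gray value is at least the preset
    value while the preceding pixel still has the background value 0.
    Returns None if no such jump occurs.
    """
    row = np.asarray(row)
    for i in range(1, len(row)):
        if row[i - 1] == 0 and row[i] >= preset:
            return i
    return None
```

Running two such scans on parallel window tracks gives the pair of abrupt pixel points whose connection line is used in step S22.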
Step S22: and connecting the abrupt pixel points, and determining a normal line by a perpendicular line of the connecting line.
And connecting abrupt pixel points, wherein the vertical line of the connecting line is the normal line of the area with larger gray value difference.
Step S23: a localized region of the flange part is determined based on the normal.
After the normal direction is obtained, the area whose direction is consistent with the normal direction is defined as the local area, i.e. the local area of the flange part. Specifically, the normal passes through the middle pixel point of the local area.
Step S24: the convolution kernel size is determined based on the local region.
Specifically, the entropy of the local region is calculated based on the probability of occurrence of different gray values in the local region. In one embodiment, the entropy of the local region is calculated using the following equation (1):
(1);
wherein ,representing different local areas +.>Entropy of (2); />Representing the probability of occurrence of different gray values in the local area;representing the number of different gray values.
When the window lies in the background portion, the entropy is 0; when the window lies in the foreground portion but not in the edge portion, the entropy is likewise 0; when the window lies in the edge region, the entropy is not 0. Based on this, the first pixel point is determined as the point at which the entropy H(x) of the local area changes from 0 to not 0, and the second pixel point as the point at which the entropy changes from not 0 back to 0. The distance n between the first pixel point and the second pixel point is then determined, and this distance is the convolution kernel size, i.e. the convolution kernel is of size n×n. In one embodiment, n has a value of 5.
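The entropy of equation (1) over a local region can be computed as in the following sketch. The base-2 logarithm is an assumption for illustration; the patent does not specify the logarithm base, which only rescales the entropy and does not affect the zero/nonzero test used to pick the two boundary pixels.

```python
import numpy as np

def region_entropy(region):
    """Shannon entropy of the gray values in a local region, per equation (1).

    Uniform regions (pure background or pure foreground) give 0;
    only regions mixing several gray values, i.e. edges, give nonzero entropy.
    """
    vals, counts = np.unique(np.asarray(region).ravel(), return_counts=True)
    p = counts / counts.sum()          # probability of each distinct gray value
    return float(-(p * np.log2(p)).sum())
```

A window of a single gray value has entropy 0, while a half-and-half two-value window has entropy 1 bit, matching the background/foreground/edge cases in the description.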
Step S13: a weight modification value is determined based on the distribution characteristics of the pixels within the convolution kernel.
The step S12 obtains the size of the convolution kernel, which may already include the width of the basic edge, so in step S13, the distribution characteristics of the pixels in the convolution kernel may be directly analyzed, and whether the distribution characteristics satisfy the prior rule of the edge pixel distribution is analyzed.
In one embodiment, referring to fig. 3, step S13 specifically includes:
step S31: and judging the necessity of marking the central pixel point in the convolution kernel based on the entropy in the convolution kernel, the maximum value and the minimum value of the gray values of the pixel points in the convolution kernel.
The purpose of determining the marking necessity of the central pixel point in the convolution kernel is to assign high necessity to points whose distribution characteristics within the kernel satisfy the prior features, so that they are regarded as points satisfying the edge rule; this series of points is then connected and their basic distribution form in the spatial domain is quantified, further confirming the points that satisfy the edge rule.
For the points satisfying the rule, since the viewing-angle problem can cause the edge to be missing over a certain section, the gradient weights must be redistributed and the gradient proportion at the edge amplified, so that the problem of the missing edge can be resolved in the final output result and a good edge detection effect achieved.
In one embodiment, the necessity of marking the center pixel point in the convolution kernel is calculated using the following equation (3):
(3);
wherein ,indicating the necessity of marking the central pixel point in the convolution kernel,/-, for example>Pixels representing different gray values within the convolution kernel; />Representing the probability of occurrence of pixels of different gray values in the convolution kernel, +.>Respectively representing the maximum value and the minimum value of the gray value of the pixel point in the convolution kernel.
Step S32: if the marking necessity is within a first preset range, the marking is not performed, and if the marking necessity is within a second preset range, the marking is performed.
It will be appreciated that the greater the entropy within the convolution kernel, the greater the marking necessity, because at the edge the distribution of pixel gray values is more discrete than in the foreground or background; the marking necessity is therefore proportional to the entropy. Likewise, the larger the difference between the maximum and minimum gray values, the greater the marking necessity, because the gray value at the edge is generally lower than the foreground and higher than the background. Conversely, the larger the mean gray value, the smaller the marking necessity, because in the foreground the average gray value is significantly higher than at the edge (which contains low-gray-value points).
In particular, the closer the marking necessity B is to 1, the greater the marking necessity. Thus, in an embodiment, if the marking necessity is within a first preset range, no marking is performed, and if it is within a second preset range, marking is performed; the second preset range is the one closer to 1. In one embodiment, the first preset range is [0, 0.8] and the second preset range is (0.8, 1], i.e. no marking is performed when B is at most 0.8, and marking is performed when B exceeds 0.8. The purpose of marking the pixel points is to analyze the basic distribution form, or distribution rule, of all the marked points in the spatial domain, and to judge whether it satisfies a basic structure of the flange contour, such as a circle or a rectangle. If a basic structure is satisfied, the gradient weights need to be reconstructed; the aim of the reconstruction is that the output of the reconstructed sobel operator at the incomplete edge portion complements that portion, thereby supplementing the incomplete edge.
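The marking-necessity computation can be sketched as follows. The exact normalization of equation (3) is not recoverable from the translation; this sketch uses 1 - exp(-H * (max - min) / mean), an assumed form chosen only because it matches the stated monotonic behavior (it rises with entropy and gray range, falls with the mean) and lies in [0, 1), so the 0.8 threshold applies.

```python
import numpy as np

def marking_necessity(kernel):
    """Marking necessity B of a kernel's center pixel (illustrative form).

    Combines the entropy of the kernel's gray values with the gray range
    and mean, as described for equation (3); the 1 - exp(...) wrapper is
    an assumption, not the patent's exact formula.
    """
    k = np.asarray(kernel, dtype=float)
    vals, counts = np.unique(k.ravel(), return_counts=True)
    p = counts / counts.sum()
    entropy = -(p * np.log2(p)).sum()
    spread = k.max() - k.min()
    mean = k.mean()
    if mean == 0:                      # all-background kernel: nothing to mark
        return 0.0
    return float(1.0 - np.exp(-entropy * spread / mean))
```

A uniform kernel scores 0 (no marking), while a kernel straddling an edge scores close to 1 and falls into the second preset range (0.8, 1].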
Step S33: and connecting the marked pixel points in pairs to determine the shape of the flange part.
After the pixel points are marked, the shape formed by the marked pixel points in the spatial domain must be quantified, so that the modification value of the gradient weight can be obtained according to the shape.
In one embodiment, the quantization is performed as follows: the marked pixel points are connected in pairs to obtain connection lines, and the normal of each connection line, passing through its midpoint, is determined. After the connection is completed, the degree of intersection among all normals is analyzed. If all normals intersect at one point, the flange part has a circular contour; if all normals are parallel to each other, the flange part has a rectangular contour. For the circular-contour flange in this scene, the final purpose of the scheme is to complement the positions where the edge contour is incomplete, so the gradient weight modification value must be modified according to the shape, achieving the purpose of complementing the contour defect.
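The circle-versus-rectangle decision can be illustrated as follows. Instead of intersecting the normals explicitly, this sketch uses the equivalent geometric fact that the perpendicular bisectors of chords all meet at one point exactly when a single center fits every marked point; the least-squares circle fit is an assumption chosen for the illustration, not the patent's procedure.

```python
import numpy as np

def classify_contour(points, tol=1e-6):
    """Classify marked pixel points as a 'circle' or 'rectangle' contour.

    Fits a common center c by rewriting |p - c|^2 = r^2 as a linear
    system in (cx, cy, r^2 - |c|^2); if all points end up equidistant
    from c, their chord normals concur (circle), otherwise not.
    """
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([2 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:2]
    dists = np.linalg.norm(pts - center, axis=1)
    return "circle" if np.ptp(dists) < tol else "rectangle"
```

Points sampled on a circle classify as "circle"; points including a straight edge segment do not, which stands in for the parallel-normals case.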
Step S34: the weight modification value is determined based on the shape.
In step S12 the convolution kernel size of the operator has already been determined; now the calculation mode of the operator and its output result must be determined. The calculation mode is determined by the basic contour: for example, under a circular flange contour, weights must be added around the circular rule so that the output result can complement the defect. The method of adding weights around the circular rule is as follows: since the convolution kernel completely covers the edge width, then compared with the ordinary gradient output at the central pixel point (which compares the gray values of the pixel points above and below it), the weight in the direction formed by the marked pixel-point connection lines is increased so as to satisfy the circular rule, making the increased output value conform better to the circular flange contour.
In an embodiment, referring to fig. 4, there is a connection line of marked pixel points in the 3π/4 direction; the connection-line direction of the marked pixel points is the gradient direction, and the weight value in this direction is increased to obtain the weight modification value.
Step S14: and optimizing the initial sobel operator by using the weight modification value, and processing the flange part by using the optimized sobel operator so as to acquire the contour information corresponding to the flange part.
Specifically, the flange part is processed by utilizing the optimized sobel operator to obtain a gradient output value; and determining contour information corresponding to the flange part based on the gradient output value.
In one embodiment, the gradient output value is obtained using the following equation (2):
(2);
wherein ,expressing the gradient output value of the sobel operator, < +.>Respectively indicate->Components in 4 directions; />The weight modification values for the 4 directions are represented, respectively, and in the initial sobel operator,n represents the number of pixel points marked in the connecting line composition direction, and in the optimized sobel operator,
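The weighted 4-direction gradient output of equation (2) can be sketched as follows for a single 3×3 patch. The concrete direction kernels are assumptions for illustration (the classic horizontal/vertical Sobel pair and its diagonal variants), since the patent does not reproduce them.

```python
import numpy as np

# Hypothetical 3x3 kernels for the 4 gradient directions 0, pi/4, pi/2, 3pi/4
# (degrees used as dict keys): the classic Sobel pair plus diagonal variants.
KERNELS = {
    0:   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]),
    45:  np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]]),
    90:  np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]),
    135: np.array([[2, 1, 0], [1, 0, -1], [0, -1, -2]]),
}

def weighted_gradient(patch, weights=None):
    """Weighted output G = sum_i w_i * |G_i| over the 4 directions.

    With all weights 1 this is the plain 4-direction response (the
    initial operator); the optimized operator raises the weight of the
    direction along the marked connection lines.
    """
    if weights is None:
        weights = {d: 1.0 for d in KERNELS}
    patch = np.asarray(patch, dtype=float)
    return sum(weights[d] * abs((KERNELS[d] * patch).sum()) for d in KERNELS)
```

Doubling the weight of one direction raises the response exactly by that direction's component, which is how the optimized operator amplifies the gradient along an incomplete circular edge.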
the necessity of the output point mark is obtained by quantizing the basic characteristics of the pixel point distribution in the convolution kernel, and the distribution form of the mark points in the airspace is quantized, so that the modification value of the weight is determined.
The original operator is divided into two cases: the first is the original gradient output, i.e. there are no marked points in the operator; the second is a gradient output containing weights, i.e. there are marked points in the operator. This logic is added to the sobel operator to obtain an optimized operator with weights. The flange part image is processed with the optimized sobel operator to obtain a gradient output value, and the contour information corresponding to the flange part is determined based on the gradient output value, so that a complete edge contour image can be obtained.
The method analyzes and quantifies the prior features of the flange part and uses them to judge whether the distribution mode of the pixel points within the operator's convolution kernel satisfies those features, so that the subsequent judgment of the necessity of marking output points and the modification of the weights can be carried out. Compared with extracting the contour directly with the sobel operator, the method obtains the weight of each direction in the convolution kernel by marking output points and analyzing their distribution form, so that the weights correspond to the parts of the original image where the contour can be identified and the incomplete parts can be supplemented, effectively solving the problems of unclear and incomplete detected contours.
The foregoing is only the embodiments of the present application, and therefore, the patent scope of the application is not limited thereto, and all equivalent structures or equivalent processes using the descriptions of the present application and the accompanying drawings, or direct or indirect application in other related technical fields, are included in the scope of the application.

Claims (6)

1. The flange part contour visual positioning method is characterized by comprising the following steps of:
acquiring an image of the flange part, and converting the image into a gray image;
determining a convolution kernel size based on the grayscale image;
determining a weight modification value based on the distribution characteristics of the pixel points in the convolution kernel;
optimizing an initial sobel operator by using the weight modification value, and processing the flange part by using the optimized sobel operator to obtain contour information corresponding to the flange part;
determining a convolution kernel size based on the grayscale image, comprising:
detecting gray values of pixel points in a window in a sliding process by using two windows sliding in parallel in the gray image, and further determining abrupt pixel points, wherein the abrupt pixel points are pixel points corresponding to preset values when the gray values jump from 0 to the preset values;
connecting the abrupt pixel points, and determining a normal line from a perpendicular line of the connection line;
determining a localized area of the flange part based on the normal;
determining the convolution kernel size based on the local region;
determining a weight modification value based on the distribution characteristics of the pixel points in the convolution kernel, including:
judging the necessity of marking the central pixel point in the convolution kernel based on the entropy in the convolution kernel, the maximum value and the minimum value of the gray values of the pixel points in the convolution kernel;
if the marking necessity is in the first preset range, not marking, and if the marking necessity is in the second preset range, marking;
the marked pixel points are connected in pairs, and the shape of the flange part is determined;
determining the weight modification value based on the shape;
processing the flange part by using the optimized sobel operator to obtain the profile information corresponding to the flange part, including:
processing the flange part by using the optimized sobel operator to obtain a gradient output value;
determining contour information corresponding to the flange part based on the gradient output value;
processing the flange part by using the optimized sobel operator to obtain a gradient output value, including:
obtaining a gradient output value by using the following formula (2):
$G = \sum_{i=1}^{4} w_i \left| G_i \right|$　(2);

wherein $G$ represents the gradient output value of the sobel operator; $G_1, G_2, G_3, G_4$ respectively represent the components of $G$ in the 4 directions $0$, $\pi/4$, $\pi/2$, $3\pi/4$; $w_1, w_2, w_3, w_4$ respectively represent the weight modification values for the 4 directions. In the initial sobel operator the 4 weights are equal; $n$ represents the number of pixel points marked in the connection-line direction, and in the optimized sobel operator the weight in the connection-line direction is increased according to $n$ while the weights in the other directions are reduced.
2. a flange part contour visual positioning method as defined in claim 1, wherein determining said convolution kernel size based on said localized area comprises:
calculating entropy of the local area based on probability of occurrence of different gray values in the local area;
determining a first pixel point when the entropy of the local area changes from 0 to not 0, and determining a second pixel point when the entropy of the local area changes from not 0 to 0;
and determining the distance between the first pixel point and the second pixel point, wherein the distance is the convolution kernel size.
3. The flange part contour visual positioning method according to claim 2, wherein calculating entropy of the local area based on probabilities of occurrence of different gray values in the local area, comprises:
the entropy of the local region is calculated using the following equation (1):
$H(x) = -\sum_{i=1}^{m} p_i \log_2 p_i$　(1);

wherein $H(x)$ represents the entropy of a local area $x$; $p_i$ represents the probability of occurrence of the $i$-th gray value in the local area; $m$ represents the number of different gray values.
4. The method for visually locating the contour of a flange part according to claim 1, wherein the step of connecting the marked pixels in pairs to determine the shape of the flange part comprises:
connecting the marked pixel points in pairs to obtain a connecting line;
determining a normal line of a connecting line, the normal line passing through a midpoint of the connecting line;
if all the normals intersect at one point, it means that the flange part has a circular profile; if all the normals are parallel to each other, it means that the flange part has a rectangular profile.
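The normal-line test of claim 4 can be sketched as follows: each connecting line's perpendicular bisector (the normal through its midpoint) is built from a pair of marked points, then the set of normals is checked for parallelism (rectangle) or concurrency (circle). The tolerance values and function name are illustrative assumptions:

```python
import math

def classify_shape(point_pairs, tol=1e-6):
    """Classify a contour from pairwise connections of marked edge pixels.

    point_pairs -- iterable of ((x1, y1), (x2, y2)) connecting lines;
    at least two non-degenerate pairs are assumed.
    """
    bisectors = []  # (midpoint, direction of the normal) per connecting line
    for (x1, y1), (x2, y2) in point_pairs:
        mid = ((x1 + x2) / 2, (y1 + y2) / 2)
        normal = (-(y2 - y1), x2 - x1)   # perpendicular to the chord
        bisectors.append((mid, normal))

    def cross(u, v):
        return u[0] * v[1] - u[1] * v[0]

    m0, d0 = bisectors[0]
    # All normals parallel -> rectangular profile.
    if all(abs(cross(d0, d)) < tol for _, d in bisectors[1:]):
        return "rectangle"

    # Intersect the first normal with the first non-parallel one.
    m1, d1 = next((m, d) for m, d in bisectors[1:] if abs(cross(d0, d)) > tol)
    t = cross((m1[0] - m0[0], m1[1] - m0[1]), d1) / cross(d0, d1)
    center = (m0[0] + t * d0[0], m0[1] + t * d0[1])

    # Circular profile only if every normal passes through that point.
    for m, d in bisectors:
        v = (center[0] - m[0], center[1] - m[1])
        if abs(cross(v, d)) / math.hypot(*d) > 1e-3:
            return "unknown"
    return "circle"
```

For points sampled on a circle the perpendicular bisectors are concurrent at the center; for points on two parallel sides of a rectangle the bisectors are parallel.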
5. A method of visual positioning of a flange part contour as defined in claim 1, wherein determining said weight modification value based on said shape comprises:
if the flange part has a circular outline, the weight in the direction formed by the connecting lines is increased and the weights in the other directions are decreased, thereby determining the weight modification values.
6. The flange part contour visual positioning method according to claim 1, wherein determining the necessity of marking the center pixel point in the convolution kernel based on the entropy in the convolution kernel and the maximum value and the minimum value of the gray values of the pixel points in the convolution kernel comprises:
calculating the necessity of marking the center pixel point in the convolution kernel using the following formula (3):
F = (g_max - g_min) * ( -Σ p_i * log2(p_i) )    (3);
wherein F represents the necessity of marking the center pixel point in the convolution kernel; i indexes the different gray values of the pixel points in the convolution kernel; p_i represents the probability of occurrence of pixel points with the i-th gray value in the convolution kernel; g_max and g_min respectively represent the maximum value and the minimum value of the gray values of the pixel points in the convolution kernel.
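One plausible reading of formula (3) combines the in-kernel entropy with the gray-value range multiplicatively; this combination and the function name are assumptions for illustration, not confirmed by the patent text:

```python
import math
from collections import Counter

def marking_necessity(kernel_pixels):
    """Necessity of marking the kernel's center pixel, sketched as the
    gray-value entropy inside the kernel scaled by the gray range."""
    counts = Counter(kernel_pixels)
    n = len(kernel_pixels)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return entropy * (max(kernel_pixels) - min(kernel_pixels))
```

Under this reading a uniform kernel yields necessity 0 (no edge content), while a kernel containing a strong gray-value step yields a large value.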
CN202310693060.5A 2023-06-13 2023-06-13 Visual positioning method for flange part contour Active CN116433700B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310693060.5A CN116433700B (en) 2023-06-13 2023-06-13 Visual positioning method for flange part contour

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310693060.5A CN116433700B (en) 2023-06-13 2023-06-13 Visual positioning method for flange part contour

Publications (2)

Publication Number Publication Date
CN116433700A CN116433700A (en) 2023-07-14
CN116433700B true CN116433700B (en) 2023-08-18

Family

ID=87083612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310693060.5A Active CN116433700B (en) 2023-06-13 2023-06-13 Visual positioning method for flange part contour

Country Status (1)

Country Link
CN (1) CN116433700B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116704209B (en) * 2023-08-08 2023-10-17 山东顺发重工有限公司 Quick flange contour extraction method and system
CN118397359B (en) * 2024-05-08 2025-02-07 兰州大学 A convolution calculation method for improving the recognition accuracy of fuzzy biological images

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5233670A (en) * 1990-07-31 1993-08-03 Thomson Trt Defense Method and device for the real-time localization of rectilinear contours in a digitized image, notably for shape recognition in scene analysis processing
JP2000339478A (en) * 1999-05-31 2000-12-08 Nec Corp Device and method for processing picture
RU2360289C1 (en) * 2008-08-11 2009-06-27 Евгений Александрович Самойлин Method of noise-immune gradient detection of contours of objects on digital images
CN106709909A (en) * 2016-12-13 2017-05-24 重庆理工大学 Flexible robot vision recognition and positioning system based on depth learning
CN109472271A (en) * 2018-11-01 2019-03-15 凌云光技术集团有限责任公司 Printed circuit board image contour extraction method and device
CN110687120A (en) * 2019-09-18 2020-01-14 浙江工商大学 Flange appearance quality inspection system
WO2020103417A1 (en) * 2018-11-20 2020-05-28 平安科技(深圳)有限公司 Bmi evaluation method and device, and computer readable storage medium
CN111696107A (en) * 2020-08-05 2020-09-22 南京知谱光电科技有限公司 Molten pool contour image extraction method for realizing closed connected domain
CN111985329A (en) * 2020-07-16 2020-11-24 浙江工业大学 Remote sensing image information extraction method based on FCN-8s and improved Canny edge detection
WO2020253062A1 (en) * 2019-06-20 2020-12-24 平安科技(深圳)有限公司 Method and apparatus for detecting image border
WO2021000524A1 (en) * 2019-07-03 2021-01-07 研祥智能科技股份有限公司 Hole protection cap detection method and apparatus, computer device and storage medium
CN113450292A (en) * 2021-06-17 2021-09-28 重庆理工大学 High-precision visual positioning method for PCBA parts
CN115082410A (en) * 2022-06-29 2022-09-20 西安工程大学 Detection method of circlip defect based on image processing
CN115096206A (en) * 2022-05-18 2022-09-23 西北工业大学 Part size high-precision measurement method based on machine vision

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7440139B2 (en) * 2005-01-13 2008-10-21 Xerox Corporation Systems and methods for controlling a tone reproduction curve using error diffusion
GB0608069D0 (en) * 2006-04-24 2006-05-31 Pandora Int Ltd Image manipulation method and apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Jigang; Bin Hongzan. Sub-pixel edge detection in machine vision images of thin-sheet parts. China Mechanical Engineering. 2009, (03), full text. *

Also Published As

Publication number Publication date
CN116433700A (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN116433700B (en) Visual positioning method for flange part contour
CN114529549B (en) Cloth defect labeling method and system based on machine vision
CN113808138B (en) Artificial intelligence-based wire and cable surface defect detection method
US7903880B2 (en) Image processing apparatus and method for detecting a feature point in an image
WO2017121018A1 (en) Method and apparatus for processing two-dimensional code image, and terminal and storage medium
CN101877127B (en) Image reference-free quality evaluation method and system based on gradient profile
US11776094B1 (en) Artificial intelligence based image quality assessment system
CN115601368B (en) Sheet metal part defect detection method for building material equipment
CN115797314B (en) Method, system, equipment and storage medium for detecting surface defects of parts
CN114782329A (en) Bearing defect damage degree evaluation method and system based on image processing
CN116883408B (en) Integrating instrument shell defect detection method based on artificial intelligence
CN113537037A (en) Pavement disease identification method, system, electronic device and storage medium
CN115731166A (en) High-voltage cable connector polishing defect detection method based on deep learning
CN116310889A (en) Unmanned aerial vehicle environment perception data processing method, control terminal and storage medium
CN111612836B (en) Identification method and system for hollow circular pointer type instrument
US20230154158A1 (en) Method and system for enhancing online reflected light ferrograph image
CN117392066A (en) Defect detection method, device, equipment and storage medium
CN109544513A (en) A kind of steel pipe end surface defect extraction knowledge method for distinguishing
US20220414827A1 (en) Training apparatus, training method, and medium
CN114998311A (en) Part precision detection method based on homomorphic filtering
CN117036205B (en) Injection molding production quality detection method based on image enhancement
CN117541832B (en) Abnormality detection method, abnormality detection system, electronic device, and storage medium
Zhu et al. Optimization of image processing in video-based traffic monitoring
CN117237350A (en) Real-time detection method for quality of steel castings
Zhu et al. Quantitative assessment mechanism transcending visual perceptual evaluation for image dehazing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A visual positioning method for the contour of flange parts

Granted publication date: 20230818

Pledgee: Weihai commercial bank Limited by Share Ltd. Ji'nan branch

Pledgor: Shandong jinrunyuan flange Machinery Co.,Ltd.

Registration number: Y2024980003836