CN118015004B - Laser cutting scanning system and method - Google Patents
Laser cutting scanning system and method
- Publication number: CN118015004B
- Application number: CN202410426536.3A
- Authority: CN (China)
- Prior art keywords: characteristic, points, gray level, feature, image
- Legal status: Active
Classifications
- G06T7/001—Industrial image inspection using an image reference approach
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/761—Proximity, similarity or dissimilarity measures
- G06T2200/32—Indexing scheme for image data processing or generation, in general involving image mosaicing
- G06T2207/30108—Industrial image inspection
Abstract
The invention relates to the field of image processing, and in particular to a laser cutting scanning system and method. The method comprises: collecting an image of the plate to be processed and a template image, and preprocessing them to obtain a first grayscale image and a second grayscale image; extracting feature points from both grayscale images and matching them, taking each feature point in the first grayscale image together with its matched feature point in the second grayscale image as a feature point combination; obtaining the relative position similarity of the two feature points in each combination from their positions and neighborhood densities in their respective images; splicing the first and second grayscale images; determining the matching position rationality of each combination from the angle between the line connecting its two feature points in the spliced image and the horizontal direction, together with the relative position similarity of the two points; using the matching position rationality to determine and filter out mismatched feature point combinations; and determining the plate position from the remaining combinations and performing laser cutting, thereby improving cutting precision.
Description
Technical Field
The invention relates to the field of image processing, in particular to a laser cutting scanning system and a laser cutting scanning method.
Background
Laser cutting plays an important role in modern manufacturing. It stands out for its high precision, non-contact processing, wide applicability, and high flexibility, and it offers high efficiency, no pollution, low material loss, and a high degree of automation, making it an indispensable processing technology in manufacturing. In recent years, to further improve the accuracy of the laser cutting process, visual recognition systems have been introduced into laser cutting operations, achieving accurate positioning by combining image recognition techniques from computer vision. By monitoring and identifying the position and shape of the workpiece in real time, the vision system effectively guides the laser cutting system and ensures an accurate cutting process. This integration improves cutting precision and consistency and brings higher efficiency and controllability to manufacturing production flows, making laser cutting more automated.
At present, when a laser cutting scanning system performs image recognition on a plate to be processed, the SIFT algorithm and the KNN algorithm are generally used to detect and match feature points with similar positions and textures in the template and the image under test. However, a large number of feature points are extracted from a plate whose surface carries repeated textures, and feature points at different positions can then yield identical SIFT feature vectors. When the KNN algorithm matches feature points by these feature vectors, feature points at dissimilar positions may be mismatched, making the plate position recognition inaccurate and reducing the laser cutting precision.
Disclosure of Invention
In order to solve one or more of the above technical problems, the present invention provides a laser cutting scanning system and method that improve the accuracy of cutting a plate. The technical scheme is as follows: a laser cutting scanning method comprising:
collecting an image of the plate to be processed and a template image, and preprocessing them to obtain a first grayscale image and a second grayscale image, respectively;
extracting feature points in the first grayscale image and the second grayscale image and performing feature point matching, taking each feature point in the first grayscale image together with its matched feature point in the second grayscale image as a feature point combination;
obtaining the relative position similarity of the two feature points in each feature point combination according to their positions and neighborhood densities in their respective images;
wherein obtaining the relative position similarity of the two feature points in each feature point combination comprises:
determining the relative distance similarity of the two feature points in each feature point combination based on their positions in their respective images;
taking the neighborhood density of the feature point on the first grayscale image in each feature point combination as a first density, and the neighborhood density of the feature point on the second grayscale image as a second density;
taking the product of the ratio of the first density to the second density and the relative distance similarity of the two feature points as the relative position similarity of the two feature points;
wherein the relative distance similarity of the two feature points in each feature point combination is obtained by the following steps:
taking the mean Euclidean distance between the feature point on the first grayscale image and the other feature points of the first grayscale image as a first distance mean;
taking the mean Euclidean distance between the feature point on the second grayscale image and the other feature points of the second grayscale image as a second distance mean;
taking the ratio of the first distance mean to the second distance mean as the relative distance similarity of the two feature points;
splicing the first grayscale image and the second grayscale image to obtain a spliced image;
acquiring, for each feature point combination, the line connecting its two feature points in the spliced image and the angle between that line and the horizontal direction;
determining the matching position rationality of each feature point combination based on the angle between the line connecting its two feature points in the spliced image and the horizontal direction, together with the relative position similarity of the two feature points;
wherein determining the matching position rationality of each feature point combination comprises the following steps:
selecting a feature point combination;
recording the angle between the line connecting the two feature points in the feature point combination and the horizontal direction as a first angle, and the length of that connecting line as a first distance;
acquiring the mean of the first angles and the mean of the first distances over all feature point combinations;
recording the ratio of the first angle to the mean first angle as a first ratio, and the ratio of the first distance to the mean first distance as a second ratio;
calculating the product of the relative position similarity of the two feature points in the feature point combination, the first ratio, and the second ratio, and taking the resulting value as the matching position rationality of the feature point combination;
determining the mismatched feature point combinations using the matching position rationality of all feature point combinations, filtering out the mismatched feature point combinations, determining the plate position from the remaining feature point combinations, and performing laser cutting.
Further, the neighborhood density of a feature point is obtained by the following steps:
setting a neighborhood range for each feature point;
taking the ratio of the number of feature points contained in the neighborhood range of each feature point to the number of all feature points contained in the grayscale image where that feature point is located as the neighborhood density of the feature point.
Further, setting the neighborhood range of each feature point comprises: taking a $k \times k$ pixel area centered on each feature point as the neighborhood range of that feature point, where $k$ is a preset value.
Further, determining the mismatched feature point combinations using the matching position rationality of all feature point combinations comprises:
acquiring a threshold over the matching position rationality of all feature point combinations using Otsu's method;
taking the feature point combinations whose matching position rationality is smaller than the threshold as the mismatched feature point combinations.
Further, the positions of the two feature points in each feature point combination in their respective images are obtained by the following steps:
setting the upper-left corner of each of the first grayscale image and the second grayscale image as the origin, taking the vertically downward direction as the y-axis and the horizontally rightward direction as the x-axis, and obtaining the coordinates of the positions of the two feature points in their respective images.
The invention also provides a laser cutting scanning system comprising a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, implements the steps of the laser cutting scanning method described above.
The invention has the following effects:
By performing feature point matching on the grayscale images of the plate image to be processed and the template image, the invention obtains the feature point combinations matched between the two images. The relative position similarity is obtained from the positions and neighborhood densities of the two feature points in each combination; the matching position rationality of each combination is then obtained from this similarity together with the angle between the line connecting the two feature points and the horizontal direction and the length of that line; and the mismatched feature points are filtered out based on the matching position rationality. Accurate matching between the plate template and a plate to be cut bearing repeated textures is thereby achieved, the laser head is controlled to cut the plate according to the matching result, and the precision of laser cutting of the plate is improved.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. In the drawings, embodiments of the invention are illustrated by way of example and not by way of limitation, and like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is a schematic illustration of the process flow of the present invention.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present invention with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort fall within the scope of the invention.
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, a laser cutting scanning method includes steps S1 to S6, specifically as follows:
S1: Collect an image of the plate to be processed and a template image, and preprocess them to obtain a first grayscale image and a second grayscale image, respectively.
In this embodiment, a CCD industrial camera is used to capture the image of the plate to be processed. The captured plate image is converted to grayscale and taken as the first grayscale image; the template image of the plate is likewise converted to grayscale and taken as the second grayscale image.
S2: Extract the feature points in the first grayscale image and the second grayscale image, perform feature point matching, and take each feature point in the first grayscale image together with its matched feature point in the second grayscale image as a feature point combination.
In this embodiment, the SIFT algorithm is adopted for extraction. SIFT is a classical feature extraction algorithm that detects key feature points in an image and provides a feature vector at each of them; it is invariant to rotation and scale, so it adapts well to visual differences caused by acquisition conditions such as image rotation and scaling. It is therefore used to extract the feature points of the first and second grayscale images, yielding all feature points in each image, each with its corresponding feature vector. The KNN algorithm is a simple and effective matching algorithm that finds the most similar feature points by computing the distances between feature vectors, and is easy to understand and implement.
The specific steps are as follows:
The feature points of the first grayscale image and their feature vectors, and the feature points of the second grayscale image and their feature vectors, are obtained by the SIFT algorithm. For each feature point in the first grayscale image, the KNN algorithm computes the Euclidean distances between its feature vector and the feature vectors of the feature points in the second grayscale image, and the best-matching feature point in the second grayscale image is taken; the two feature points are combined as one feature point combination. In this way, all feature point combinations formed by the feature points in the first grayscale image and their matches in the second grayscale image are obtained.
Both grayscale images take the upper-left corner as the origin, the horizontally rightward direction as the horizontal axis, and the vertically downward direction as the vertical axis, giving the position coordinates of each pair of feature points. For the $i$-th feature point combination, the coordinates of the feature point on the first grayscale image are denoted $\left(x_i^{(1)}, y_i^{(1)}\right)$, and the coordinates of the feature point on the second grayscale image are denoted $\left(x_i^{(2)}, y_i^{(2)}\right)$.
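As an illustration of step S2, the sketch below uses OpenCV's SIFT detector with a brute-force KNN matcher. The function name, the use of Lowe's ratio test, and the 0.75 threshold are assumptions added for robustness rather than details given in this patent.

```python
# Minimal sketch of step S2: SIFT feature extraction + KNN matching (OpenCV).
import cv2
import numpy as np

def match_feature_points(gray1: np.ndarray, gray2: np.ndarray):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(gray1, None)  # feature points + 128-D vectors
    kp2, des2 = sift.detectAndCompute(gray2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)            # Euclidean distance between vectors
    knn = matcher.knnMatch(des1, des2, k=2)         # two nearest neighbours per point

    combos = []                                     # feature point combinations
    for pair in knn:
        if len(pair) < 2:
            continue
        m, n = pair
        if m.distance < 0.75 * n.distance:          # ratio test: keep clear best matches
            p1 = kp1[m.queryIdx].pt                 # (x, y) on the first grayscale image
            p2 = kp2[m.trainIdx].pt                 # (x, y) on the second grayscale image
            combos.append((p1, p2))
    return kp1, kp2, combos
```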
S3: and obtaining the relative position similarity of the two feature points in each feature point combination according to the positions and the neighborhood density of the two feature points in each feature point combination in the respective images.
The relative position similarity of two feature points in each feature point combination is obtained by the following steps:
S31: determining the relative distance similarity of the two feature points in each feature point combination based on the positions of the two feature points in each feature point combination in the respective images;
The relative distance similarity of the two feature points is obtained as follows.

The mean Euclidean distance between the feature point on the first grayscale image and the other feature points of the first grayscale image is taken as the first distance mean. Taking the $i$-th feature point combination as an example, the first distance mean is:

$$\bar{d}_i^{(1)} = \frac{1}{N_1 - 1}\sum_{j \neq i}\sqrt{\left(x_i^{(1)} - x_j^{(1)}\right)^2 + \left(y_i^{(1)} - y_j^{(1)}\right)^2}$$

where $\bar{d}_i^{(1)}$ is the first distance mean of the $i$-th feature point combination, $N_1$ is the total number of feature points in the first grayscale image, $\left(x_i^{(1)}, y_i^{(1)}\right)$ is the position of the feature point of the $i$-th combination on the first grayscale image, $j$ runs over the sequence numbers of the other feature points of the first grayscale image, and $\left(x_j^{(1)}, y_j^{(1)}\right)$ is the position of the $j$-th feature point. The sum in the numerator is the total Euclidean distance between the feature point of the $i$-th combination and the other feature points of the first grayscale image.

The mean Euclidean distance between the feature point on the second grayscale image and the other feature points of the second grayscale image is likewise taken as the second distance mean:

$$\bar{d}_i^{(2)} = \frac{1}{N_2 - 1}\sum_{j \neq i}\sqrt{\left(x_i^{(2)} - x_j^{(2)}\right)^2 + \left(y_i^{(2)} - y_j^{(2)}\right)^2}$$

where $\bar{d}_i^{(2)}$ is the second distance mean of the $i$-th feature point combination, $N_2$ is the total number of feature points in the second grayscale image, and the remaining symbols are defined as above for the second grayscale image.

The ratio of the first distance mean to the second distance mean is taken as the relative distance similarity of the two feature points. For the $i$-th feature point combination:

$$s_i = \frac{\bar{d}_i^{(1)}}{\bar{d}_i^{(2)}}$$

where $s_i$ denotes the relative distance similarity of the two feature points in the $i$-th feature point combination.

It should be noted that the texture and structure of the plate to be processed are similar to those of the plate in the template image; that is, the first and second grayscale images have similar textures and structures and hence similar pixel distributions, so the relative position of each feature point combination is spatially consistent across the two images. The relative distance similarity of the two feature points in a combination can therefore be measured by the distances between each point and the other feature points of its own image, which is why the mean distance to the other feature points is computed on each image for every combination. The closer this ratio of means is to 1, the more similar the relative distances and the more accurate the matching result of the two feature points; conversely, the more likely the pair is a mismatch.
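A sketch of step S31 under the formulas above follows; `pts1` and `pts2` are assumed to be (N, 2) NumPy arrays holding all feature point coordinates of the first and second grayscale images (the matched point itself included, its zero self-distance removed by the $N-1$ denominator), and the function names are illustrative.

```python
# Sketch of step S31: first/second distance means and their ratio.
import numpy as np

def mean_distance_to_others(point, pts: np.ndarray) -> float:
    d = np.linalg.norm(pts - np.asarray(point), axis=1)  # distances to every feature point
    return d.sum() / (len(pts) - 1)                      # self-distance is 0, so divide by N-1

def relative_distance_similarity(p1, p2, pts1: np.ndarray, pts2: np.ndarray) -> float:
    d1 = mean_distance_to_others(p1, pts1)               # first distance mean
    d2 = mean_distance_to_others(p2, pts2)               # second distance mean
    return d1 / d2                                       # relative distance similarity s_i
```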
S32: obtaining the neighborhood density of two feature points in each image in each feature point combination;
The neighborhood density of a feature point is obtained by the following steps:

Set the neighborhood range of each feature point. Specifically, in this embodiment a window of $k \times k$ pixels centered on the feature point is taken as its neighborhood range, and the number of feature points inside this range is counted; $k$ is a preset value, set empirically in this embodiment, and can be chosen as needed.

Taking the $i$-th feature point combination as an example, the number of feature points contained in the neighborhood range of the feature point on the first grayscale image is denoted $n_i^{(1)}$, and the number contained in the neighborhood range of the feature point on the second grayscale image is denoted $n_i^{(2)}$.

The ratio of the number of feature points contained in the neighborhood range of a feature point to the number of all feature points contained in the grayscale image where it is located is taken as the neighborhood density of that feature point. For the $i$-th feature point combination, the neighborhood density of the feature point on the first grayscale image is $\rho_i^{(1)} = n_i^{(1)} / N_1$, and that of the feature point on the second grayscale image is $\rho_i^{(2)} = n_i^{(2)} / N_2$.
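A sketch of step S32 follows; an axis-aligned $k \times k$ window centered on the feature point is assumed (the point itself is counted), matching the neighborhood definition above.

```python
# Sketch of step S32: neighborhood density of a feature point.
import numpy as np

def neighborhood_density(point, pts: np.ndarray, k: int) -> float:
    half = k / 2.0
    inside = np.all(np.abs(pts - np.asarray(point)) <= half, axis=1)
    return inside.sum() / len(pts)   # points in the k x k window / all points in the image
```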
S33: and determining the relative position similarity of the two feature points based on the relative distance similarity and the neighborhood density of the two feature points.
For each feature point combination, the ratio of the neighborhood densities of its two feature points (the neighborhood density of the feature point in the first grayscale image divided by that of the feature point in the second grayscale image) is multiplied by the relative distance similarity of the two feature points, and the resulting value is taken as the relative position similarity of the feature point combination. For the $i$-th feature point combination:

$$S_i = \frac{\rho_i^{(1)}}{\rho_i^{(2)}} \cdot s_i$$

where $S_i$ is the relative position similarity of the $i$-th feature point combination.
It should be noted that the relative distance similarity considers only the global position information of the two feature points in their respective images, whereas local spatial features can capture the detail information around each feature point, making the analysis of relative position finer and more accurate. Combining local spatial features therefore reflects the relative position similarity of the feature points better, which is why this step introduces the neighborhood densities of the two feature points in their respective images, using neighborhood density to characterize the local spatial features.
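Combining the two sketches above, step S33 reduces to one line: the density ratio scales the relative distance similarity.

```python
# Sketch of step S33: relative position similarity S_i of a feature point combination.
def relative_position_similarity(p1, p2, pts1, pts2, k: int) -> float:
    rho1 = neighborhood_density(p1, pts1, k)   # first density
    rho2 = neighborhood_density(p2, pts2, k)   # second density
    return (rho1 / rho2) * relative_distance_similarity(p1, p2, pts1, pts2)
```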
S4: and splicing the first gray level image and the second gray level image to obtain a spliced image.
Specifically, the pixel at the upper-right corner of the first grayscale image is taken as the origin of the second grayscale image, and the two images are spliced, i.e., the left edge of the second grayscale image is joined to the right edge of the first grayscale image to obtain the spliced image; the origin of the spliced image is its upper-left corner.
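A sketch of step S4; zero-padding of the shorter image is an assumption for the case where the two grayscale images differ in height.

```python
# Sketch of step S4: splice the second grayscale image to the right of the first.
import numpy as np

def splice(gray1: np.ndarray, gray2: np.ndarray) -> np.ndarray:
    h = max(gray1.shape[0], gray2.shape[0])
    canvas = np.zeros((h, gray1.shape[1] + gray2.shape[1]), dtype=gray1.dtype)
    canvas[:gray1.shape[0], :gray1.shape[1]] = gray1   # first image on the left
    canvas[:gray2.shape[0], gray1.shape[1]:] = gray2   # second image on the right
    return canvas
```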
S5: and determining the matching position rationality of each feature point combination based on the angle between the connecting line of two feature points in each feature point combination in the splice graph and the horizontal direction and the relative position similarity of the two feature points.
The matching position rationality of each feature point combination is acquired as follows:

Select a feature point combination.

The angle between the line connecting the two feature points in the combination and the horizontal direction is recorded as the first angle, and the length of that connecting line as the first distance.

In the spliced image, the feature point from the second grayscale image is shifted rightward by the width $W_1$ of the first grayscale image, so for the $i$-th feature point combination the first distance is

$$l_i = \sqrt{\left(x_i^{(2)} + W_1 - x_i^{(1)}\right)^2 + \left(y_i^{(2)} - y_i^{(1)}\right)^2}$$

and the first angle is

$$\theta_i = \arctan\frac{\left|y_i^{(2)} - y_i^{(1)}\right|}{x_i^{(2)} + W_1 - x_i^{(1)}}$$

where $\theta_i$ is the first angle of the $i$-th feature point combination.

The mean first angle $\bar{\theta}$ and the mean first distance $\bar{l}$ over all feature point combinations are acquired.

The ratio of the first angle to the mean first angle is recorded as the first ratio, and the ratio of the first distance to the mean first distance as the second ratio.

The product of the relative position similarity of the two feature points in the combination, the first ratio, and the second ratio is calculated, and the resulting value is taken as the matching position rationality of the combination. For the $i$-th feature point combination:

$$R_i = S_i \cdot \frac{\theta_i}{\bar{\theta}} \cdot \frac{l_i}{\bar{l}}$$

where $R_i$ is the matching position rationality of the $i$-th feature point combination.
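A vectorized sketch of step S5 under the formulas above; `combos` is assumed to be the list of matched point pairs from step S2, `S` the per-combination relative position similarities, and `W1` the width of the first grayscale image.

```python
# Sketch of step S5: matching position rationality R_i for every combination.
import numpy as np

def matching_position_rationality(combos, S: np.ndarray, W1: int) -> np.ndarray:
    p1 = np.array([c[0] for c in combos], dtype=float)  # points on the first image
    p2 = np.array([c[1] for c in combos], dtype=float)  # points on the second image
    dx = p2[:, 0] + W1 - p1[:, 0]                       # horizontal offset in the splice
    dy = p2[:, 1] - p1[:, 1]
    length = np.hypot(dx, dy)                           # first distance l_i
    angle = np.arctan2(np.abs(dy), dx)                  # first angle theta_i
    return S * (angle / angle.mean()) * (length / length.mean())
```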
It should be noted that when the plate image to be processed and the template image are acquired, a difference in viewing angle between the two images is inevitable, so a certain offset-angle error exists between them. Under the influence of this error, some mismatched feature point combinations have relative position similarity values close to those of correct matches, but their first distance and first angle are higher than those of normally matched feature points. Using the angle between the connecting line of the two feature points and the horizontal direction together with the length of that line therefore further screens out combinations whose relative position similarity is plausible but which do not behave like expected correct matches, making the matching position rationality of each feature point combination more comprehensive and accurate and improving matching precision and reliability.

S6: Determine the mismatched feature point combinations using the matching position rationality of all feature point combinations, filter them out, determine the plate position from the remaining feature point combinations, and perform laser cutting.

In this embodiment, the matching position rationality of all feature point combinations is obtained first, and a threshold $T$ over these values is then obtained with Otsu's method. A feature point combination with $R_i \geq T$ is considered a correct match and is not filtered; a combination with $R_i < T$ is considered a mismatch and is filtered out, yielding the set of matched combinations remaining after the mismatches are removed. Eliminating mismatched feature points greatly improves the matching precision. Once accurate matching between the plate image to be processed and the template image is achieved, the position and posture of the plate during processing can be guaranteed to strictly conform to the template standard: the coordinates of the feature points in the template image can be mapped, by projective transformation, into the coordinate system of the actual plate on the working platform, so that the laser cutting head can be precisely controlled, according to the specific processing requirements, to cut along the calibrated plate position and posture, ensuring the accuracy and consistency of the processing.
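A sketch of step S6; scikit-image's `threshold_otsu` is used here as a stand-in for "Otsu's method over the rationality values", and its default histogram binning is an assumption, since Otsu's method is defined over discrete levels rather than continuous scores.

```python
# Sketch of step S6: Otsu threshold on rationality, then filter mismatches.
import numpy as np
from skimage.filters import threshold_otsu

def filter_mismatches(combos, rationality: np.ndarray):
    T = threshold_otsu(rationality)          # threshold over all rationality values
    keep = rationality >= T                  # R_i >= T: correct match; R_i < T: mismatch
    return [c for c, ok in zip(combos, keep) if ok]
```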
The invention also provides a laser cutting scanning system comprising a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, implements the steps of the laser cutting scanning method of S1 to S6, so as to control the laser head to accurately cut the plate at the determined position and posture.
In the description of the present specification, the meaning of "a plurality", "a number" or "a plurality" is at least two, for example, two, three or more, etc., unless explicitly defined otherwise.
While various embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Many modifications, changes, and substitutions will now occur to those skilled in the art without departing from the spirit and scope of the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention.
Claims (6)
1. A laser cutting scanning method, comprising:
collecting an image of the plate to be processed and a template image, and preprocessing them to obtain a first grayscale image and a second grayscale image, respectively;
extracting feature points in the first grayscale image and the second grayscale image and performing feature point matching, taking each feature point in the first grayscale image together with its matched feature point in the second grayscale image as a feature point combination;
obtaining the relative position similarity of the two feature points in each feature point combination according to their positions and neighborhood densities in their respective images;
wherein obtaining the relative position similarity of the two feature points in each feature point combination comprises:
determining the relative distance similarity of the two feature points in each feature point combination based on their positions in their respective images;
taking the neighborhood density of the feature point on the first grayscale image in each feature point combination as a first density, and the neighborhood density of the feature point on the second grayscale image as a second density;
taking the product of the ratio of the first density to the second density and the relative distance similarity of the two feature points as the relative position similarity of the two feature points;
wherein the relative distance similarity of the two feature points in each feature point combination is obtained by the following steps:
taking the mean Euclidean distance between the feature point on the first grayscale image and the other feature points of the first grayscale image as a first distance mean;
taking the mean Euclidean distance between the feature point on the second grayscale image and the other feature points of the second grayscale image as a second distance mean;
taking the ratio of the first distance mean to the second distance mean as the relative distance similarity of the two feature points;
splicing the first grayscale image and the second grayscale image to obtain a spliced image;
determining the matching position rationality of each feature point combination based on the angle between the line connecting its two feature points in the spliced image and the horizontal direction, together with the relative position similarity of the two feature points;
wherein determining the matching position rationality of each feature point combination comprises the following steps:
selecting a feature point combination;
recording the angle between the line connecting the two feature points in the feature point combination and the horizontal direction as a first angle, and the length of that connecting line as a first distance;
acquiring the mean of the first angles and the mean of the first distances over all feature point combinations;
recording the ratio of the first angle to the mean first angle as a first ratio, and the ratio of the first distance to the mean first distance as a second ratio;
calculating the product of the relative position similarity of the two feature points in the feature point combination, the first ratio, and the second ratio, and taking the resulting value as the matching position rationality of the feature point combination; and
determining the mismatched feature point combinations using the matching position rationality of all feature point combinations, filtering out the mismatched feature point combinations, determining the plate position from the remaining feature point combinations, and performing laser cutting.
2. The laser cutting scanning method according to claim 1, wherein the neighborhood density of a feature point is obtained by:
setting a neighborhood range for each feature point; and
taking the ratio of the number of feature points contained in the neighborhood range of each feature point to the number of all feature points contained in the grayscale image where that feature point is located as the neighborhood density of the feature point.
3. The method of claim 2, wherein setting the neighborhood range of each feature point comprises: taking a $k \times k$ pixel area centered on each feature point as the neighborhood range of that feature point, where $k$ is a preset value.
4. The laser cutting scanning method according to claim 1, wherein determining the mismatched feature point combinations using the matching position rationality of all feature point combinations comprises:
acquiring a threshold over the matching position rationality of all feature point combinations using Otsu's method; and
taking the feature point combinations whose matching position rationality is smaller than the threshold as the mismatched feature point combinations.
5. The laser cutting scanning method according to claim 1, wherein the positions of the two feature points in each feature point combination in their respective images are obtained by:
setting the upper-left corner of each of the first grayscale image and the second grayscale image as the origin, taking the vertically downward direction as the y-axis and the horizontally rightward direction as the x-axis, and obtaining the coordinates of the positions of the two feature points in their respective images.
6. A laser cutting scanning system, characterized in that it comprises a memory and a processor, the memory having stored thereon a computer program which, when executed by the processor, implements the steps of the laser cutting scanning method according to any one of claims 1-5.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410426536.3A (CN118015004B) | 2024-04-10 | 2024-04-10 | Laser cutting scanning system and method |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410426536.3A (CN118015004B) | 2024-04-10 | 2024-04-10 | Laser cutting scanning system and method |
Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN118015004A | 2024-05-10 |
| CN118015004B | 2024-07-05 |
Family

ID=90958142

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202410426536.3A (CN118015004B, Active) | Laser cutting scanning system and method | 2024-04-10 | 2024-04-10 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN118015004B (en) |
Families Citing this family (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118628474B | 2024-07-05 | 2024-10-11 | 苏州诺达佳自动化技术有限公司 | Industrial automation three-dimensional detection system and method |
| CN119048344B | 2024-10-31 | 2025-03-04 | 山东省地质测绘院 | Remote sensing image stitching method, device, computer equipment and medium |
Citations (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108961162A | 2018-03-12 | 2018-12-07 | 北京林业大学 | Unmanned aerial vehicle forest-zone aerial image joining method and system |
| CN115588033A | 2022-09-06 | 2023-01-10 | 西安电子科技大学 | Synthetic aperture radar and optical image registration system and method based on structure extraction |
Family Cites Families (7)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN104992408B | 2015-06-30 | 2018-06-05 | 百度在线网络技术(北京)有限公司 | Panorama image generation method and device for user terminal |
| US10733739B2 | 2017-09-29 | 2020-08-04 | Nanjing Avatarmind Robot Technology Co., Ltd. | Method and system for displaying target image based on robot |
| CN109523501A | 2018-04-28 | 2019-03-26 | 江苏理工学院 | Battery open defect detection method based on dimensionality reduction and point cloud data matching |
| CN115512138A | 2022-08-26 | 2022-12-23 | 广东工业大学 | Line feature matching method based on point-line-surface fusion |
| CN115861352A | 2022-12-18 | 2023-03-28 | 南京理工大学 | Monocular vision, IMU and laser radar data fusion and edge extraction method |
| CN117036737A | 2023-08-17 | 2023-11-10 | 渤海大学 | Feature extraction and matching method based on information entropy, GMS and LC significant detection |
| CN117333518A | 2023-09-20 | 2024-01-02 | 宜春宜联打印设备有限公司 | Laser scanning image matching method, system and computer equipment |
Also Published As

| Publication number | Publication date |
|---|---|
| CN118015004A | 2024-05-10 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | PB01 | Publication | |
| | SE01 | Entry into force of request for substantive examination | |
| | GR01 | Patent grant | |