CN111768369B - Steel plate corner point and edge point positioning method, workpiece grabbing method and production line - Google Patents
- Publication number
- CN111768369B (Application CN202010486614.0A)
- Authority
- CN
- China
- Prior art keywords
- corner
- points
- picture
- training
- scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
- G06N 3/045 — Neural networks; combinations of networks
- G06T 7/13 — Image analysis; segmentation; edge detection
- G06T 7/70 — Image analysis; determining position or orientation of objects or cameras
- Y02P 90/30 — Climate change mitigation in production; computing systems specially adapted for manufacturing
Abstract
The invention discloses a steel plate corner point and edge point positioning method, a workpiece grabbing method and a production line. The positioning method comprises the following steps: designing a learning network for detecting the corner points and edge points of a specific scene; feeding preset artificially synthesized scene picture data into the learning network for training, so that back-propagation optimizes the parameters and yields an accurate pre-trained model; and acquiring corner point pictures and edge point pictures of the real scene and feeding the real-scene picture data into the learning network for further training, thereby improving the accuracy and robustness of the model and producing the final detection model. The technical scheme of the invention improves the positioning precision of the corner points and edge points of the steel plate.
Description
Technical Field
The invention relates to the technical field of corner point and edge point positioning, and in particular to a steel plate corner point and edge point positioning method and a workpiece grabbing method.
Background
In the traditional image processing approach, a picture taken by a camera is processed: the positions of the corner points and edge points are located in the image and then converted, by a coordinate-system transformation, into the position of the steel plate in the robot coordinate system. However, variations in illumination intensity strongly affect the accuracy of corner point and edge point positioning, so the traditional method suffers from inaccurate detection, with a success rate of only about 60-80%.
Disclosure of Invention
The invention mainly aims to provide a steel plate corner point and edge point positioning method that improves the accuracy of detecting the corner points and edge points of a steel plate.
In order to achieve the above purpose, the steel plate corner point and edge point positioning method provided by the invention comprises the following steps:
designing a learning network for detecting the corresponding corner points and edge points of a specific scene;
feeding preset artificially synthesized scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model;
and acquiring corner point pictures and edge point pictures of the real scene, feeding the real-scene picture data into the learning network for training, and refining the accurate pre-trained model into the final detection model.
Optionally, the step of designing a learning network for detecting the corresponding corner points and edge points of a specific scene comprises:
the feature extraction module extracts features both bottom-up and top-down through convolution modules, combines the two sets of features, and applies a further convolution to eliminate the aliasing effect of the merge;
the region proposal module is trained to coarsely propose potential target regions, so as to preliminarily locate a rough target area;
the final prediction module, a trained network combining convolution and fully connected layers, further refines the precise positions of the corner points and edge points;
and the feature extraction module, the region proposal module and the final prediction module are trained end to end as a whole, the three modules being chained in series at prediction time.
Optionally, before the step of feeding the preset artificially synthesized scene picture data into the learning network for training so that back-propagation optimizes the parameters, the method further comprises:
pre-training the learning network on a publicly available data set to generate a pre-trained model.
Optionally, the step of artificially synthesizing the scene picture data comprises:
acquiring a scene picture taken by a camera;
artificially synthesizing a plurality of composite pictures from the scene picture;
and labeling the corner points or edge points of each composite picture.
Optionally, the step of artificially synthesizing a plurality of composite pictures from the scene picture comprises:
generating a first composite picture according to the first illumination intensity and the first illumination angle;
generating a second composite picture according to the second illumination intensity and the second illumination angle;
generating a third composite picture according to the third illumination intensity and the third illumination angle;
the first illumination intensity is larger than the second illumination intensity and smaller than the third illumination intensity;
the first illumination angle is greater than the second illumination angle and less than the third illumination angle.
Optionally, the step of artificially synthesizing a plurality of composite pictures for different working scenes further comprises:
generating a fourth composite picture according to the fourth illumination intensity and the fourth illumination angle;
generating a fifth composite picture according to the fifth illumination intensity and the fifth illumination angle;
the first illumination intensity is larger than the fifth illumination intensity and smaller than the fourth illumination intensity;
the first illumination angle is smaller than the fifth illumination angle and larger than the fourth illumination angle.
Optionally, the step of generating the corner point pictures and edge point pictures of the real scene comprises:
collecting a plurality of real corner point pictures and edge point pictures;
and labeling the corresponding corner points on the corner point pictures and the corresponding edge points on the edge point pictures.
Optionally, the model is preloaded into GPU memory before training, to speed up loading and training.
The invention also provides a workpiece grabbing method comprising a steel plate corner point and edge point positioning method, wherein the positioning method comprises the following steps:
designing a learning network for detecting the corresponding corner points and edge points of a specific scene;
feeding preset artificially synthesized scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model;
and acquiring corner point pictures and edge point pictures of the real scene, feeding the real-scene picture data into the learning network for training, and refining the accurate pre-trained model into the final detection model.
The invention also provides a workpiece production line using a workpiece grabbing method, wherein the workpiece grabbing method comprises a steel plate corner point and edge point positioning method comprising the following steps:
designing a learning network for detecting the corresponding corner points and edge points of a specific scene;
feeding preset artificially synthesized scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model;
and acquiring corner point pictures and edge point pictures of the real scene, feeding the real-scene picture data into the learning network for training, and refining the accurate pre-trained model into the final detection model.
According to the technical scheme of the invention, a learning network for detecting the corresponding corner points and edge points is designed for a specific scene; preset artificially synthesized scene picture data are then fed into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model; corner point pictures and edge point pictures of the real scene are then acquired and fed into the learning network for training, and the accurate pre-trained model is refined into the final detection model. A large number of scene pictures are thus artificially synthesized for training the learning network, so that the model can learn the detection conditions of many working scenes. Synthesizing pictures artificially compensates well for the shortage of high-quality (and hard-to-obtain) real training pictures, and the learning network can extract common features shared by the synthetic and real scenes, which plays a very important role in enhancing the generalization capability of the corner point and edge point detection model. By generating a large amount of synthetic data and training on it, the accuracy of the model can exceed 90%; after further training on real pictures, the accurate pre-trained model is refined into the final detection model, which further greatly improves its detection accuracy.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required by the embodiments or the description of the prior art are briefly described below. The drawings described below are obviously only some embodiments of the present invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of an embodiment of a method for positioning corner points and edge points of a steel plate according to the present invention;
fig. 2 to 4 are detection results produced by the final detection model.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present invention is made clearly and fully with reference to the accompanying drawings; evidently, the embodiments described are only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art from the embodiments of the invention without inventive effort fall within the scope of the invention.
It should be noted that all directional indicators used in the embodiments of the present invention (such as up, down, left, right, front and rear) merely explain the relative positional relationships, movement conditions and the like between the components in a certain specific posture (as shown in the drawings); if the specific posture changes, the directional indicators change accordingly.
Furthermore, the descriptions "first", "second" and the like in this disclosure are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features; a feature defined by "first" or "second" may thus explicitly or implicitly include at least one such feature. In addition, "and/or" throughout this document covers three schemes: taking A and/or B as an example, it includes scheme A, scheme B, and the scheme in which both A and B are satisfied. The technical solutions of the embodiments may also be combined with each other, provided the combination can be realized by those skilled in the art; when technical solutions are contradictory or cannot be realized, their combination should be considered absent and outside the scope of protection claimed by the present invention.
The invention mainly provides a steel plate corner point and edge point positioning method, applied chiefly within a workpiece grabbing method. Improving the accuracy of the model improves the accuracy with which the robot identifies and locates the corner points and edge points, which in turn improves the precision with which the mechanical arm grabs the workpiece.
The steel plate corner point and edge point positioning method is described in detail below.
Referring to fig. 1 to 4, in an embodiment of the present invention, the method for positioning corner points and edge points of a steel plate includes the steps of:
s100, designing a corresponding corner point and a learning network for detecting the corner point aiming at a specific scene;
s200, putting the scene picture data synthesized by the preset manual into a learning network for training so as to reversely propagate the optimized parameters and generate an accurate pre-training model;
s300, obtaining corner point pictures and side point pictures of the real scene, putting picture data of the real scene into a learning network for training, and generating a final detection model by using the accurate pre-training model.
Specifically, in this embodiment, a learning network for detecting the corresponding corner points and edge points can be designed for a specific scene in many ways, and the specific scene may be chosen according to actual requirements; one such design is given below.
The step of designing a learning network for detecting the corresponding corner points and edge points of a specific scene comprises:
the feature extraction module extracts features both bottom-up and top-down through convolution modules, combines the two sets of features, and applies a further convolution to eliminate the aliasing effect of the merge; the region proposal module is trained to coarsely propose potential target regions, so as to preliminarily locate a rough target area; the final prediction module, a trained network combining convolution and fully connected layers, further refines the precise positions of the corner points and edge points; and the feature extraction module, the region proposal module and the final prediction module are trained end to end as a whole, the three modules being chained in series at prediction time.
When the learning network is used, the feature extraction module first extracts features from the scene picture along bottom-up and top-down paths, combines the two sets of extracted features, and processes the combined features once more. The region proposal module is trained to coarsely propose potential target regions, and the final prediction module, a trained network combining convolution and fully connected layers, refines the precise positions of the corner points and edge points. A preliminary learning network model is thus formed, as sketched below.
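As a minimal illustration of this three-module structure, the PyTorch sketch below chains a feature-pyramid-style extractor, a coarse region-proposal head, and a convolution-plus-fully-connected prediction head. The backbone depth, channel counts, anchor count and class names are illustrative assumptions, not taken from the patent.

```python
# Illustrative sketch only: module sizes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureExtractor(nn.Module):
    """Bottom-up convolutions, a top-down upsampling path, feature merge,
    then a 3x3 convolution to eliminate the aliasing of the merge."""
    def __init__(self, ch=64):
        super().__init__()
        self.c1 = nn.Conv2d(3, ch, 3, stride=2, padding=1)    # bottom-up level 1
        self.c2 = nn.Conv2d(ch, ch, 3, stride=2, padding=1)   # bottom-up level 2
        self.lat = nn.Conv2d(ch, ch, 1)                       # lateral connection
        self.smooth = nn.Conv2d(ch, ch, 3, padding=1)         # anti-aliasing conv

    def forward(self, x):
        f1 = F.relu(self.c1(x))
        f2 = F.relu(self.c2(f1))
        top_down = F.interpolate(f2, size=f1.shape[-2:], mode="nearest")
        return self.smooth(self.lat(f1) + top_down)           # combine both paths

class RegionProposal(nn.Module):
    """Coarsely proposes potential corner/edge-point regions."""
    def __init__(self, ch=64, anchors=3):
        super().__init__()
        self.score = nn.Conv2d(ch, anchors, 1)

    def forward(self, feat):
        return torch.sigmoid(self.score(feat))                # objectness map

class PredictionHead(nn.Module):
    """Convolution plus fully connected layers refining the point position."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.fc = nn.Linear(ch, 2)                            # (x, y)

    def forward(self, feat):
        h = F.relu(self.conv(feat)).mean(dim=(-2, -1))        # global pooling
        return self.fc(h)

extractor, rpn, head = FeatureExtractor(), RegionProposal(), PredictionHead()
feat = extractor(torch.rand(1, 3, 256, 256))
proposals, point = rpn(feat), head(feat)                      # modules used in series
```

In training, the three modules would be optimized jointly end to end, matching the description above; at prediction time they run in series on each camera picture.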
After the learning network is established, artificially synthesized pictures are given to the learning network for learning; that is, the learning network is trained on artificially synthesized pictures. There are many ways to synthesize pictures: various working conditions can be composed (including light intensity, light angle, cleanliness of the workpiece surface and so on), or pictures can be synthesized from photographs of existing real working conditions. The synthesis of artificial pictures is exemplified below.
In order to improve the realism of the artificially synthesized scene pictures, the step of artificially synthesizing the scene picture data comprises:
acquiring a scene picture taken by a camera; artificially synthesizing a plurality of composite pictures from the scene picture; and labeling the corner points or edge points of each composite picture. That is, the camera collects a number of real-scene pictures, and artificially synthesized scene pictures are generated from them by simulation. During simulation, multiple scene pictures can be synthesized from the same real-scene picture by adjusting working-condition parameters, such as one or more of illumination intensity, illumination angle and workpiece surface cleanliness.
In this way, not only can the working conditions shown by the real pictures be simulated, but artificial scene pictures covering many different working conditions can also be derived from each real-scene picture by adjusting the working-condition parameters during synthesis. This greatly enriches the working scenes, improves the robustness of the learning network model, and raises its detection precision. It is worth noting that a large number of scene pictures cannot be obtained before the whole system enters actual production, and a good network model cannot be trained on a small data set. Artificially synthesizing pictures compensates well for this shortage, since real-scene pictures are difficult to collect. Although a synthesized picture is less realistic than a real one, the network can still learn and extract common features shared by the two kinds of scene, which plays an important role in enhancing the generalization capability of the corner point and edge point detection model. By generating a large amount of synthetic data and training on that data alone, the accuracy of the model can reach 90%. A sketch of such illumination-based synthesis follows.
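The snippet below is one way such illumination variants could be produced from a single camera picture, under assumptions of my own: a global gain models the intensity and a directional brightness ramp stands in for the light angle. The patent does not specify its synthesis procedure, and the file name is hypothetical.

```python
# Hedged sketch: gain + directional ramp as a stand-in illumination model.
import cv2
import numpy as np

def synthesize(img, intensity=1.0, angle_deg=45.0):
    """Re-light `img` with a global intensity scale and a brightness ramp
    oriented along `angle_deg`, imitating a changed light direction."""
    h, w = img.shape[:2]
    theta = np.deg2rad(angle_deg)
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    ramp = xx * np.cos(theta) + yy * np.sin(theta)
    ramp = (ramp - ramp.min()) / (ramp.max() - ramp.min() + 1e-6)
    gain = intensity * (0.7 + 0.6 * ramp)                  # per-pixel gain map
    out = img.astype(np.float32) * gain[..., None]
    return np.clip(out, 0, 255).astype(np.uint8)

scene = cv2.imread("real_scene.png")                       # picture from the camera
if scene is None:                                          # fallback so the sketch runs
    scene = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
variants = [synthesize(scene, i, a) for i, a in ((0.6, 20), (1.0, 45), (1.4, 70))]
```

Each variant would then be labeled with the corner or edge points inherited from the source picture, since synthesis does not move them.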
The following describes two cases of artificially synthesized scene pictures:
case one: according to the scene pictures, the step of artificially synthesizing a plurality of synthesized pictures comprises the following steps:
generating a first composite picture according to the first illumination intensity and the first illumination angle;
generating a second composite picture according to the second illumination intensity and the second illumination angle;
generating a third composite picture according to the third illumination intensity and the third illumination angle;
the first illumination intensity is larger than the second illumination intensity and smaller than the third illumination intensity;
the first illumination angle is greater than the second illumination angle and less than the third illumination angle.
In this embodiment, the first composite picture is synthesized from the first illumination intensity and the first illumination angle; from it, a third composite picture whose illumination intensity and angle are both larger, or a second composite picture whose illumination intensity and angle are both smaller, can be synthesized. In some embodiments, the cleanliness of the workpiece surface may also be varied: for example, a first, second and third cleanliness may be set corresponding to the first, second and third composite pictures respectively, the first cleanliness being greater than the second and less than the third. In some embodiments, the texture of the workpiece surface may also be considered: the textures of various workpieces are combined into one picture to form a comprehensive texture picture, and the number of textures integrated in it can be varied from case to case. For example, a first, second and third texture count may be set corresponding to the first, second and third composite pictures respectively, the first texture count being greater than the second and less than the third.
Case two: the step of artificially synthesizing a plurality of composite pictures for different working scenes further comprises:
generating a fourth composite picture according to the fourth illumination intensity and the fourth illumination angle;
generating a fifth composite picture according to the fifth illumination intensity and the fifth illumination angle;
the first illumination intensity is larger than the fifth illumination intensity and smaller than the fourth illumination intensity;
the first illumination angle is smaller than the fifth illumination angle and larger than the fourth illumination angle.
In this embodiment, the first composite picture is synthesized from the first illumination intensity and the first illumination angle; from it, a fourth composite picture whose illumination intensity is greater and whose illumination angle is smaller than the first can be synthesized, as can a fifth composite picture whose illumination intensity is smaller and whose illumination angle is greater than the first. In some embodiments, the cleanliness of the workpiece surface may also be varied: for example, a first, fourth and fifth cleanliness may be set corresponding to the first, fourth and fifth composite pictures respectively, the first cleanliness being greater than the fifth and less than the fourth. In some embodiments, the texture of the workpiece surface may also be considered, combining the textures of various workpieces into a comprehensive texture picture whose texture count can be varied from case to case. For example, a first, fourth and fifth texture count may be set corresponding to the first, fourth and fifth composite pictures respectively, the first texture count being greater than the fifth and less than the fourth. The parameter grid below makes both cases concrete.
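To make the two cases concrete, the hypothetical parameter grid below picks numbers that satisfy every ordering stated above and feeds them to the `synthesize` sketch given earlier; the values themselves are invented purely for illustration.

```python
# Hypothetical (intensity, angle) grid satisfying the orderings of both cases:
# intensity I2 < I1 < I3 and I5 < I1 < I4; angle A2 < A1 < A3 and A4 < A1 < A5.
settings = {
    1: (1.0, 45.0),   # first composite picture (reference)
    2: (0.7, 30.0),   # dimmer, shallower angle than the first
    3: (1.5, 60.0),   # brighter, steeper angle than the first
    4: (1.3, 30.0),   # brighter but shallower than the first
    5: (0.8, 60.0),   # dimmer but steeper than the first
}
I = {k: v[0] for k, v in settings.items()}
A = {k: v[1] for k, v in settings.items()}
assert I[2] < I[1] < I[3] and I[5] < I[1] < I[4]
assert A[2] < A[1] < A[3] and A[4] < A[1] < A[5]
composites = {k: synthesize(scene, *p) for k, p in settings.items()}
```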
After the accurate pre-trained model has been generated, corner point pictures and edge point pictures of the real scene are acquired, the real-scene picture data are fed into the learning network for training, and the accurate pre-trained model is refined into the final detection model. In this embodiment, the step of generating the corner point pictures and edge point pictures of the real scene comprises: collecting a plurality of real corner point pictures and edge point pictures; and labeling the corresponding corner points on the corner point pictures and the corresponding edge points on the edge point pictures.
Finally, the fully trained model predicts the corner points and edge points on real pictures. Before the system enters production, real pictures are hard to obtain; they are also more complex, with varied backgrounds and interference. After a small number of real-scene pictures are collected and labeled, training of the previous network continues, exposing it to real-scene data; back-propagation corrects the parameters in the model, further improving its accuracy and robustness on real-scene pictures. A sketch of this fine-tuning stage follows.
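A minimal sketch of this two-stage schedule is given below; the toy model, random stand-in tensors, learning rate and file names are all assumptions, present only to show pre-trained weights being reloaded and then corrected by back-propagation on real-scene data.

```python
# Toy two-stage training sketch; all names and numbers are assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for the detector above
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))

# Stage 1: weights trained on synthetic pictures would be saved like this.
torch.save(model.state_dict(), "pretrain_synthetic.pt")

# Stage 2: reload the pre-trained model, then fine-tune on a small labelled
# real-scene set with a reduced learning rate.
model.load_state_dict(torch.load("pretrain_synthetic.pt"))
opt = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
real_images = torch.rand(4, 3, 128, 128)     # placeholder real-scene batch
real_points = torch.rand(4, 2)               # labelled corner/edge coordinates
for _ in range(10):
    loss = nn.functional.mse_loss(model(real_images), real_points)
    opt.zero_grad()
    loss.backward()                          # back-propagation corrects the parameters
    opt.step()
torch.save(model.state_dict(), "final_detector.pt")
```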
In this embodiment, a learning network for detecting the corresponding corner points and edge points is designed for a specific scene; preset artificially synthesized scene picture data are then fed into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model; corner point pictures and edge point pictures of the real scene are then acquired and fed into the learning network for training, and the accurate pre-trained model is refined into the final detection model. A large number of artificially synthesized scene pictures thus let the learning network model learn the detection conditions of many working scenes, compensating well for the shortage of high-quality training pictures, and the network can extract common features shared by the two kinds of scene, which plays a very important role in enhancing the generalization capability of the corner point and edge point detection model. By generating a large amount of synthetic data and training on it, the accuracy of the model can reach 90%; after training on real pictures, the accurate pre-trained model is refined into the final detection model, and, as shown in fig. 2 to 4, the detection accuracy approaches 100%.
In some embodiments, in order to improve the detection precision of the learning network, before the step of feeding the preset artificially synthesized scene picture data into the learning network for training so that back-propagation optimizes the parameters and generates the accurate pre-trained model, the method further comprises: pre-training the learning network on a publicly available data set to generate a pre-trained model. Pre-training on a public data set gives the learning network a high-quality initialization, yielding better initial network parameters.
In some embodiments, for training and testing efficiency, the model is preloaded into memory before training to speed up loading and training. In this embodiment, to accelerate detection, the learning network model may be preloaded into GPU memory before training, or the pictures to be learned may be preloaded there, making them immediately available to the learning network. Furthermore, beyond preloading, network pruning can remove the branches before and after certain layers: overall performance is compared, useless network structures are removed, and the network is simplified. This reduces unnecessary computation and further accelerates detection. To bring the overall detection time of the model within the cycle-time requirements of industrial production, the detection time is shortened to within 3 seconds. A sketch of such preloading follows.
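One possible form of the preloading, sketched under the assumption of a PyTorch runtime: the model and an input picture are moved into GPU memory once, and a warm-up call is made before production begins. The toy model is a placeholder.

```python
# Sketch: keep model and data resident in GPU memory to avoid per-frame transfers.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
model = model.to(device).eval()              # weights preloaded into GPU memory

frame = torch.rand(1, 3, 128, 128, device=device)  # picture allocated on the GPU
with torch.no_grad():
    model(frame)                             # warm-up call caches kernels before production
```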
The invention also provides a workpiece grabbing method comprising the steel plate corner point and edge point positioning method; the positioning method is as described in the embodiments above. Since the workpiece grabbing method adopts all the technical schemes of those embodiments, it has at least all their beneficial effects, which are not repeated here.
The workpiece grabbing method comprises the following steps:
calling a camera to photograph the corner points and edge points of the steel plate, and locating the corner points and edge points of the steel plate in the captured image; acquiring the coordinates of the corner points and edge points of the steel plate in the image;
converting the coordinates of the corner points and edge points of the steel plate in the image into the robot coordinate system, and matching the steel plate nesting chart into the robot coordinate system to obtain the corner point coordinates and edge point coordinates of each workpiece;
calculating the maximum rotation angle allowed when the suction cup picks up a workpiece, from the maximum dimension from the center point of the suction cup to its edge, the corner point and edge point coordinates of each workpiece, and the positions of the vacuum columns;
comparing the allowed maximum rotation angle with the preset rotation angle given in the nesting chart; if the allowed maximum rotation angle is greater than or equal to the preset rotation angle, the mechanical arm grabs the workpiece.
Specifically, in this embodiment, the workpiece grabbing method is based on a nesting chart, which may be provided by the customer or by a third party. During blanking, a poor layout leaves unusable remnants and causes great waste; by nesting, small parts of different shapes can be fitted among larger ones, so that as many parts as possible are produced from the limited plate area, improving material utilization and reducing waste. In other words, the nesting chart is a drawing that arranges the parts by nesting. When a workpiece is processed, the steel plate is first cut into the required workpieces according to the nesting chart, and the mechanical arm then grabs the workpieces on the steel plate, based on the nesting chart, according to the workpiece grabbing method.
Calling a camera to photograph the corner points and edge points of the steel plate, locating the corner points and edge points of the steel plate in the captured image, and acquiring their coordinates in the image:
in this embodiment, a camera first photographs the corner points and edge points of the steel plate; the captured image is processed, the corner points and edge points are located, and they are labeled. There are many ways to label the edge points and corner points, whether manually or with a mature algorithm. The coordinates of the corner points and edge points of the steel plate in the image are relative to the coordinate system established for the image. In this embodiment, an intrinsic matrix is configured in the robot, and the corner points and edge points in the image are converted into the coordinate system of the image captured by the camera. The steel plate corner point and edge point positioning method itself is described in detail in the embodiments above.
Converting the coordinates of the corner points and edge points of the steel plate in the image into the robot coordinate system, and matching the steel plate nesting chart into the robot coordinate system to obtain the corner point and edge point coordinates of each workpiece:
in this embodiment, the corner point and edge point coordinates in the image can be converted into the robot coordinate system in various ways; one example follows. The corner points and edge points in the image are first converted into the camera coordinate system through the intrinsic matrix, and then into the robot base coordinate system through the extrinsic matrix (described below). Once the corner point and edge point coordinates of the steel plate are determined, the nesting chart is applied: since the position of every workpiece within the nesting chart is fixed, the robot can derive the specific position of each workpiece on the steel plate, and hence its real grabbing position. A sketch of this conversion follows.
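The sketch below illustrates the pixel-to-robot conversion under stated assumptions: the intrinsic matrix values, the identity extrinsic placeholder and the known plate depth are invented for illustration (on a conveyor, the plate usually sits at a fixed, known height).

```python
# Sketch of the two-step conversion: pixel -> camera frame (intrinsic matrix),
# then camera frame -> robot base frame (extrinsic matrix). Values are assumed.
import numpy as np

K = np.array([[1200.0,    0.0, 640.0],     # intrinsic matrix: fx, fy, cx, cy
              [   0.0, 1200.0, 480.0],
              [   0.0,    0.0,   1.0]])
R_c_to_f = np.eye(4)                       # extrinsic: camera -> robot base (placeholder)

def pixel_to_robot(u, v, depth):
    """Back-project pixel (u, v) at a known depth, then change frames."""
    p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])   # camera frame
    p_hom = np.append(p_cam, 1.0)                              # homogeneous coords
    return (R_c_to_f @ p_hom)[:3]                              # robot base frame

corner_xyz = pixel_to_robot(853, 402, depth=1.5)               # a detected corner point
```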
The step of converting the coordinates of the corner points and edge points of the steel plate in the image into the robot coordinate system comprises: placing a calibration board, calling the camera to photograph it, and computing the camera-to-board transformation matrix R_c_to_m;
binding a laser pen to the end of the suction cup of the mechanical arm, moving it to three points on the calibration board, and recording the (x, y) value of each point from the PLC display; the first point represents the origin, the second a point along the X direction, and the third a point along the Y direction;
the unit vectors e_x and e_y in the X and Y directions are computed by subtracting the origin coordinates and normalizing, and their cross product gives the unit vector e_z in the Z direction; e_x, e_y, e_z and the recorded origin coordinates o(x, y, z) constitute the transformation matrix R_m_to_f from the calibration board coordinate system to the robot coordinate system, with e_x, e_y, e_z as its rotation columns, o as its translation, and a homogeneous bottom row (0, 0, 0, 1);
the extrinsic matrix R_c_to_f is the R_m_to_f point multiplied by R_c_to_m.
In the coordinate conversion process, a calibration board is first provided, the camera is called to photograph it, and the camera-to-board transformation matrix is computed. Meanwhile, a laser pen bound to the end of the suction cup of the mechanical arm is moved to three points on the calibration board, and the (x, y) values of the three points on the PLC display are recorded. A sketch of the construction follows.
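The sketch below builds R_m_to_f from the three recorded points exactly as the steps above describe: subtract the origin, normalize, take the cross product for Z, then compose with the camera-to-board transform. The point coordinates and the identity R_c_to_m are invented placeholders.

```python
# Sketch of assembling the extrinsic matrix; coordinates are made up.
import numpy as np

o  = np.array([1.10, 0.20, 0.05])          # laser-pen origin point (x, y, z)
px = np.array([1.60, 0.20, 0.05])          # recorded point along board X
py = np.array([1.10, 0.70, 0.05])          # recorded point along board Y

e_x = (px - o) / np.linalg.norm(px - o)    # subtract origin, normalize
e_y = (py - o) / np.linalg.norm(py - o)
e_z = np.cross(e_x, e_y)                   # Z axis from the cross product

R_m_to_f = np.eye(4)                       # board frame -> robot frame
R_m_to_f[:3, 0], R_m_to_f[:3, 1], R_m_to_f[:3, 2] = e_x, e_y, e_z
R_m_to_f[:3, 3] = o                        # recorded origin as translation

R_c_to_m = np.eye(4)                       # camera -> board, from board detection
R_c_to_f = R_m_to_f @ R_c_to_m             # extrinsic matrix used above
```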
Calculating the maximum rotation angle allowed when the suction cup picks up a workpiece, from the maximum dimension from the center point of the suction cup to its edge, the corner point and edge point coordinates of each workpiece, and the positions of the vacuum columns;
comparing the allowed maximum rotation angle with the preset rotation angle given in the nesting chart; if the allowed maximum rotation angle is greater than or equal to the preset rotation angle, the mechanical arm grabs the workpiece:
because of its large diameter, the suction cup of the mechanical arm may strike a vacuum column at the edge of the plate conveyor chain while rotating. The maximum dimension from the center point of the suction cup to its edge may take many values; in this embodiment the longest distance is 0.71 m, though in some embodiments it may be 0.6 to 0.8 m. For each vacuum column, the center point of the column in the horizontal plane is taken as its position. On the steel plate there is a region to be inspected, generally indicated by the rectangular frame in fig. 2, which is the arrangement region of the workpieces. If the center coordinates of a part lie within that rectangular region, the maximum (threshold) angle through which the suction cup can rotate while picking up the part must be estimated online; that is, how far the suction cup can rotate at most while picking up the workpiece without colliding with a vacuum column. If the rotation angle given in the nesting chart is larger than this threshold angle, i.e. the rotation required to grab the workpiece exceeds what is actually allowed, the planned grabbing rotation is invalid: there is a risk of collision and the workpiece cannot be grabbed directly; the optimal rotation angle and the associated transformation must be recalculated until the allowed maximum rotation angle is greater than or equal to the preset rotation angle required by the nesting chart, after which the workpiece is grabbed. Of course, if the calculated allowed maximum rotation angle is already larger than the rotation angle specified by the nesting chart, the workpiece can be grabbed directly.
During this calculation, the specific positions of the workpiece and the vacuum columns in the robot coordinate system are considered, together with the position of the mechanical arm and the maximum dimension of the suction cup. If, throughout the rotation, the position reached by the edge of the suction cup never interferes with a vacuum column, the workpiece can be grabbed directly at the current rotation angle. If the position reached by the suction cup would interfere with a vacuum column during grabbing, the current rotation angle is unreasonable and the grabbing route must be recalculated. A conservative version of this check is sketched below.
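The following sketch is a deliberately conservative simplification of the threshold-angle estimate: the suction cup sweeps a disc of radius 0.71 m (the centre-to-edge dimension above) around the workpiece centre, and a rotation is allowed only while that disc stays clear of every vacuum column. The column radius, column positions and workpiece centre are assumptions; the patent's actual calculation may be finer-grained.

```python
# Conservative obstacle-avoidance sketch; geometry values are assumptions.
import numpy as np

CUP_REACH = 0.71        # max distance from suction-cup centre to its edge (m)
COLUMN_RADIUS = 0.05    # assumed vacuum-column radius (m)

def max_allowed_rotation(center_xy, columns_xy):
    """Threshold angle for rotating about the workpiece centre. Conservative:
    if the disc swept by the cup clears every column, any rotation is safe;
    otherwise none is (a real implementation would compute a partial arc)."""
    for col in columns_xy:
        clearance = np.linalg.norm(np.asarray(center_xy) - np.asarray(col))
        if clearance <= CUP_REACH + COLUMN_RADIUS:
            return 0.0
    return 360.0

columns = [(0.3, 1.8), (2.5, 1.8)]      # column centres in the robot frame (assumed)
preset_angle = 35.0                     # rotation demanded by the nesting chart
if max_allowed_rotation((1.4, 1.0), columns) >= preset_angle:
    print("grab: preset rotation is within the allowed threshold")
else:
    print("replan: recompute the rotation angle before grabbing")
```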
In this embodiment, a camera is called to photograph the corner points and edge points of the steel plate, and the corner points and edge points in the captured image are located; the coordinates of the corner points and edge points of the steel plate in the image are acquired; those coordinates are converted into the robot coordinate system through the intrinsic and extrinsic matrices, and the steel plate nesting chart is matched into the robot coordinate system to obtain the corner point and edge point coordinates of each workpiece; the maximum rotation angle allowed when the suction cup picks up a workpiece is then calculated from the maximum dimension from the center point of the suction cup to its edge, the corner point and edge point coordinates of each workpiece, and the positions of the vacuum columns; the allowed maximum rotation angle is compared with the preset rotation angle set in the nesting chart, and when it is greater than or equal to the preset rotation angle, the mechanical arm grabs the workpiece. By fully accounting for the actual working condition of the current workpiece and the positions of the vacuum columns, the mechanical arm and so on, the suction cup avoids the vacuum columns (obstacle avoidance) while grabbing the workpiece, so that grabbing is stable and reliable; the robustness of the system and the speed of workpiece identification and grabbing are improved, and the grabbing accuracy is required to reach 100%.
The invention also provides a workpiece production line using the workpiece grabbing method; the grabbing method is as described in the embodiments above. Since the workpiece production line adopts all the technical schemes of those embodiments, it has at least all their beneficial effects, which are not repeated here.
The foregoing description covers only the preferred embodiments of the present invention and does not limit its scope; all equivalent structural changes made using the description and drawings of the present invention, and all direct or indirect applications in other related technical fields, are likewise included in the scope of the invention.
Claims (9)
1. A steel plate corner point and edge point positioning method, characterized by comprising the following steps:
designing a learning network for detecting the corresponding corner points and edge points of a specific scene;
feeding preset artificially synthesized scene picture data into the learning network for training, so that back-propagation optimizes the parameters and generates an accurate pre-trained model;
acquiring corner point pictures and edge point pictures of a real scene, feeding the real-scene picture data into the learning network for training, and refining the accurate pre-trained model into the final detection model;
wherein the step of designing a learning network for detecting the corresponding corner points and edge points of a specific scene comprises:
the feature extraction module extracts features both bottom-up and top-down through convolution modules, combines the two sets of features, and applies a further convolution to eliminate the aliasing effect of the merge;
the region proposal module is trained to coarsely propose potential target regions, so as to preliminarily locate a rough target area;
the final prediction module, a trained network combining convolution and fully connected layers, further refines the precise positions of the corner points and edge points;
and the feature extraction module, the region proposal module and the final prediction module are trained end to end as a whole, the three modules being chained in series at prediction time.
2. The steel plate corner point and edge point positioning method according to claim 1, wherein before the step of feeding the preset artificially synthesized scene picture data into the learning network for training so that back-propagation optimizes the parameters, the method further comprises:
pre-training the learning network on a publicly available data set to generate a pre-trained model.
3. The steel plate corner point and edge point positioning method according to claim 1 or 2, wherein the step of artificially synthesizing the scene picture data comprises:
acquiring a scene picture taken by a camera;
artificially synthesizing a plurality of composite pictures from the scene picture;
and labeling the corner points or edge points of each composite picture.
4. The steel plate corner point and edge point positioning method according to claim 3, wherein the step of artificially synthesizing a plurality of composite pictures from the scene picture comprises:
generating a first composite picture according to the first illumination intensity and the first illumination angle;
generating a second composite picture according to the second illumination intensity and the second illumination angle;
generating a third composite picture according to the third illumination intensity and the third illumination angle;
the first illumination intensity is larger than the second illumination intensity and smaller than the third illumination intensity;
the first illumination angle is greater than the second illumination angle and less than the third illumination angle.
5. The steel plate corner point and edge point positioning method according to claim 4, wherein the step of artificially synthesizing a plurality of composite pictures for different working scenes further comprises:
generating a fourth composite picture according to the fourth illumination intensity and the fourth illumination angle;
generating a fifth composite picture according to the fifth illumination intensity and the fifth illumination angle;
the first illumination intensity is larger than the fifth illumination intensity and smaller than the fourth illumination intensity;
the first illumination angle is smaller than the fifth illumination angle and larger than the fourth illumination angle.
6. The steel plate corner point and edge point positioning method according to claim 1 or 2, wherein the step of generating the corner point pictures and edge point pictures of the real scene comprises:
collecting a plurality of real corner point pictures and edge point pictures;
and labeling the corresponding corner points on the corner point pictures and the corresponding edge points on the edge point pictures.
7. The steel plate corner point and edge point positioning method according to claim 1 or 2, wherein the model is preloaded into GPU memory before training to speed up loading and training.
8. A workpiece gripping method, characterized by comprising the steel plate corner and edge point positioning method according to any one of claims 1 to 7.
9. A workpiece production line, characterized in that the workpiece gripping method as claimed in claim 8 is used.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010486614.0A (CN111768369B) | 2020-06-01 | 2020-06-01 | Steel plate corner point and edge point positioning method, workpiece grabbing method and production line
Publications (2)
Publication Number | Publication Date
---|---
CN111768369A | 2020-10-13
CN111768369B | 2023-08-25
Family
ID=72719750
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010486614.0A (CN111768369B, Active) | Steel plate corner point and edge point positioning method, workpiece grabbing method and production line | 2020-06-01 | 2020-06-01
Country Status (1)
Country | Link
---|---
CN | CN111768369B (en)
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114078162B (en) * | 2022-01-19 | 2022-04-15 | 湖南视比特机器人有限公司 | Truss sorting method and system for workpiece after steel plate cutting |
CN114463751B (en) * | 2022-01-19 | 2024-11-19 | 湖南视比特机器人有限公司 | Corner positioning method and device based on neural network and detection algorithm |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8914313B2 (en) * | 2012-07-18 | 2014-12-16 | Seiko Epson Corporation | Confidence based vein image recognition and authentication |
CN113039563B (en) * | 2018-11-16 | 2024-03-12 | 辉达公司 | Learn to generate synthetic datasets for training neural networks |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1944731A2 (en) * | 2007-01-12 | 2008-07-16 | Seiko Epson Corporation | Method and apparatus for detecting objects in an image |
EP1988506A2 (en) * | 2007-05-03 | 2008-11-05 | Panasonic Electric Works Europe AG | Method for automatically determining testing areas, testing method and testing system |
JP2013152128A (en) * | 2012-01-25 | 2013-08-08 | Hitachi Ltd | Surface inspection method and apparatus therefor |
CN108731588A (en) * | 2017-04-25 | 2018-11-02 | 宝山钢铁股份有限公司 | A kind of machine vision steel plate length and diagonal line measuring device and method |
CN108510062A (en) * | 2018-03-29 | 2018-09-07 | 东南大学 | A kind of robot irregular object crawl pose rapid detection method based on concatenated convolutional neural network |
CN108764248A (en) * | 2018-04-18 | 2018-11-06 | 广州视源电子科技股份有限公司 | Image feature point extraction method and device |
CN109636772A (en) * | 2018-10-25 | 2019-04-16 | 同济大学 | The defect inspection method on the irregular shape intermetallic composite coating surface based on deep learning |
CN110298292A (en) * | 2019-06-25 | 2019-10-01 | 东北大学 | Detection method is grabbed when the high-precision real of rule-based object polygon Corner Detection |
CN110443130A (en) * | 2019-07-01 | 2019-11-12 | 国网湖南省电力有限公司 | A kind of electric distribution network overhead wire abnormal state detection method |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant