CN117885096B - Method and device for controlling welding operation of a robot end welding gun
- Publication number
- CN117885096B (application CN202410135093.2A)
- Authority
- CN
- China
- Prior art keywords
- welding
- point
- welding seam
- starting
- robot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1669—Programme controls characterised by programming, planning systems for manipulators characterised by special application, e.g. multi-arm co-operation, assembly, grasping
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B23—MACHINE TOOLS; METAL-WORKING NOT OTHERWISE PROVIDED FOR
- B23K—SOLDERING OR UNSOLDERING; WELDING; CLADDING OR PLATING BY SOLDERING OR WELDING; CUTTING BY APPLYING HEAT LOCALLY, e.g. FLAME CUTTING; WORKING BY LASER BEAM
- B23K37/00—Auxiliary devices or processes, not specially adapted for a procedure covered by only one of the other main groups of this subclass
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J11/00—Manipulators not otherwise provided for
- B25J11/005—Manipulators for mechanical processing tasks
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Robotics (AREA)
- Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a method and a device for controlling the welding operation of a robot end welding gun. The method of the invention comprises the steps of: S1, performing weld rough positioning on the image information of a binocular camera, consisting of a left camera and a right camera above the welding work area, to obtain the three-dimensional start point and start vector of the weld; S2, performing real-time, synchronous weld fine positioning and weld tracking on the image information of a 3D line laser camera at the robot end, the weld position points obtained by the fine positioning being used to adjust the welding gun pose in real time; and S3, according to the spatial positioning information from step S1, controlling the robot to start the arc at the weld start point, weld along the start vector direction while adjusting per step S2, and extinguish the arc at the final weld position point. The device of the invention corresponds to the method. The invention is applicable to the welding tasks of various workpieces, and can cope with problems such as workpiece thermal deformation and clamping loosening and displacement during welding.
Description
Technical Field
The invention belongs to the technical field of intelligent robots, and particularly relates to a method and a device for controlling the welding operation of a welding gun at the end of a robot.
Background
Welding, as a critical manufacturing process, plays an irreplaceable role in modern industry. With the rapid development of automation technology, robot welding offers remarkable advantages in production efficiency and quality, and is widely applied in automobile manufacturing, aerospace, construction and other fields. However, owing to the limited adaptability and flexibility of robotic welding systems, the welding process still has the following disadvantages:
(1) Weld coarse positioning is missing from the welding process, so human intervention is required to determine the weld start point; the weld cannot be automatically identified, nor can the robot be automatically guided to the weld position.
(2) High-precision measurement and pose adjustment cannot be performed in real time during welding, so real-time state changes such as workpiece clamping deviation and weld thermal deformation cannot be handled, which degrades welding precision and quality.
Coarse positioning is a key step in ensuring welding accuracy, yet existing weld tracking systems fall short here, leading to position deviation and quality problems during welding. Deep learning methods are now being adopted: by virtue of the strong learning capacity of neural networks, welds can be identified effectively, so coarse positioning yields more accurate position information. As for weld changes during the welding process, existing weld tracking systems cannot meet the demands of complex workpiece structures and irregular welds; real-time fine positioning and online tracking of the weld is therefore an important means of improving welding quality.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, the invention aims to provide a method and a device for controlling the welding operation of a welding gun at the end of a robot.
The invention is realized as follows: a method of controlling the welding operation of a robot end welding gun comprises the steps of:
S1, performing weld rough positioning processing on image information of a binocular camera consisting of a left camera and a right camera above a welding working area to obtain a three-dimensional starting point and a starting vector of a weld;
S2, performing real-time and synchronous welding seam precise positioning and welding seam tracking on image information of a 3D line laser camera at the tail end of the robot, and performing real-time adjustment on the welding gun pose by obtaining a welding seam position point through the welding seam precise positioning;
And S3, according to the spatial positioning information from step S1, controlling the robot to start the arc at the weld start point, weld along the start vector direction while adjusting per step S2, and extinguish the arc at the final weld position point.
Preferably, the step S1 specifically includes the following steps:
S1-1, calibrating the mutual positions of a left camera and a right camera and internal parameters and distortion coefficients of the cameras;
S1-2, recognizing a weld joint region based on a weld joint recognition model of a Faster-RCNN network;
S1-3, extracting a weld joint starting point and a starting vector of a weld joint region based on a rotating frame target detection model of a PP-YOLOE-R network;
S1-4, respectively performing pixel matching on a welding seam starting point and a starting vector ending point which are obtained by the binocular camera to obtain a three-dimensional starting point and a starting vector of the welding seam.
Preferably, in step S1-2, the weld seam area identification model based on the Faster-RCNN network comprises the following steps:
extracting features of the acquired images by adopting a multi-layer convolutional neural network;
Extracting Positive Anchors as candidate regions by using anchor frames and Softmax classification, and completing preliminary detection target positioning;
classifying targets in the candidate regions, and adjusting the positions of the target detection frames.
Preferably, in step S1-3, the extracting the weld start point and the start vector of the weld region based on the rotating frame object detection model of the PP-YOLOE-R network includes the following steps:
Classifying the welds in the weld region image and the weldment base plate edges, and marking them with rotating frames;
Connecting the midpoints of the short sides of each rotating frame to extract the corresponding weld line and base plate edge line;
Calculating the pixel values corresponding to the intersection point of the extracted weld line and base plate edge line, and marking it as the weld start point; the weld start point is the starting point of the start vector, the end point of the start vector is the midpoint of the short side at the far end of the weld rotating frame, and the start vector is obtained after calculating the pixel value of this end point.
Preferably, in step S2, the weld tracking is specifically: the poses of adjacent weld points are sent to the robot in real time, and the robot is controlled to move between adjacent weld points by linear interpolation, so as to guide the welding gun to track the weld, as sketched below.
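As an illustrative, non-limiting sketch of this tracking step (in Python; the pose format and step count are assumptions, and a real controller would interpolate orientations as quaternions rather than componentwise):

```python
import numpy as np

def interpolate_weld_poses(pose_a, pose_b, n_steps=10):
    """Linear interpolation between two adjacent weld-point poses.

    pose_a, pose_b: 6-vectors [x, y, z, rx, ry, rz] (position plus an
    orientation parameterization). Format and n_steps are assumptions,
    not fixed by the patent.
    """
    pose_a = np.asarray(pose_a, dtype=float)
    pose_b = np.asarray(pose_b, dtype=float)
    # t runs from 0 to 1; each t yields one intermediate pose.
    return [(1.0 - t) * pose_a + t * pose_b
            for t in np.linspace(0.0, 1.0, n_steps)]

# Stream the interpolated poses to the robot as each new weld point arrives.
for pose in interpolate_weld_poses([0, 0, 0, 0, 0, 0], [5, 1, 0, 0, 0, 0.1]):
    pass  # send_to_robot(pose) -- hypothetical call to the robot controller
```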
Preferably, in step S2, obtaining weld position points through the weld fine positioning so as to adjust the welding gun pose in real time comprises the following steps:
S2-1, performing hand-eye calibration on the 3D line laser camera and the robot;
S2-2, acquiring the first laser stripe point cloud of the weld through the 3D line laser camera, performing point cloud line segmentation with the RANSAC algorithm, fitting the segmented lines by the spatial line least squares method, and then extending the fitted lines and solving their intersection to obtain weld position point 1 of the current laser stripe point cloud;
S2-3, controlling the robot to perform linear movement along the three-dimensional initial vector direction by a small distance, obtaining a second laser stripe point cloud of the welding line, and repeating the step S2-2 to obtain a welding line position point 2 of the second laser stripe point cloud;
S2-4, repeating step S2-3 until the welding gun approaches weld position point 1, planning the robot trajectory with the obtained weld position points, starting the welding gun, and moving the robot along the planned trajectory while synchronously acquiring laser stripe point clouds, calculating weld position points and re-planning the robot trajectory, until the last weld position point is welded and the welding operation is completed.
The invention further discloses a device for controlling the welding operation of a robot end welding gun, the device comprising:
the rough positioning unit is used for performing weld rough positioning processing on image information of a binocular camera consisting of a left camera and a right camera above the welding working area to obtain a three-dimensional starting point and a starting vector of a weld;
the pose adjusting unit is used for carrying out real-time and synchronous welding seam precise positioning and welding seam tracking on the image information of the 3D line laser camera at the tail end of the robot, and carrying out real-time adjustment on the pose of the welding gun by obtaining a welding seam position point through the welding seam precise positioning;
and the welding operation unit is used for controlling the robot, according to the spatial positioning information from the rough positioning unit, to start the arc at the weld start point, weld along the start vector direction according to the operation of the pose adjusting unit, and extinguish the arc at the final weld position point.
Preferably, the coarse positioning unit comprises:
The calibration module is used for calibrating the mutual positions of the left camera and the right camera and the internal parameters and distortion coefficients of the cameras;
the weld joint region identification module is used for identifying the weld joint region based on a weld joint identification model of the Faster-RCNN network;
the welding seam endpoint recognition module is used for extracting a welding seam starting point and a starting vector of a welding seam region based on a rotating frame target detection model of the PP-YOLOE-R network;
And the three-dimensional positioning module is used for respectively carrying out pixel matching on the weld joint starting point and the starting vector ending point which are obtained by the binocular camera, and obtaining the space three-dimensional coordinates of the starting point and the three-dimensional starting vector.
Preferably, in the weld region identification module, the weld identification model based on the Faster-RCNN network identifies the weld region, comprising:
the feature extraction module is used for extracting features of the acquired images by adopting a multi-layer convolutional neural network;
The detection target positioning module is used for extracting Positive Anchors as candidate regions by using anchor frames and Softmax classification, so as to complete preliminary detection target positioning;
The detection frame position adjustment module is used for classifying targets in the candidate area and adjusting the positions of the target detection frames;
Preferably, in the weld endpoint recognition module, the extraction of the weld start point and start vector of the weld region by the rotating frame target detection model based on the PP-YOLOE-R network includes:
the classification labeling module is used for classifying the welds in the weld region image and the weldment base plate edges, and marking them with rotating frames;
the edge line extraction module is used for connecting the midpoints of the short sides of each rotating frame to extract the corresponding weld line and base plate edge line;
the weld start point marking module is used for marking the pixel value corresponding to the intersection point of the extracted weld and base plate edge line as the weld start point; the weld start point is the starting point of the start vector, the end point of the start vector is the midpoint of the short side at the far end of the weld rotating frame, and the start vector is obtained after calculating the pixel value of this end point.
Preferably, in the pose adjusting unit, the weld tracking is specifically: the poses of adjacent weld points are sent to the robot in real time, and the robot is controlled to move between adjacent weld points by linear interpolation, so as to guide the welding gun to track the weld.
Preferably, in the pose adjusting unit, obtaining weld position points through the weld fine positioning so as to adjust the welding gun pose in real time comprises:
the hand-eye calibration module is used for calibrating the hand-eye of the 3D line laser camera and the robot;
The weld joint position point acquisition module is used for acquiring a first laser stripe point cloud of a weld joint through a 3D line laser camera, performing point cloud straight line segmentation by using a RANSAC algorithm, fitting a segmentation straight line by using a space straight line least square method, and then obtaining a weld joint position point 1 of the current laser stripe point cloud after extending the space straight line to obtain an intersection point;
The welding seam position point traversing module is used for controlling the robot to perform linear movement along the three-dimensional initial vector direction by a small distance to obtain a second laser stripe point cloud of the welding seam, and repeating the operation of the welding seam position point obtaining module to obtain a welding seam position point 2 of the second laser stripe point cloud;
and the welding module is used for repeating the operation of the weld position point traversing module until the welding gun approaches weld position point 1, planning the robot trajectory with the obtained weld position points, starting the welding gun, and moving the robot along the planned trajectory while synchronously acquiring laser stripe point clouds, calculating weld position points and re-planning the robot trajectory, until the last weld position point is welded and the welding operation is completed.
Compared with the defects and shortcomings of the prior art, the invention has the following beneficial effects:
(1) The invention provides a two-stage target detection algorithm that performs weld identification and positioning on the image information of the binocular camera, which removes the need to set the weld start point manually in tracking welding and makes the method applicable to the welding tasks of various workpieces;
(2) The invention provides a weld fine positioning and tracking method based on a 3D line laser camera: high-precision weld position points are obtained through RANSAC point cloud line segmentation and spatial least-squares line fitting, and real-time trajectory planning then allows the system to better cope with problems such as workpiece thermal deformation and clamping loosening and displacement during welding.
Drawings
FIG. 1 is a flow chart of the steps of the method of the present invention;
FIG. 2 is a basic block diagram of a Faster-RCNN network;
FIG. 3 is a basic block diagram of a PP-YOLOE-R network;
FIG. 4 is a schematic view of a starting point and a starting vector of a weld to be welded in an embodiment of the present invention;
Fig. 5 is a schematic view of the structure of the device of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
The embodiment of the invention discloses a method for controlling the welding operation of a robot end welding gun, which, as shown in fig. 1, comprises the following steps:
S1, performing weld rough positioning processing on image information of a binocular camera consisting of a left camera and a right camera above a welding working area to obtain a three-dimensional starting point and a starting vector of a weld.
In step S1, the weld rough positioning process performs weld identification and positioning on the image information of the binocular camera based on a two-stage target detection algorithm: the first stage identifies the weld region with a weld identification model based on the Faster-RCNN network; the second stage uses a rotating frame detection model based on the PP-YOLOE-R network to classify the welds and weldment base plate edge lines in the weld region image and mark them with rotating frames, then connects the midpoints of the short sides of the rotating frames to extract the corresponding weld and base plate edge lines, and computes their intersection to obtain the weld start point; the starting point of the start vector coincides with the weld start point, and its end point is the midpoint of the short side at the far end of the weld rotating frame. Finally, pixel matching is performed on the weld start point and start vector end point detected in the two stages to obtain the three-dimensional start point and start vector of the weld.
Specifically, the step S1 specifically includes the steps of:
S1-1, calibrating the mutual positions of a left camera and a right camera and internal parameters and distortion coefficients of the cameras;
in step S1-1, the projection matrix of each camera obtained after calibration is:

$$M = K \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \end{bmatrix} \tag{1}$$

where $K$ is the intrinsic matrix of the camera, and the rotation matrix $R_{3\times 3}$ together with the translation matrix $T_{3\times 1}$ forms the extrinsic matrix of the camera.
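As a minimal sketch of equation (1) (Python; the calibration values below are placeholders, not from the patent):

```python
import numpy as np

def projection_matrix(K, R, T):
    """Assemble M = K [R | T], the 3x4 projection matrix of equation (1)."""
    return K @ np.hstack([R, T.reshape(3, 1)])

# Placeholder intrinsics and extrinsics for the left camera.
K = np.array([[1200.0,    0.0, 640.0],
              [   0.0, 1200.0, 512.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                         # world-to-camera rotation
T = np.zeros(3)                       # world-to-camera translation
M_left = projection_matrix(K, R, T)   # reused in equations (2)-(5) below
```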
S1-2, recognizing a weld joint region based on a weld joint recognition model of a Faster-RCNN network;
In step S1-2, weld identification and positioning are performed on the image information of the binocular camera based on a two-stage target detection algorithm, divided into two processes: weld region identification, and acquisition of the weld start point and start vector. The weld region identification uses a weld identification model based on the Faster-RCNN network (FIG. 2) (reference: https://arxiv.org/abs/2211.02386, model usage: https://aistudio.baidu.com/projectdetail/5058293). The model consists of a feature extraction module, a candidate region generation module, and a target identification and position regression module. The weld region identification model based on the Faster-RCNN network comprises the following steps:
(1) Extracting features of the acquired images by adopting a multi-layer convolutional neural network;
the feature extraction module performs feature extraction on the acquired image by adopting a multi-layer convolutional neural network, and comprises 13 convolutional layers, 13 ReLU layers and 4 pooling layers.
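This 13-convolution/13-ReLU/4-pooling layout matches a VGG16 trunk truncated before its fifth max-pooling layer, a backbone commonly paired with Faster-RCNN; the patent does not name VGG16, so the correspondence is an assumption. A sketch:

```python
import torch
import torchvision

# Assumption: the described feature extractor is VGG16's convolutional
# trunk cut before the fifth max-pool (13 conv + 13 ReLU + 4 pooling).
backbone = torchvision.models.vgg16(weights=None).features[:30]

image = torch.randn(1, 3, 600, 800)   # one image from the binocular camera
features = backbone(image)            # feature map fed to the RPN
print(features.shape)                 # torch.Size([1, 512, 37, 50]), stride 16
```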
(2) Extracting Positive Anchors as candidate regions by using anchor frames and Softmax classification, and completing preliminary detection target positioning;
(3) And classifying targets in the candidate areas, and adjusting the positions of the target detection frames.
The target identification and position regression module classifies the targets in the candidate regions and adjusts the positions of the target detection frames, where the activation function of Softmax is:

$$S_i = \frac{e^{z_i}}{\sum_{j=1}^{n} e^{z_j}}$$

and satisfies

$$\sum_{i=1}^{n} S_i = 1$$
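A numerically stable sketch of this activation, illustrating that the outputs form a probability distribution:

```python
import numpy as np

def softmax(z):
    """S_i = exp(z_i) / sum_j exp(z_j); the S_i are positive and sum to 1."""
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())       # shift by the max for numerical stability
    return e / e.sum()

scores = softmax([2.0, 1.0, 0.1])  # e.g. class scores for one candidate region
assert abs(scores.sum() - 1.0) < 1e-12
```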
S1-3, extracting a weld joint starting point and a starting vector of a weld joint region based on a rotating frame target detection model of a PP-YOLOE-R network;
Step S1-3 preferably comprises the steps of:
(1) Classifying the welding seams of the welding seam area image and the edges of the bottom plate of the welding piece and marking by a rotating frame;
For acquiring the weld start point and start vector, the welds and weldment base plate edges in the weld region image are classified by the rotating frame target detection model based on the PP-YOLOE-R network (FIG. 3) and marked with rotating frames.
(2) Connecting the midpoints of the short sides of each rotating frame to complete the extraction of the corresponding weld line and base plate edge line (FIG. 4);
(3) Calculating the pixel values corresponding to the intersection point of the extracted weld line and base plate edge line, and marking it as the weld start point; the weld start point is the starting point of the start vector, the end point of the start vector is the midpoint of the short side at the far end of the weld rotating frame, and the start vector is obtained after calculating the pixel value of this end point. A geometric sketch of steps (2)-(3) follows.
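As an illustrative sketch in pixel coordinates (the corner ordering and helper names are assumptions; rotated-box detectors typically return an angle and size from which these corners follow):

```python
import numpy as np

def short_side_midpoints(box):
    """box: 4x2 corners of a rotating frame, ordered so that (0,1) and
    (2,3) are the two short sides; returns the midpoints defining the
    extracted line along the box axis."""
    box = np.asarray(box, dtype=float)
    return (box[0] + box[1]) / 2.0, (box[2] + box[3]) / 2.0

def line_intersection(p1, p2, q1, q2):
    """Intersection of lines p1-p2 and q1-q2 via homogeneous cross products."""
    h = lambda p: np.array([p[0], p[1], 1.0])
    x = np.cross(np.cross(h(p1), h(p2)), np.cross(h(q1), h(q2)))
    return x[:2] / x[2]          # assumes the lines are not parallel

# Weld line from the weld rotating frame, edge line from the base-plate frame.
weld_a, weld_b = short_side_midpoints([[10, 5], [10, 15], [200, 5], [200, 15]])
edge_a, edge_b = short_side_midpoints([[5, 0], [15, 0], [5, 300], [15, 300]])
start_point = line_intersection(weld_a, weld_b, edge_a, edge_b)  # weld start P
start_vector = weld_b - start_point      # points toward the far short side (Q)
```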
In step S1-3, the imaging models of the start point P in the left and right cameras are respectively:

$$Z_{c1}\begin{bmatrix} u_{p1} \\ v_{p1} \\ 1 \end{bmatrix} = M_{left}\begin{bmatrix} X_p \\ Y_p \\ Z_p \\ 1 \end{bmatrix} \tag{2} \qquad Z_{c2}\begin{bmatrix} u_{p2} \\ v_{p2} \\ 1 \end{bmatrix} = M_{right}\begin{bmatrix} X_p \\ Y_p \\ Z_p \\ 1 \end{bmatrix} \tag{3}$$

where $(u_{p1}, v_{p1}, 1)$ and $(u_{p2}, v_{p2}, 1)$ are the homogeneous pixel coordinates of $P_1$ and $P_2$ in the left and right camera images, $M_{left}$ and $M_{right}$ are the projection matrices of the left and right cameras, $(X_p, Y_p, Z_p, 1)^T$ is the homogeneous coordinate of point P in the world coordinate system, and $Z_{c1}$, $Z_{c2}$ are the depth scale factors;

the imaging models of the start vector end point Q in the left and right cameras are respectively:

$$Z_{c1}\begin{bmatrix} u_{q1} \\ v_{q1} \\ 1 \end{bmatrix} = M_{left}\begin{bmatrix} X_q \\ Y_q \\ Z_q \\ 1 \end{bmatrix} \tag{4} \qquad Z_{c2}\begin{bmatrix} u_{q2} \\ v_{q2} \\ 1 \end{bmatrix} = M_{right}\begin{bmatrix} X_q \\ Y_q \\ Z_q \\ 1 \end{bmatrix} \tag{5}$$

where $(u_{q1}, v_{q1}, 1)$ and $(u_{q2}, v_{q2}, 1)$ are the homogeneous pixel coordinates of $Q_1$ and $Q_2$ in the left and right camera images, and $(X_q, Y_q, Z_q, 1)^T$ is the homogeneous coordinate of Q in the world coordinate system.
S1-4, respectively performing pixel matching on a welding seam starting point and a starting vector ending point which are obtained by the binocular camera to obtain a three-dimensional starting point and a starting vector of the welding seam.
In step S1-4, combining equations (1), (2) and (3), the three-dimensional start point P is solved by least squares:

$$P = (A_p^T A_p)^{-1} A_p^T b_p$$

where $A_p$ and $b_p$ collect the four linear equations in $(X_p, Y_p, Z_p)$ obtained by eliminating the scale factors $Z_{c1}$ and $Z_{c2}$ from (2) and (3). Combining equations (1), (4) and (5), the start vector end point Q is solved by least squares in the same way:

$$Q = (A_q^T A_q)^{-1} A_q^T b_q$$

The three-dimensional start vector is:

$$\vec{v} = Q - P = (X_q - X_p,\ Y_q - Y_p,\ Z_q - Z_p)^T$$
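As a sketch of this least-squares solution (Python; `triangulate` is an illustrative helper taking the matched pixels and the projection matrices of step S1-1):

```python
import numpy as np

def triangulate(uv_left, uv_right, M_left, M_right):
    """Least-squares 3D point from one matched pixel pair.

    Eliminating the scale factor from u = (m1.X)/(m3.X) and
    v = (m2.X)/(m3.X) gives two linear equations per camera in
    (X, Y, Z); the stacked 4x3 system is solved by least squares."""
    A, b = [], []
    for (u, v), M in ((uv_left, M_left), (uv_right, M_right)):
        A.append(u * M[2, :3] - M[0, :3]); b.append(M[0, 3] - u * M[2, 3])
        A.append(v * M[2, :3] - M[1, :3]); b.append(M[1, 3] - v * M[2, 3])
    X, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return X

# P, Q and the start vector from their matched pixel pairs:
# P = triangulate(p_px_left, p_px_right, M_left, M_right)
# Q = triangulate(q_px_left, q_px_right, M_left, M_right)
# start_vector = Q - P
```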
S2, performing real-time, synchronous weld fine positioning and weld tracking on the image information of the 3D line laser camera at the robot end, the weld position points obtained by the fine positioning being used to adjust the welding gun pose in real time.
In step S2, the weld fine positioning specifically uses the 3D line laser camera to acquire the weld point cloud, realizes point cloud line segmentation via the RANSAC algorithm, then performs line fitting by spatial least squares, and finally computes the weld feature point as the intersection of the fitted lines.
Obtaining weld position points through the weld fine positioning so as to adjust the welding gun pose in real time comprises the following steps:
S2-1, performing hand-eye calibration on the 3D line laser camera and the robot;
The obtained weld position points must be converted into robot trajectory points on the basis of the hand-eye calibration between the 3D line laser camera and the robot. The hand-eye calibration matrix $T_{TS}$, from the camera coordinate system S to the robot tool coordinate system T, is:

$$T_{TS} = \begin{bmatrix} R_{3\times 3} & t_{3\times 1} \\ 0 & 1 \end{bmatrix}$$

where $R_{3\times 3}$ is a 3×3 rotation matrix and $t_{3\times 1}$ is a 3×1 translation vector;

the transformation from the 3D line laser camera coordinate system to the robot coordinate system is then:

$$P_R = T_{RT}\, T_{TS}\, P_S$$

where $P_R$ is the homogeneous coordinate value in the robot coordinate system, $P_S$ is the homogeneous coordinate value of the weld position point in the 3D line laser camera coordinate system, and $T_{RT}$ is the transformation matrix from the robot tool coordinate system to the robot coordinate system.
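As a short sketch of this coordinate chain (4x4 homogeneous matrices; the tool pose $T_{RT}$ would be read from the robot controller at the moment the stripe is captured):

```python
import numpy as np

def weld_point_to_robot(P_S, T_RT, T_TS):
    """P_R = T_RT @ T_TS @ P_S (all matrices 4x4 homogeneous).

    P_S : weld position point (x, y, z) in the line-laser camera frame.
    T_TS: hand-eye matrix, camera frame -> robot tool frame (calibrated).
    T_RT: tool frame -> robot base frame, read from the controller."""
    P_S_h = np.append(np.asarray(P_S, dtype=float), 1.0)
    return (T_RT @ T_TS @ P_S_h)[:3]
```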
S2-2, acquiring a first laser stripe point cloud of a welding line through a 3D line laser camera, performing point cloud straight line segmentation by using a RANSAC algorithm (https://kns.cnki.net/kcms2/article/abstractv=UQzSFoOd3SfaN7h_TG_BrRKczHdneP4RO_ScjjUOaqGoDHvtA7hHD9F0--89ZoxQYavFIaL4kjkS90eUWGHNhU2FcQIp8ghx6oE-rL7TtuTMEbBngnhla0JNb4VUkDy0o7xo3hV4b-h-nZfiAbajPg==&uniplatform=NZKPT&language=CHS article 3.2.1 section (1)), further fitting a segmentation straight line by using a space straight line least square method (https://kns.cnki.net/kcms2/article/abstractv=UQzSFoOd3SfaN7h_TG_BrRKczHdneP4RO_ScjjUOaqGoDHvtA7hHD9F0--89ZoxQYavFIaL4kjkS90eUWGHNhU2FcQIp8ghx6oE-rL7TtuTMEbBngnhla0JNb4VUkDy0o7xo3hV4b-h-nZfiAbajPg==&uniplatform=NZKPT&language=CHS article 3.2.1 section (2), and then obtaining a welding line position point 1 of the current laser stripe point cloud after the intersection point is obtained by extending the space straight line;
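The cited reference fixes the exact RANSAC and fitting procedure; the following is only a compact sketch of step S2-2 under assumed thresholds: the stripe is split into two dominant lines by RANSAC, each line is refined by a least-squares 3D fit, and the weld position point is taken as the closest point between the two extended lines (the stripe is assumed to contain two line segments):

```python
import numpy as np

def fit_line_lsq(pts):
    """Least-squares 3D line: centroid plus dominant direction (via SVD)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]                      # point on line, unit direction

def ransac_line(pts, n_iter=200, tol=0.5, rng=np.random.default_rng(0)):
    """RANSAC: best 2-point line hypothesis by inlier count (tol in mm, assumed)."""
    best = None
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), 2, replace=False)
        d = pts[j] - pts[i]
        n = np.linalg.norm(d)
        if n < 1e-9:
            continue
        d /= n
        r = pts - pts[i]
        dist = np.linalg.norm(r - np.outer(r @ d, d), axis=1)
        inliers = dist < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return best

def weld_position_point(stripe):
    """Segment two lines from one stripe and intersect their extensions."""
    in1 = ransac_line(stripe)
    c1, d1 = fit_line_lsq(stripe[in1])
    rest = stripe[~in1]
    c2, d2 = fit_line_lsq(rest[ransac_line(rest)])
    # Closest point between the two (possibly skew) extended lines:
    # solve [d1, -d2] [t1, t2]^T = c2 - c1 in the least-squares sense.
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, c2 - c1, rcond=None)
    p1, p2 = c1 + t[0] * d1, c2 + t[1] * d2
    return (p1 + p2) / 2.0               # weld position point
```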
S2-3, the robot moves linearly a small distance along the three-dimensional start vector direction to obtain the second laser stripe point cloud of the weld, and weld position point 2 of the second laser stripe point cloud is obtained by the method described in step S2-2;
S2-4, repeating step S2-3 until the welding gun approaches weld position point 1, planning the robot trajectory with the obtained weld position points, starting the welding gun, and moving the robot along the planned trajectory while synchronously acquiring laser stripe point clouds, calculating weld position points and re-planning the robot trajectory (i.e. repeating steps S2-2 to S2-3), until the last weld position point is welded and the welding operation is completed.
And S3, according to the spatial positioning information from step S1, the robot is controlled to start the arc at the weld start point, weld along the start vector direction while adjusting per step S2, and extinguish the arc at the final weld position point.
In the embodiment of the invention, weld identification and positioning are performed on the image information of the binocular camera based on the two-stage target detection algorithm, which removes the need to set the weld start point manually in tracking welding and makes the method applicable to the welding tasks of various workpieces. In addition, the embodiment of the invention obtains high-precision weld position points through RANSAC point cloud line segmentation and spatial least-squares line fitting, and real-time trajectory planning then allows the system to better cope with problems such as workpiece thermal deformation and clamping loosening and displacement during welding.
The present invention further discloses an apparatus for controlling the welding operation of a robot end welding gun; as shown in fig. 5, the apparatus comprises:
the rough positioning unit 1 is used for performing weld rough positioning processing on image information of a binocular camera consisting of a left camera and a right camera above a welding working area to obtain a three-dimensional starting point and a starting vector of a weld.
In the rough positioning unit 1, the weld rough positioning process performs weld identification and positioning on the image information of the binocular camera based on a two-stage target detection algorithm: the first stage identifies the weld region with a weld identification model based on the Faster-RCNN network; the second stage uses a rotating frame detection model based on the PP-YOLOE-R network to classify the welds and weldment base plate edge lines in the weld region image and mark them with rotating frames, then connects the midpoints of the short sides of the rotating frames to extract the corresponding weld and base plate edge lines, and computes their intersection to obtain the weld start point; the starting point of the start vector coincides with the weld start point, and its end point is the midpoint of the short side at the far end of the weld rotating frame. Finally, pixel matching is performed on the weld start point and start vector end point detected in the two stages to obtain the three-dimensional start point and start vector of the weld.
Specifically, the coarse positioning unit 1 includes: a calibration module, a weld region identification module, a weld endpoint recognition module and a three-dimensional positioning module.
The calibration module is used for calibrating the mutual positions of the left camera and the right camera and the internal parameters and distortion coefficients of the cameras;
In the calibration module, the projection matrix of each camera obtained after calibration is:

$$M = K \begin{bmatrix} R_{3\times 3} & T_{3\times 1} \end{bmatrix} \tag{1}$$

where $K$ is the intrinsic matrix of the camera, and the rotation matrix $R_{3\times 3}$ together with the translation matrix $T_{3\times 1}$ forms the extrinsic matrix of the camera.
The weld joint region identification module is used for identifying the weld joint region based on a weld joint identification model of the Faster-RCNN network;
In the weld region identification module, weld identification and positioning are performed on the image information of the binocular camera based on a two-stage target detection algorithm, divided into two processes: weld region identification, and acquisition of the weld start point and start vector. The weld region identification uses a weld identification model based on the Faster-RCNN network (FIG. 2) (reference: https://arxiv.org/abs/2211.02386, model usage: https://aistudio.baidu.com/projectdetail/5058293). The model consists of a feature extraction module, a candidate region generation module, and a target identification and position regression module. The weld region identification model based on the Faster-RCNN network comprises:
the feature extraction module is used for extracting features of the acquired images by adopting a multi-layer convolutional neural network;
the feature extraction module performs feature extraction on the acquired image by adopting a multi-layer convolutional neural network, and comprises 13 convolutional layers, 13 ReLU layers and 4 pooling layers.
The detection target positioning module is used for extracting Positive Anchors as candidate regions by using anchor frames and Softmax classification, so as to complete preliminary detection target positioning;
and the detection frame position adjustment module is used for classifying targets in the candidate region and adjusting the positions of the target detection frames.
The target identification and position regression module classifies the targets in the candidate regions and adjusts the positions of the target detection frames, where the activation function of Softmax is:

$$S_i = \frac{e^{z_i}}{\sum_{j=1}^{n} e^{z_j}}$$

and satisfies

$$\sum_{i=1}^{n} S_i = 1$$
The weld endpoint recognition module is used for extracting the weld start point and start vector of the weld region based on the rotating frame target detection model of the PP-YOLOE-R network.
The weld endpoint recognition module preferably comprises:
the classification labeling module is used for classifying the welds in the weld region image and the weldment base plate edges, and marking them with rotating frames;
for acquiring the weld start point and start vector, the welds and weldment base plate edges in the weld region image are classified by the rotating frame target detection model based on the PP-YOLOE-R network (FIG. 3) and marked with rotating frames;
the edge line extraction module is used for connecting the midpoints of the short sides of each rotating frame to complete the extraction of the corresponding weld line and base plate edge line (FIG. 4);
the weld start point marking module is used for marking the pixel value corresponding to the intersection point of the extracted weld and base plate edge line as the weld start point; the weld start point is the starting point of the start vector, the end point of the start vector is the midpoint of the short side at the far end of the weld rotating frame, and the start vector is obtained after calculating the pixel value of this end point.
In the weld start point marking module, the imaging models of the start point P in the left and right cameras are respectively:

$$Z_{c1}\begin{bmatrix} u_{p1} \\ v_{p1} \\ 1 \end{bmatrix} = M_{left}\begin{bmatrix} X_p \\ Y_p \\ Z_p \\ 1 \end{bmatrix} \tag{2} \qquad Z_{c2}\begin{bmatrix} u_{p2} \\ v_{p2} \\ 1 \end{bmatrix} = M_{right}\begin{bmatrix} X_p \\ Y_p \\ Z_p \\ 1 \end{bmatrix} \tag{3}$$

where $(u_{p1}, v_{p1}, 1)$ and $(u_{p2}, v_{p2}, 1)$ are the homogeneous pixel coordinates of $P_1$ and $P_2$ in the left and right camera images, $M_{left}$ and $M_{right}$ are the projection matrices of the left and right cameras, $(X_p, Y_p, Z_p, 1)^T$ is the homogeneous coordinate of point P in the world coordinate system, and $Z_{c1}$, $Z_{c2}$ are the depth scale factors;

the imaging models of the start vector end point Q in the left and right cameras are respectively:

$$Z_{c1}\begin{bmatrix} u_{q1} \\ v_{q1} \\ 1 \end{bmatrix} = M_{left}\begin{bmatrix} X_q \\ Y_q \\ Z_q \\ 1 \end{bmatrix} \tag{4} \qquad Z_{c2}\begin{bmatrix} u_{q2} \\ v_{q2} \\ 1 \end{bmatrix} = M_{right}\begin{bmatrix} X_q \\ Y_q \\ Z_q \\ 1 \end{bmatrix} \tag{5}$$

where $(u_{q1}, v_{q1}, 1)$ and $(u_{q2}, v_{q2}, 1)$ are the homogeneous pixel coordinates of $Q_1$ and $Q_2$ in the left and right camera images, and $(X_q, Y_q, Z_q, 1)^T$ is the homogeneous coordinate of Q in the world coordinate system.
And the three-dimensional positioning module is used for respectively carrying out pixel matching on the welding seam starting point and the starting vector ending point which are obtained by the binocular camera to obtain a three-dimensional starting point and a starting vector of the welding seam.
In the three-dimensional positioning module, combining equations (1), (2) and (3), the three-dimensional start point P is solved by least squares:

$$P = (A_p^T A_p)^{-1} A_p^T b_p$$

where $A_p$ and $b_p$ collect the four linear equations in $(X_p, Y_p, Z_p)$ obtained by eliminating the scale factors $Z_{c1}$ and $Z_{c2}$ from (2) and (3). Combining equations (1), (4) and (5), the start vector end point Q is solved by least squares in the same way:

$$Q = (A_q^T A_q)^{-1} A_q^T b_q$$

The three-dimensional start vector is:

$$\vec{v} = Q - P = (X_q - X_p,\ Y_q - Y_p,\ Z_q - Z_p)^T$$
And the pose adjusting unit 2 is used for performing real-time, synchronous weld fine positioning and weld tracking on the image information of the 3D line laser camera at the robot end, the weld position points obtained by the fine positioning being used to adjust the welding gun pose in real time.
In the pose adjusting unit 2, the weld fine positioning specifically uses the 3D line laser camera to acquire the weld point cloud, realizes point cloud line segmentation via the RANSAC algorithm, then performs line fitting by spatial least squares, and finally computes the weld feature point as the intersection of the fitted lines.
Obtaining weld position points through the weld fine positioning so as to adjust the welding gun pose in real time comprises:
the hand-eye calibration module is used for calibrating the hand-eye of the 3D line laser camera and the robot;
The obtained weld position points must be converted into robot trajectory points on the basis of the hand-eye calibration between the 3D line laser camera and the robot. The hand-eye calibration matrix $T_{TS}$, from the camera coordinate system S to the robot tool coordinate system T, is:

$$T_{TS} = \begin{bmatrix} R_{3\times 3} & t_{3\times 1} \\ 0 & 1 \end{bmatrix}$$

where $R_{3\times 3}$ is a 3×3 rotation matrix and $t_{3\times 1}$ is a 3×1 translation vector;

the transformation from the 3D line laser camera coordinate system to the robot coordinate system is then:

$$P_R = T_{RT}\, T_{TS}\, P_S$$

where $P_R$ is the homogeneous coordinate value in the robot coordinate system, $P_S$ is the homogeneous coordinate value of the weld position point in the 3D line laser camera coordinate system, and $T_{RT}$ is the transformation matrix from the robot tool coordinate system to the robot coordinate system.
The weld joint position point acquisition module is used for acquiring a first laser stripe point cloud of a weld joint through a 3D line laser camera, performing point cloud straight line segmentation by using a RANSAC algorithm (https://kns.cnki.net/kcms2/article/abstractv=UQzSFoOd3SfaN7h_TG_BrRKczHdneP4RO_ScjjUOaqGoDHvtA7hHD9F0--89ZoxQYavFIaL4kjkS90eUWGHNhU2FcQIp8ghx6oE-rL7TtuTMEbBngnhla0JNb4VUkDy0o7xo3hV4b-h-nZfiAbajPg==&uniplatform=NZKPT&language=CHS article 3.2.1 section (1)), further fitting a segmentation straight line by using a space straight line least square method (https://kns.cnki.net/kcms2/article/abstractv=UQzSFoOd3SfaN7h_TG_BrRKczHdneP4RO_ScjjUOaqGoDHvtA7hHD9F0--89ZoxQYavFIaL4kjkS90eUWGHNhU2FcQIp8ghx6oE-rL7TtuTMEbBngnhla0JNb4VUkDy0o7xo3hV4b-h-nZfiAbajPg==&uniplatform=NZKPT&language=CHS article 3.2.1 section (2), and then obtaining a weld joint position point 1 of the current laser stripe point cloud after the intersection point is found by extending the space straight line;
The weld position point traversing module is used for moving the robot linearly a small distance along the three-dimensional start vector direction to obtain the second laser stripe point cloud of the weld; weld position point 2 of the second laser stripe point cloud is obtained by the method of the weld position point acquisition module;
and the welding module is used for repeating the operation of the weld position point traversing module until the welding gun approaches weld position point 1, planning the robot trajectory with the obtained weld position points, starting the welding gun, and moving the robot along the planned trajectory while synchronously acquiring laser stripe point clouds, calculating weld position points and re-planning the robot trajectory, until the last weld position point is welded and the welding operation is completed.
And the welding operation unit 3 is used for controlling the robot, according to the spatial positioning information from the rough positioning unit, to start the arc at the weld start point, weld along the start vector direction according to the operation of the pose adjusting unit, and extinguish the arc at the final weld position point.
In the embodiment of the invention, weld identification and positioning are performed on the image information of the binocular camera based on the two-stage target detection algorithm, which removes the need to set the weld start point manually in tracking welding and makes the device applicable to the welding tasks of various workpieces. In addition, the embodiment of the invention obtains high-precision weld position points through RANSAC point cloud line segmentation and spatial least-squares line fitting, and real-time trajectory planning then allows the system to better cope with problems such as workpiece thermal deformation and clamping loosening and displacement during welding.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.
Claims (6)
1. A method of controlling a welding operation of a robotic end welding gun, the method comprising the steps of:
S1, performing weld rough positioning processing on image information of a binocular camera consisting of a left camera and a right camera above a welding working area to obtain a three-dimensional starting point and a starting vector of a weld;
S2, performing real-time and synchronous welding seam precise positioning and welding seam tracking on image information of a 3D line laser camera at the tail end of the robot, and performing real-time adjustment on the welding gun pose by obtaining a welding seam position point through the welding seam precise positioning;
S3, according to the spatial positioning information from step S1, controlling the robot to start the arc at the weld start point, weld along the start vector direction while adjusting per step S2, and extinguish the arc at the final weld position point;
the step S1 specifically comprises the following steps:
S1-1, calibrating the mutual positions of a left camera and a right camera and internal parameters and distortion coefficients of the cameras;
S1-2, recognizing a weld joint region based on a weld joint recognition model of a Faster-RCNN network;
S1-3, extracting a weld joint starting point and a starting vector of a weld joint region based on a rotating frame target detection model of a PP-YOLOE-R network;
S1-4, respectively performing pixel matching on a welding seam starting point and a starting vector ending point which are obtained by a binocular camera to obtain a three-dimensional starting point and a starting vector of the welding seam;
In step S2, obtaining the weld position point through the weld fine positioning to adjust the welding gun pose in real time includes the following steps:
S2-1, performing hand-eye calibration on the 3D line laser camera and the robot;
S2-2, acquiring the first laser stripe point cloud of the weld through the 3D line laser camera, performing point cloud line segmentation with the RANSAC algorithm, fitting the segmented lines by the spatial line least squares method, and then extending the fitted lines and solving their intersection to obtain weld position point 1 of the current laser stripe point cloud;
S2-3, controlling the robot to perform linear movement along the three-dimensional initial vector direction by a small distance, obtaining a second laser stripe point cloud of the welding line, and repeating the step S2-2 to obtain a welding line position point 2 of the second laser stripe point cloud;
S2-4, repeating step S2-3 until the welding gun approaches weld position point 1, planning the robot trajectory with the obtained weld position points, starting the welding gun, and moving the robot along the planned trajectory while synchronously acquiring laser stripe point clouds, calculating weld position points and re-planning the robot trajectory, until the last weld position point is welded and the welding operation is completed.
2. The method of claim 1, wherein in step S1-2, the Faster-RCNN network-based weld identification model identifies a weld region comprising the steps of:
extracting features of the acquired images by adopting a multi-layer convolutional neural network;
Extracting Positive Anchors as candidate regions by using anchor frames and Softmax classification, and completing preliminary detection target positioning;
classifying targets in the candidate areas, and adjusting the positions of target detection frames;
In step S1-3, the method for extracting the weld joint starting point and the starting vector of the weld joint region by using the rotating frame target detection model based on the PP-YOLOE-R network comprises the following steps:
Classifying the welds in the weld region image and the weldment base plate edges, and marking them with rotating frames;
Connecting the midpoints of the short sides of each rotating frame to extract the corresponding weld line and base plate edge line;
Calculating the pixel values corresponding to the intersection point of the extracted weld line and base plate edge line, and marking it as the weld start point; the weld start point is the starting point of the start vector, the end point of the start vector is the midpoint of the short side at the far end of the weld rotating frame, and the start vector is obtained after calculating the pixel value of this end point.
3. The method according to claim 1, wherein in step S2, the weld tracking is specifically: the poses of adjacent weld points are sent to the robot in real time, and the robot is controlled to move between adjacent weld points by linear interpolation, so as to guide the welding gun to track the weld.
4. An apparatus for controlling a welding operation of a robot tip welding gun, the apparatus comprising:
the rough positioning unit is used for performing weld rough positioning processing on image information of a binocular camera consisting of a left camera and a right camera above the welding working area to obtain a three-dimensional starting point and a starting vector of a weld;
the pose adjusting unit is used for carrying out real-time and synchronous welding seam precise positioning and welding seam tracking on the image information of the 3D line laser camera at the tail end of the robot, and carrying out real-time adjustment on the pose of the welding gun by obtaining a welding seam position point through the welding seam precise positioning;
a welding operation unit for controlling the robot, according to the spatial positioning information from the rough positioning unit, to start the arc at the weld start point, weld along the start vector direction according to the operation of the pose adjustment unit, and extinguish the arc at the final weld position point;
the coarse positioning unit includes:
The calibration module is used for calibrating the mutual positions of the left camera and the right camera and the internal parameters and distortion coefficients of the cameras;
the weld joint region identification module is used for identifying the weld joint region based on a weld joint identification model of the Faster-RCNN network;
the welding seam endpoint recognition module is used for extracting a welding seam starting point and a starting vector of a welding seam region based on a rotating frame target detection model of the PP-YOLOE-R network;
the three-dimensional positioning module is used for respectively carrying out pixel matching on a welding seam starting point and a starting vector ending point which are obtained by the binocular camera to obtain a three-dimensional starting point and a starting vector of the welding seam;
in the pose adjustment unit, obtaining the weld position point through the weld fine positioning to adjust the welding gun pose in real time comprises:
the hand-eye calibration module is used for calibrating the hand-eye of the 3D line laser camera and the robot;
The weld joint position point acquisition module is used for acquiring a first laser stripe point cloud of a weld joint through a 3D line laser camera, performing point cloud straight line segmentation by using a RANSAC algorithm, fitting a segmentation straight line by using a space straight line least square method, and then obtaining a weld joint position point 1 of the current laser stripe point cloud after extending the space straight line to obtain an intersection point;
The welding seam position point traversing module is used for controlling the robot to perform linear movement along the three-dimensional initial vector direction by a small distance to obtain a second laser stripe point cloud of the welding seam, and repeating the operation of the welding seam position point obtaining module to obtain a welding seam position point 2 of the second laser stripe point cloud;
and the welding module is used for repeating the operation of the weld position point traversing module until the welding gun approaches weld position point 1, planning the robot trajectory with the obtained weld position points, starting the welding gun, and moving the robot along the planned trajectory while synchronously acquiring laser stripe point clouds, calculating weld position points and re-planning the robot trajectory, until the last weld position point is welded and the welding operation is completed.
5. The apparatus of claim 4, wherein in the weld region identification module, the Faster-RCNN network-based weld identification model identifies a weld region comprising:
the feature extraction module is used for extracting features of the acquired images by adopting a multi-layer convolutional neural network;
The detection target positioning module is used for classifying and extracting Positive Anchors by utilizing the anchor frame and the Softmax to serve as candidate areas so as to finish preliminary detection target positioning;
The detection frame position adjustment module is used for classifying targets in the candidate area and adjusting the positions of the target detection frames;
in the weld endpoint recognition module, the extraction of the weld start point and start vector of the weld region by the rotating frame target detection model based on the PP-YOLOE-R network comprises:
a classification labeling module for classifying the welds in the weld region image and the weldment base plate edges, and marking them with rotating frames;
an edge line extraction module for connecting the midpoints of the short sides of each rotating frame to extract the corresponding weld line and base plate edge line;
a weld start point marking module for marking the pixel value corresponding to the intersection point of the extracted weld and base plate edge line as the weld start point; the weld start point is the starting point of the start vector, the end point of the start vector is the midpoint of the short side at the far end of the weld rotating frame, and the start vector is obtained after calculating the pixel value of this end point.
6. The apparatus according to claim 4, wherein in the pose adjustment unit, the weld tracking is specifically: the poses of adjacent weld points are sent to the robot in real time, and the robot is controlled to move between adjacent weld points by linear interpolation, so as to guide the welding gun to track the weld.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410135093.2A CN117885096B (en) | 2024-01-31 | 2024-01-31 | Method and device for controlling welding operation of robot end welding gun |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410135093.2A CN117885096B (en) | 2024-01-31 | 2024-01-31 | Method and device for controlling welding operation of robot end welding gun |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117885096A CN117885096A (en) | 2024-04-16 |
CN117885096B true CN117885096B (en) | 2024-08-20 |
Family
- Family ID: 90644254
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410135093.2A Active CN117885096B (en) | 2024-01-31 | 2024-01-31 | Method and device for controlling welding operation of robot end welding gun |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117885096B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118485881B (en) * | 2024-07-12 | 2024-10-25 | 杭州申邦科技有限公司 | Special operation identification method based on visual detection |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111694011A (en) * | 2020-06-19 | 2020-09-22 | 安徽卡思普智能科技有限公司 | Road edge detection method based on data fusion of camera and three-dimensional laser radar |
CN113591810A (en) * | 2021-09-28 | 2021-11-02 | 湖南大学 | Vehicle target pose detection method and device based on boundary tight constraint network and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110945566A (en) * | 2018-08-01 | 2020-03-31 | 深圳市大疆创新科技有限公司 | Image registration method, device, computer system and movable equipment |
KR102602401B1 (en) * | 2021-03-17 | 2023-11-16 | 고려대학교 세종산학협력단 | Method and Apparatus for Suppression of Non-maximum using IOU-Prediction Value of PP-YOLO technique and Size and Ratio of Detection box |
CN114571153B (en) * | 2022-04-07 | 2023-10-10 | 福州大学 | A method of weld seam identification and robot weld seam tracking based on 3D point cloud |
Also Published As
Publication number | Publication date |
---|---|
CN117885096A (en) | 2024-04-16 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |