Background
A vision-guided robot generally refers to a robot arm provided with an image sensor, such as a charge-coupled device (CCD), on its end effector, giving the arm an eye-like capability: when the image sensor determines a workpiece position, a robot arm controller moves the end effector to that position to pick up or place an object.
However, before such pick-and-place operations can be performed, the vision-guided robot must first undergo a calibration operation so that the controller can store the coordinate offset between the end effector and the lens of the image sensor.
In the conventional calibration technique for vision-guided robot systems, the calibration target is a bitmap. Since the bitmap is a regular pattern with no directionality, the user must designate three feature points on it in sequence. The calibration personnel first move the robot arm to a suitable height so that the camera can capture the complete bitmap image; this position is the image calibration point. The user then enters into the image processing software the image coordinates of the three feature points on the bitmap image, together with the real-world center-to-center distances between the points, and the software computes the coordinate transformation from the image coordinate system to the real-world coordinate system, thereby defining the real-world X-Y coordinate system.
After this image calibration procedure, the calibration personnel must also move the robot arm so that the working point of its operating tool visits the three feature points in sequence, recording the robot arm coordinate value at each feature point. Once this is completed, the robot arm controller automatically computes and defines the robot arm's base coordinate system from the recorded coordinate values; the base coordinate system then coincides with the real-world coordinate system defined in the image processing software. Consequently, when the image processing software analyzes an image and converts an object's position into real-world coordinates, those coordinates can be transmitted directly to the robot arm without further conversion.
However, the conventional vision-guided robot calibration technique depends entirely on manual work, making the procedure time-consuming and error-prone. In addition, whether the operating tool's working point has been moved exactly onto each feature point is confirmed visually by the calibration staff, so different staff may produce different calibration results and visual errors may be introduced.
In the related art, US 6,812,665 describes an off-line relative calibration method that compensates the error between the tool center point (TCP) and the workpiece to create an accurate machining path. However, the robot arm must know the profile parameters of a standard workpiece in advance to perform the correction, and the error between the current workpiece and the standard workpiece parameters, obtained from force-feedback or displacement sensors, is compensated during online operation.
US 7,019,825 describes a hand-eye calibration method in which a camera mounted at the end of a robot arm acquires images of at least two workpieces: the arm moves to obtain at least two images, and the rotation and translation vectors between the arm and the camera are computed through a projection-invariant description. However, at least two workpiece images must be acquired for the projection-invariant computation, and the imaged workpieces must provide sufficient edge information; otherwise an optimization computation is required, which is time-consuming and may not yield good results.
Likewise, U.S. Patent Publication No. US 2005/0225278 A1 provides a measuring system that determines how the robot arm should move so that the tool center point, as projected on a light-receiving surface, reaches a predetermined point on that surface; the robot is moved accordingly and the arm positions are stored to determine the position of the tool center point relative to the tool mounting surface of the robot. In this image-based correction method, the robot arm drives the center point of the correction tool to the center of the displayed alignment image, and that center is used as the basis for computing the common coordinate system. The manual checking process is therefore complicated and time-consuming.
Disclosure of Invention
The present invention is directed to a calibration method for a vision-guided robot arm that saves time and reduces errors in calibration.
Therefore, the calibration method for a vision-guided robot arm provided by the invention is used with a robot arm having a base; the end of the robot arm is provided with a flange surface; the robot arm is electrically connected to a controller capable of inputting, outputting, storing, processing, and displaying data; the controller prestores a base coordinate system and a flange coordinate system, the base coordinate system being a coordinate space formed by mutually perpendicular X, Y, and Z axes and having a base coordinate origin; the robot arm has a working range; the flange coordinate system is a coordinate space formed by mutually perpendicular X1, Y1, and Z1 axes and has a flange coordinate origin; an operating tool is mounted on the flange surface and has an operating tool center point; the controller sets an operating tool coordinate system, which is a coordinate space formed by mutually perpendicular X2, Y2, and Z2 axes and has an operating tool coordinate origin located at the operating tool center point; an image sensor is mounted on the flange surface and electrically connected to the controller; the image sensor contains an image sensing chip having an image sensing plane; the controller sets an image sensor first coordinate system, which is a coordinate space formed by mutually perpendicular X3, Y3, and Z3 axes, the X3Y3 plane formed by the X3 and Y3 axes being required to be parallel to the image sensing plane of the image sensing chip; the image sensor first coordinate system has an image sensor first coordinate origin; the user can operate the controller to select the flange coordinate system, the operating tool coordinate system, or the image sensor first coordinate system as a current coordinate system, i.e., the coordinate system currently in use. The method is characterized by the following steps:

A) Setting the operating conditions: setting, at the controller, a calibration height and first, second, third, and fourth calibration coordinate points under the base coordinate system.

B) Placing a calibration target: placing a calibration target, which has a positioning mark, within the working range of the robot arm.

C) Moving the operating tool center point: selecting the operating tool coordinate system as the current coordinate system and operating the robot arm to move the operating tool so that the operating tool center point is moved onto the positioning mark; the controller stores a current position coordinate under the base coordinate system.

D) Moving the image sensor: selecting the image sensor first coordinate system as the current coordinate system and adding the calibration height; the controller controls the robot arm to move the image sensor so that the image sensor first coordinate origin is moved to a calibration reference position coordinate, which lies directly above the positioning mark; only their Z-axis coordinate values differ, by exactly the calibration height.

E) Analyzing the positioning mark image: the image sensor captures a positioning image, i.e., an image containing the positioning mark; the controller sets a positioning image center on the positioning image through image analysis software and analyzes the image; the software determines the position of the positioning mark in the positioning image relative to the positioning image center, whereby the controller obtains a positioning mark image coordinate.

F) Calibrating image distance against real distance: operating the robot arm to move the image sensor so that the image sensor first coordinate origin visits the first to fourth calibration coordinate points; at each of these points the image sensor captures a first, second, third, or fourth image, respectively, and the controller analyzes the four images through the image analysis software to obtain first, second, third, and fourth calibration image coordinates of the positioning mark in the respective images.

G) Calculating the image calibration data: given the coordinate values of the first to fourth calibration coordinate points under the base coordinate system and the first to fourth calibration image coordinates, calculating image calibration data that expresses the conversion between distances in the image and distances in the real world.

H) Calculating the image sensor coordinate system compensation amount: calculating an image sensor first coordinate system compensation amount from the positioning mark image coordinate and the image calibration data, thereby compensating the error between a position in the image sensor's image and the position of the operating tool.
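As an overview, the following sketch strings steps C) through H) together in Python. The controller and image-analysis interfaces (`move_camera_to`, `grab_image`, `locate_mark`) are hypothetical callables standing in for the robot arm controller and the image analysis software; they are assumptions for illustration, not part of the disclosed apparatus.

```python
# A minimal sketch of steps C)-H); the robot/camera callables are stubs.
import numpy as np

def calibrate(move_camera_to, grab_image, locate_mark, psp, z_cal, cal_points):
    """psp: current position coordinate stored in step C) (base frame).
    z_cal: calibration height from step A).
    cal_points: the four calibration coordinate points P1..P4 (same Z)."""
    # D) move the camera origin to Pcp, directly above the positioning mark
    pcp = psp + np.array([0.0, 0.0, z_cal])
    move_camera_to(pcp)
    # E) positioning-mark offset from the image center -> Xcs (pixels)
    xcs = locate_mark(grab_image())
    # F) image the mark from P1..P4 and record its image coordinates
    img_pts = []
    for p in cal_points:
        move_camera_to(p)
        img_pts.append(locate_mark(grab_image()))
    # G) fit the affine map A (a homogeneous row of ones carries translation)
    x_c = np.vstack([np.array(img_pts).T, np.ones(len(cal_points))])
    x_r = np.asarray(cal_points)[:, :2].T
    a_mat = x_r @ np.linalg.pinv(x_c)
    # H) compensation amount: pixel offset mapped through the linear part of A
    t_comp = a_mat[:, :2] @ xcs
    return a_mat, t_comp
```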
With this method, the vision-guided robot arm calibration method provided by the invention is not limited to a specific calibration target such as a bitmap; the calibration operation can be performed simply by designating a positioning mark on the calibration target, which saves calibration time. In addition, coordinate positions are determined by image analysis, reducing visual errors caused by human judgment.
It should be noted that, in step A), the Z-axis components of the first to fourth calibration coordinate points are all the same; that is, the points lie at the same height.
Further, the number of calibration coordinate points must be four or more. However, the more coordinate points used for calibration, the larger the amount of computation, the longer the computation time, and the higher the computation cost; an appropriate number of calibration points is therefore selected, and in the embodiment described here four-point calibration is performed.
In step G), the image calibration data is calculated as follows. The coordinates of the first to fourth calibration coordinate points are known as $X_{ri} = [x_{ri}\ y_{ri}]^T$, $i = 1, \dots, 4$, and the corresponding first to fourth calibration image coordinates are $X_{ci} = [x_{ci}\ y_{ci}]^T$, $i = 1, \dots, 4$. Collected as matrices,

$$X_R = \begin{bmatrix} x_{r1} & x_{r2} & x_{r3} & x_{r4} \\ y_{r1} & y_{r2} & y_{r3} & y_{r4} \end{bmatrix}, \qquad X_C = \begin{bmatrix} x_{c1} & x_{c2} & x_{c3} & x_{c4} \\ y_{c1} & y_{c2} & y_{c3} & y_{c4} \\ 1 & 1 & 1 & 1 \end{bmatrix},$$

where the matrix $X_R$ is formed from the first to fourth calibration coordinate points under the base coordinate system, and the matrix $X_C$ is formed from the first to fourth calibration image coordinates in image space, augmented with a row of ones so that the affine transformation can carry a translation term. The two matrices satisfy the relation

$$X_R = A X_C,$$

where the matrix $A$ is an affine transformation matrix between the two planar coordinate systems. By computing the Moore-Penrose pseudo-inverse $X_C^{+}$ of the matrix $X_C$, the matrix $A$ can be calculated, i.e.:

$$A = X_R X_C^{+}.$$

The pseudo-inverse $X_C^{+}$ can be solved by singular value decomposition (SVD). The matrix $A$ is the image calibration data and expresses the conversion relation between distances in the image and distances in the real world.
In step H), the image sensor first coordinate system compensation amount can be set into the controller to generate an image sensor second coordinate system.
Detailed Description
For a detailed description of the technical features of the present invention, reference is made to the following preferred embodiment taken in conjunction with the accompanying drawings.

Referring to Figs. 1 to 4, a vision-guided robot calibration method according to a preferred embodiment of the present invention is applied to a robot arm 10, which is a six-axis robot arm having a base 11. The robot arm 10 has a flange surface 12 at its end for mounting objects. The robot arm 10 is electrically connected to a controller 13, which can input, output, store, process, and display data. When the robot arm 10 leaves the factory, the controller 13 already stores a base coordinate system and a flange coordinate system. The base coordinate system is a coordinate space formed by mutually perpendicular X, Y, and Z axes and has a base coordinate origin, which in this embodiment is located on the base 11, although it may be placed elsewhere. The robot arm 10 has a working range in the base coordinate system. The flange coordinate system is a coordinate space formed by mutually perpendicular X1, Y1, and Z1 axes and has a flange coordinate origin, which in this embodiment is located at the geometric center of the flange surface 12. The relationship between the flange coordinate system and the base coordinate system is described by parameters x1, y1, z1, a1, b1, c1, wherein:
x1: the distance relationship between the X1 axis of the flange coordinate system and the X axis of the base coordinate system;
y1: the distance relationship between the Y1 axis of the flange coordinate system and the Y axis of the base coordinate system;
z1: the distance relationship between the Z1 axis of the flange coordinate system and the Z axis of the base coordinate system;
a1: the rotation angle of the X1 axis of the flange coordinate system about the X axis of the base coordinate system;
b1: the rotation angle of the Y1 axis of the flange coordinate system about the Y axis of the base coordinate system;
c1: the rotation angle of the Z1 axis of the flange coordinate system about the Z axis of the base coordinate system.
An operating tool 15 is mounted on the flange surface 12; in the present embodiment the operating tool 15 is exemplified by a suction cup, but it is not limited thereto. The operating tool 15 has an operating tool center point (TCP). The user sets an operating tool coordinate system in the controller 13, which is a coordinate space formed by mutually perpendicular X2, Y2, and Z2 axes and has an operating tool coordinate origin located at the operating tool center point TCP. The relationship between the operating tool coordinate system and the flange coordinate system is described by parameters x2, y2, z2, a2, b2, c2, wherein:
x2: the distance relationship between the X2 axis of the operating tool coordinate system and the X1 axis of the flange coordinate system;
y2: the distance relationship between the Y2 axis of the operating tool coordinate system and the Y1 axis of the flange coordinate system;
z2: the distance relationship between the Z2 axis of the operating tool coordinate system and the Z1 axis of the flange coordinate system;
a2: the rotation angle of the X2 axis of the operating tool coordinate system about the X1 axis of the flange coordinate system;
b2: the rotation angle of the Y2 axis of the operating tool coordinate system about the Y1 axis of the flange coordinate system;
c2: the rotation angle of the Z2 axis of the operating tool coordinate system about the Z1 axis of the flange coordinate system.
An image sensor 17, in this embodiment a charge-coupled device (CCD), is mounted on the flange surface 12 and electrically connected to the controller 13; the image sensor 17 is used to capture images. It should be noted that the image sensor 17 contains an image sensing chip 171, which has an image sensing plane 171a (not shown). The user sets an image sensor first coordinate system in the controller 13, which is a coordinate space formed by mutually perpendicular X3, Y3, and Z3 axes; the X3Y3 plane formed by the X3 and Y3 axes must be parallel to the image sensing plane 171a of the image sensing chip 171. The image sensor first coordinate system has an image sensor first coordinate origin, which in this embodiment is located on the image sensing plane 171a. The relationship between the image sensor first coordinate system and the flange coordinate system is described by parameters x3, y3, z3, a3, b3, c3, wherein:
x3: the distance relationship between the X3 axis of the image sensor first coordinate system and the X1 axis of the flange coordinate system;
y3: the distance relationship between the Y3 axis of the image sensor first coordinate system and the Y1 axis of the flange coordinate system;
z3: the distance relationship between the Z3 axis of the image sensor first coordinate system and the Z1 axis of the flange coordinate system;
a3: the rotation angle of the X3 axis of the image sensor first coordinate system about the X1 axis of the flange coordinate system;
b3: the rotation angle of the Y3 axis of the image sensor first coordinate system about the Y1 axis of the flange coordinate system;
c3: the rotation angle of the Z3 axis of the image sensor first coordinate system about the Z1 axis of the flange coordinate system.
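Each of the three relations above (x1..c1, x2..c2, x3..c3) is a translation plus three rotation angles between two coordinate systems. As a sketch of how such a six-parameter relation can be realized numerically, the following builds a 4x4 homogeneous transform; the Z-Y-X composition order and the numeric values are assumptions for illustration only, since the text does not fix a rotation convention.

```python
# Build a rigid transform from six pose parameters (offsets + rotations).
import numpy as np

def pose_to_matrix(x, y, z, a, b, c):
    """4x4 homogeneous transform from offsets (x, y, z) and rotation
    angles (a, b, c), in radians, about the X, Y, and Z axes."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]])
    t = np.eye(4)
    t[:3, :3] = rz @ ry @ rx          # rotate about Z, then Y, then X
    t[:3, 3] = [x, y, z]              # translation part
    return t

# e.g. chain the flange pose (x1..c1) in the base frame with the camera
# pose (x3..c3) in the flange frame; all values here are made up.
base_T_flange = pose_to_matrix(300, 0, 500, 0, 0, np.pi / 2)
flange_T_cam = pose_to_matrix(0, 60, 40, 0, 0, 0)
base_T_cam = base_T_flange @ flange_T_cam
```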
It should be noted that the user can operate the controller 13 to select the flange coordinate system, the operating tool coordinate system, or the image sensor first coordinate system as a current coordinate system, i.e., the coordinate system currently in use. When the user sets a position point under the base coordinate system and selects a current coordinate system, the controller 13 moves the origin of the current coordinate system to that position point and makes the X1Y1, X2Y2, or X3Y3 plane of the current coordinate system parallel to the XY plane of the base coordinate system. For example, when the user selects the operating tool coordinate system as the current coordinate system, the controller 13 controls the robot arm 10 so that the operating tool coordinate origin moves to the position point and the X2Y2 plane formed by the X2 and Y2 axes of the operating tool coordinate system is parallel to the XY plane formed by the X and Y axes of the base coordinate system. Likewise, when the user selects the image sensor first coordinate system as the current coordinate system, the controller 13 controls the robot arm 10 so that the image sensor first coordinate origin moves to the position point and the X3Y3 plane formed by the X3 and Y3 axes of the image sensor first coordinate system is parallel to the XY plane of the base coordinate system.
As shown in Fig. 3, the calibration method for the vision-guided robot arm provided by the present invention comprises the following steps:
A) Setting the operating conditions
The user sets, at the controller 13, a calibration height Zca1 and a first calibration coordinate point P1, a second calibration coordinate point P2, a third calibration coordinate point P3, and a fourth calibration coordinate point P4 under the base coordinate system. It should be noted that the Z-axis components of the first to fourth calibration coordinate points P1-P4 are all the same; the points lie at the same height.
B) Placing a calibration target
The user places a calibration target 18 within the working range of the robot arm 10. The calibration target 18 has a positioning mark 181, which in this embodiment is a dot, although it is not limited thereto.
C) Moving the operating tool center point
The operating tool coordinate system is selected as the current coordinate system, and the robot arm 10 is operated to move the operating tool 15 so that the operating tool center point TCP is moved onto the positioning mark 181. The controller 13 stores a current position coordinate Psp under the base coordinate system.
D) Moving the image sensor
The image sensor first coordinate system is selected as the current coordinate system, and the calibration height Zca1 is added. The controller 13 controls the robot arm 10 to move the image sensor 17 so that the image sensor first coordinate origin is moved to a calibration reference position coordinate Pcp located directly above the positioning mark 181. Under the base coordinate system, Pcp differs from the current position coordinate Psp only by the calibration height Zca1 in the Z-axis value; the X-axis and Y-axis components are identical.
E) Analyzing the positioning mark image
The image sensor 17 captures a positioning image, i.e., an image containing the positioning mark 181. The controller 13 sets a positioning image center in the positioning image through image analysis software and analyzes the positioning image; in this embodiment the positioning image center is the geometric center of the positioning image, but it is not limited thereto. The image analysis software determines the position of the positioning mark in the positioning image relative to the positioning image center, whereby the controller 13 obtains a positioning mark image coordinate Xcs.
The aforementioned image analysis software is ordinary, commercially available image analysis software for identifying an object in an image and determining its coordinate position in the image, and it is not described further here.
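As a stand-in for what such software does in step E), the following sketch locates a dark, dot-shaped positioning mark in a grayscale frame by simple intensity thresholding and reports its centroid relative to the geometric image center, yielding Xcs in pixels. The threshold approach is an assumption for illustration; commercial packages use more robust blob or pattern matching.

```python
# Minimal positioning-mark locator: centroid of dark pixels, measured
# from the image center (a simplistic substitute for step E) software).
import numpy as np

def locate_mark(gray, threshold=128):
    """gray: 2-D uint8 array (H x W). Returns (dx, dy) of the dark mark's
    centroid relative to the geometric image center, in pixels."""
    ys, xs = np.nonzero(gray < threshold)   # pixels belonging to the dark dot
    if xs.size == 0:
        raise ValueError("positioning mark not found in image")
    cx, cy = xs.mean(), ys.mean()           # mark centroid
    h, w = gray.shape
    return np.array([cx - (w - 1) / 2.0, cy - (h - 1) / 2.0])

# Synthetic check: a dark dot centered at pixel (400, 300) in a white frame.
frame = np.full((480, 640), 255, dtype=np.uint8)
frame[295:306, 395:406] = 0
print(locate_mark(frame))                   # -> [80.5, 60.5]
```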
F) Calibrating image distance against real distance
The robot arm 10 is operated to move the image sensor 17 so that the image sensor first coordinate origin visits the first to fourth calibration coordinate points P1-P4. At each of these points, the image sensor 17 captures a first image, a second image, a third image, and a fourth image, respectively, and the controller 13 analyzes the four images through the image analysis software to obtain a first calibration image coordinate Xc1, a second calibration image coordinate Xc2, a third calibration image coordinate Xc3, and a fourth calibration image coordinate Xc4 of the positioning mark 181 in the respective images.
G) Calculating the image calibration data
Knowing the coordinate values (real space) of the first to fourth calibration coordinate points P1-P4 under the base coordinate system and the first to fourth calibration image coordinates Xc1, Xc2, Xc3, and Xc4 (image space) of the positioning mark 181 in the four images, the relationship between distances in the image and distances in real space (the base coordinate system) can be calculated to obtain image calibration data. From the image calibration data, the conversion between a distance in the image and a distance in the real world is known.
In the present embodiment, four-point calibration is used as an example, but the invention is not limited to four points; four or more may be used. The more coordinate points used for calibration, the larger the amount of computation, the longer the computation time, and the higher the computation cost; an appropriate number of calibration points is therefore selected, and in this embodiment four-point calibration is performed.
The image calibration data in the present embodiment is calculated as follows, although the invention is not limited to this method.
The coordinates of the first to fourth calibration coordinate points P1-P4 are known as $X_{ri} = [x_{ri}\ y_{ri}]^T$, $i = 1, \dots, 4$, and the corresponding first to fourth calibration image coordinates are $X_{ci} = [x_{ci}\ y_{ci}]^T$, $i = 1, \dots, 4$. Collected as matrices:

$$X_R = \begin{bmatrix} x_{r1} & x_{r2} & x_{r3} & x_{r4} \\ y_{r1} & y_{r2} & y_{r3} & y_{r4} \end{bmatrix}, \qquad X_C = \begin{bmatrix} x_{c1} & x_{c2} & x_{c3} & x_{c4} \\ y_{c1} & y_{c2} & y_{c3} & y_{c4} \\ 1 & 1 & 1 & 1 \end{bmatrix}.$$

The matrix $X_R$ is formed from the first to fourth calibration coordinate points P1-P4 under the base coordinate system, and the matrix $X_C$ is formed from the first to fourth calibration image coordinates in image space, with a row of ones appended so that the affine transformation can carry a translation term. The two matrices satisfy the relation

$$X_R = A X_C,$$

where the matrix $A$ is an affine transformation matrix between the two planar coordinate systems. By computing the Moore-Penrose pseudo-inverse $X_C^{+}$ of the matrix $X_C$, the matrix $A$ can be calculated, i.e.:

$$A = X_R X_C^{+}.$$

The pseudo-inverse $X_C^{+}$ can be solved by singular value decomposition (SVD). The matrix $A$ is the image calibration data and expresses the conversion between distances in the image and distances in the real world.
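As a concrete illustration of step G), the following sketch fits $A$ with numpy. The numeric values of P1-P4 and of the measured image coordinates are invented for the example (constructed so that one pixel corresponds to 0.2 mm plus an offset); `np.linalg.pinv` computes the Moore-Penrose pseudo-inverse via SVD, matching the text. The same least-squares fit works unchanged for more than four non-collinear points.

```python
# Fitting the affine image-calibration matrix A of step G).
# All point values below are invented example data, not from the patent.
import numpy as np

# P1..P4 in the base coordinate system (X, Y only, in mm)
X_R = np.array([[100.0, 200.0, 200.0, 100.0],
                [ 50.0,  50.0, 150.0, 150.0]])      # 2 x 4

# positioning-mark coordinates measured in the four images (pixels),
# with a homogeneous row of ones so A can carry the translation term
X_C = np.array([[120.0, 620.0, 620.0, 120.0],
                [ 80.0,  80.0, 580.0, 580.0],
                [  1.0,   1.0,   1.0,   1.0]])      # 3 x 4

# A = X_R X_C^+ ; np.linalg.pinv computes the Moore-Penrose
# pseudo-inverse by singular value decomposition (SVD)
A = X_R @ np.linalg.pinv(X_C)                       # 2 x 3 affine matrix

print(A)                                 # ~[[0.2, 0, 76], [0, 0.2, 34]]
print(np.allclose(A @ X_C, X_R))         # True: A maps image to base frame
```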
H) Calculating the image sensor first coordinate system compensation amount
The positioning mark image coordinate Xcs and the image calibration data are used to calculate an image sensor first coordinate system compensation amount.
Under ideal conditions, the X2Y2 plane formed by the X2 and Y2 axes of the operating tool coordinate system and the X3Y3 plane formed by the X3 and Y3 axes of the image sensor first coordinate system are both parallel to the XY plane formed by the X and Y axes of the base coordinate system, and the calibration reference position coordinate Pcp differs from the current position coordinate Psp only by the calibration height Zca1, with no difference in the X-axis or Y-axis components. If the conversion between the operating tool coordinate system and the image sensor first coordinate system were ideal, the positioning mark in the positioning image would therefore lie exactly at the positioning image center; that is, the position of the positioning mark 181 in the operating tool coordinate system would coincide with the image center in the image sensor coordinate system. In that case, once the image calibration data (the ratio between a distance in the image and a distance in the real world) is obtained, the user could intuitively operate the controller 13 to control the robot arm 10 and the operating tool 15 through the frames captured by the image sensor 17 together with the image calibration data.
In general, however, the position of the positioning mark 181 in the image deviates from the image center, and an image compensation amount Tcomp is required. Since the positioning mark image coordinate Xcs is the coordinate value of the positioning mark 181 in the positioning image with the positioning image center as origin, Xcs can be converted into the image compensation amount Tcomp, which expresses the error, in the image, to be compensated when converting between the operating tool coordinate system and the image sensor first coordinate system. If the operating tool is to be controlled intuitively through the images captured by the image sensor 17, centered on the positioning mark 181, it is only necessary to add the image compensation amount Tcomp to the images captured by the image sensor 17 so that the positioning mark appears at the center of the frame; the user can then operate the operating tool intuitively through the sensor's frames. The controller 13 requires the image sensor first coordinate system compensation amount to control the movement of the operating tool, thereby compensating the error between a position in the image of the image sensor 17 and the position of the operating tool.
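As a minimal numeric sketch, assuming the example matrix A fitted in the step G) sketch above, the pixel offset Xcs from step E) converts to Tcomp as follows. Because Xcs is an offset measured from the positioning image center rather than an absolute position, only the 2x2 linear part of A applies; the translation column cancels. The Xcs value here is invented for illustration.

```python
# Converting the mark's pixel offset Xcs into the compensation amount
# Tcomp in base-frame units (example numbers, continuing the step G) sketch).
import numpy as np

A = np.array([[0.2, 0.0, 76.0],
              [0.0, 0.2, 34.0]])   # affine matrix from step G), mm per pixel
Xcs = np.array([12.0, -7.5])       # mark offset from the image center, pixels

# offsets transform through the linear part only; the translation
# column of A cancels when two image positions are subtracted
T_comp = A[:, :2] @ Xcs
print(T_comp)                      # [ 2.4 -1.5 ]  (mm)
```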
It should be noted that the image sensor first coordinate system compensation amount can also be set into the controller 13 to generate an image sensor second coordinate system. In this way, the compensation amount need not be added to every image captured by the image sensor 17; instead, when the robot arm 10 moves the image sensor 17, the compensation amount of the image sensor first coordinate system is added directly to the target position of the image sensor 17, which is more convenient for the user.
With this method, the vision-guided robot arm calibration method provided by the invention is not limited to a specific calibration target such as a bitmap; the calibration operation can be performed simply by designating a positioning mark on the calibration target, which saves calibration time. In addition, coordinate positions are determined by image analysis, reducing visual errors caused by human judgment.