
CN110919658B - Robot calibration method based on vision and multi-coordinate system closed-loop conversion - Google Patents


Info

Publication number
CN110919658B
CN110919658B
Authority
CN
China
Prior art keywords
coordinate system
robot
camera
calibration
conversion
Prior art date
Legal status
Active
Application number
CN201911279737.0A
Other languages
Chinese (zh)
Other versions
CN110919658A (en)
Inventor
刘华山
蔡明军
程新
李祥健
应丰糠
陈荣川
夏玮
梁健
江荣鑫
Current Assignee
Donghua University
Original Assignee
Donghua University
Priority date
Filing date
Publication date
Application filed by Donghua University filed Critical Donghua University
Priority to CN201911279737.0A priority Critical patent/CN110919658B/en
Publication of CN110919658A publication Critical patent/CN110919658A/en
Application granted granted Critical
Publication of CN110919658B publication Critical patent/CN110919658B/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning


Abstract

The invention discloses a robot calibration method based on vision and multi-coordinate-system closed-loop conversion. The method avoids the need, in traditional robot calibration, to construct and solve the computationally expensive AX = XB equation, reducing the computational burden. Combining the three coordinate-system conversion closed loops a, b and c improves the accuracy of the conversion relationships between the coordinate systems, and applying nonlinear least-squares optimization to the resulting robot end-position error improves the robot's positioning accuracy.

Description

A robot calibration method based on vision and multi-coordinate-system closed-loop conversion

Technical field

The invention relates to the technical field of robots and image recognition, and in particular to a robot calibration method based on vision and multi-coordinate-system closed-loop conversion.

Background

With the rapid development of big data, artificial intelligence, image recognition and other technologies in the information age, robotics has advanced by leaps and bounds. Industrial robots are simple in structure, highly flexible and have a large workspace, and are widely used in the automotive, logistics, electronics, medical and aerospace fields, among others. Robot positioning accuracy is divided into repeatability and absolute positioning accuracy; present-day robots have high repeatability but low absolute positioning accuracy. Improving the absolute positioning accuracy of robots is therefore an important area of robotics research.

Summary of the invention

The technical problem to be solved by the invention is that the absolute positioning accuracy of an uncalibrated robot is low.

To solve the above technical problem, the technical solution of the invention provides a robot calibration method based on vision and multi-coordinate-system closed-loop conversion, characterised by comprising the following steps:

Step 1: Establish an Eye-to-hand basic model in which the camera is fixed outside the robot body, with a chessboard fixed as a target at the end position of the robot. The robot moves the chessboard within the range in which the camera can capture suitable images, and the chessboard images captured by the camera in different poses are used for camera calibration.

Step 2: Perform hand-target calibration, comprising the following steps:

Step 201: Fix a calibration pen as a target on the robot end, and set nine marker points in space on the basis of the coordinate system of the robot end joint. The robot moves the calibration pen to each marker point in different postures until all nine marker points have been visited. The points D1 to D9 in the robot end-joint coordinate system project to the corresponding points d1 to d9 in the image coordinate system; the conversion from D1 to D9 into d1 to d9 is expressed by a Euclidean transformation, from which the conversion relationship between the image coordinate system and the robot end-joint coordinate system is obtained.

Step 202: Based on the conversion relationship between the image coordinate system and the robot end-joint coordinate system obtained in step 201, derive the conversion relationship Tcm between the camera coordinate system and the robot end-joint coordinate system, completing the hand-eye calibration.

Step 203: Based on the conversion relationship Tcm, combined with the camera parameters obtained from the camera calibration in step 1, derive the conversion relationship Tbc between the target coordinate system and the camera coordinate system.

Step 204: From Tmb = TcmTbc, derive the conversion relationship Tmb between the robot end-joint coordinate system and the target coordinate system, completing the hand-target calibration.
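The composition of homogeneous transforms used in step 204 can be illustrated with a short NumPy sketch. This is an illustration only, with made-up matrix values, not the patent's implementation:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative (made-up) values standing in for Tcm (end-joint <- camera chain)
# and Tbc (camera <- target chain).
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])          # 90 degrees about z
T_cm = make_transform(Rz90, [0.1, 0.0, 0.5])
T_bc = make_transform(np.eye(3), [0.0, 0.2, 0.0])

# Step 204: Tmb = Tcm * Tbc closes the loop between the end-joint
# coordinate system and the target coordinate system.
T_mb = T_cm @ T_bc

point_in_target = np.array([0.0, 0.0, 0.0, 1.0])  # target origin, homogeneous
point_in_joint = T_mb @ point_in_target
```

Because the transforms are homogeneous 4x4 matrices, chaining coordinate systems reduces to matrix multiplication, which is what makes the closed-loop relations of Fig. 3 easy to evaluate.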

Step 3: Combine the pre-calibration data to derive the position of the robot end in the robot base coordinate system.

Step 4: Compare the robot end position obtained above with the expected end position of the robot to obtain the error between them.

Step 5: Establish an error constraint equation from the robot end-position error obtained in step 4, then apply nonlinear least-squares optimization: the local minimum of the error function is found by repeated iterative computation, this local minimum is taken as the optimal solution of the objective function, and the parameter calibration is complete.

Preferably, step 1 specifically comprises the following steps:

Step 101: Keep the Z-axis direction of the calibration-board coordinate system consistent with the Z-axis direction of the flange coordinate system, and vary the pose of the chessboard on the robot end within the range in which the camera can capture suitable images, so that the fixed camera collects multiple chessboard images in different poses.

Step 102: Extract the corner points in the chessboard images, refine the corner positions to sub-pixel accuracy with the cornerSubPix() function in OpenCV, and perform camera calibration to obtain the intrinsic and extrinsic parameter matrices of the camera. The intrinsic parameter matrix of the camera is

$$M=\begin{bmatrix}f_x & 0 & u_0\\ 0 & f_y & v_0\\ 0 & 0 & 1\end{bmatrix}$$

where $f_x$ and $f_y$ denote the focal length and $(u_0, v_0)$ denotes the intersection of the camera optical axis with the image plane. The extrinsic parameter matrix of the camera is

$$\begin{bmatrix}R & T\\ 0^{T} & 1\end{bmatrix}$$

where R denotes the rotation matrix from the world coordinate system to the camera coordinate system, $R=R_xR_yR_z$, with $R_x$, $R_y$, $R_z$ denoting rotations of the camera coordinate system about the x, y and z axes of the world coordinate system, and T denotes the translation from the world coordinate system to the camera coordinate system, $T=[t_x\ t_y\ t_z]$, with $t_x$, $t_y$, $t_z$ denoting translations of the camera coordinate system along the x, y and z axes of the world coordinate system. Then:

The transformation from the camera coordinate system to the world coordinate system is:

$$\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}=\begin{bmatrix}R & T\\ 0^{T} & 1\end{bmatrix}^{-1}\begin{bmatrix}X_c\\ Y_c\\ Z_c\\ 1\end{bmatrix}\qquad(1)$$

In formula (1), $(X_w, Y_w, Z_w)$ denotes a point in the world coordinate system and $(X_c, Y_c, Z_c)$ the corresponding point in the camera coordinate system.

The transformation between the image coordinate system and the world coordinate system is:

$$S\begin{bmatrix}x\\ y\\ 1\end{bmatrix}=M\begin{bmatrix}R & T\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}\qquad(2)$$

where S denotes a scale factor. The point (X, Y, Z) in the world coordinate system corresponding to a point (x, y) in the image coordinate system is computed from formula (2).

Preferably, step 3 comprises the following steps:

Step 301: Perform pre-calibration with fixed points in space. In the robot end-joint coordinate system, move the calibration pen in turn to the nine set marker points; at each marker point the robot does not need to change its posture. The camera fixed outside the robot photographs the pen, and the coordinates P'(X, Y, Z) of the robot end at each marker point are recorded at the same time.

Step 302: Preprocess the calibration-pen images captured by the camera at the different marker points, then process them further to obtain the pre-calibrated Tcm(1~9), where Tcm(1~9) denotes nine pre-calibrated conversion matrices between the camera coordinate system and the robot end-joint coordinate system. Combining Tcm(1~9) with the recorded robot end coordinates yields nine conversion relationships Tcw(1~9) between the camera coordinate system and the robot's world coordinate system; the average of the nine sets of data is taken as the final conversion relationship Tcw between the camera coordinate system and the robot's world coordinate system.
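The averaging of the nine estimated transforms in step 302 can be sketched in NumPy. The patent does not specify how the rotation parts are averaged; the sketch below makes the common assumption of projecting the element-wise mean of the rotation blocks back onto SO(3) with an SVD, while translations are averaged element-wise. All numerical values are made up:

```python
import numpy as np

def average_transforms(transforms):
    """Average a list of 4x4 homogeneous transforms.

    Translations are averaged element-wise; the element-wise mean of the
    rotation blocks is projected back onto SO(3) via SVD (orthogonal
    Procrustes), since a plain mean of rotation matrices is generally
    not itself a rotation matrix.
    """
    Rs = np.stack([T[:3, :3] for T in transforms])
    ts = np.stack([T[:3, 3] for T in transforms])
    U, _, Vt = np.linalg.svd(Rs.mean(axis=0))
    R = U @ Vt
    if np.linalg.det(R) < 0:          # keep a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    T_avg = np.eye(4)
    T_avg[:3, :3] = R
    T_avg[:3, 3] = ts.mean(axis=0)
    return T_avg

# Nine noisy copies of the same (made-up) transform, standing in for Tcw(1~9).
rng = np.random.default_rng(0)
base = np.eye(4)
base[:3, 3] = [1.0, 2.0, 3.0]
samples = []
for _ in range(9):
    T = base.copy()
    T[:3, 3] += rng.normal(scale=1e-3, size=3)   # small translation noise
    samples.append(T)

T_cw = average_transforms(samples)
```

The SVD projection guarantees that the averaged matrix is still a valid rotation, which a naive element-wise mean would not.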

Step 303: Combining the conversion relationship Tcw obtained by pre-calibration, the conversion relationship Tbc between the target coordinate system and the camera coordinate system, and the Tmb obtained by hand-target calibration, the robot end position can be derived; comparing it with the expected manipulator end position gives the error.

Compared with the prior art, the invention has the following advantages:

The robot calibration method based on vision and multi-coordinate-system closed-loop conversion provided by the invention avoids the traditional need to construct and solve the computationally expensive AX = XB equation, reducing the computational burden. Combining the three coordinate-system conversion closed loops a, b and c improves the accuracy of the conversion relationships between the coordinate systems, and applying nonlinear least-squares optimization to the resulting robot end-position error improves the robot's positioning accuracy.

Brief description of the drawings

Fig. 1 is a first structural schematic diagram of the robot calibration method based on vision and multi-coordinate-system closed-loop conversion;

Fig. 2 is a second structural schematic diagram of the robot calibration method based on vision and multi-coordinate-system closed-loop conversion;

Fig. 3 is a diagram of the conversion relationships among the coordinate systems in the robot calibration method based on vision and multi-coordinate-system closed-loop conversion;

Fig. 4 is a flowchart of the robot calibration method based on vision and multi-coordinate-system closed-loop conversion.

Reference signs:

1-robot body; 2-calibration board; 3-camera; 4-calibration pen; {W}-world coordinate system of the robot; {M}-end-joint coordinate system of the robot; {B}-coordinate system of the calibration board or calibration pen; {C}-camera coordinate system.

Detailed description of embodiments

The invention is further described below with reference to specific embodiments. It should be understood that these embodiments are only intended to illustrate the invention and not to limit its scope. Moreover, after reading the teachings of the invention, those skilled in the art may make various changes or modifications to it, and such equivalent forms likewise fall within the scope defined by the claims appended to this application.

The invention combines a camera with a robot to provide a robot calibration method based on vision and multi-coordinate-system closed-loop conversion. The method avoids the traditional need to construct and solve the computationally expensive AX = XB equation, reducing the computational burden; combining the three coordinate-system conversion closed loops a, b and c improves the accuracy of the conversion relationships between the coordinate systems; and applying nonlinear least-squares optimization to the resulting robot end-position error improves the robot's positioning accuracy.

The robot calibration method based on vision and multi-coordinate-system closed-loop conversion provided by the invention specifically comprises the following steps:

Step 1: Establish an Eye-to-hand basic model in which the camera is fixed outside the robot body, with a chessboard fixed as a target at the robot end, as shown in Fig. 1; the Z axis of the calibration-board coordinate system is kept consistent with the Z axis of the flange coordinate system. The robot moves the chessboard within the range in which the camera can capture suitable images, and camera calibration is performed with the calibration-board images captured in different poses.

In this step, the calibration of the camera's extrinsic and intrinsic parameters specifically comprises the following steps:

(1) By continuously varying the pose of the calibration board at the robot end position, the fixed camera collects multiple chessboard images.

(2) Extract the corner points in the chessboard images, refine the corner positions to sub-pixel accuracy with the cornerSubPix() function in OpenCV, and perform camera calibration to obtain the intrinsic and extrinsic parameter matrices of the camera.

The transformation from the camera coordinate system to the world coordinate system is:

$$\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}=\begin{bmatrix}R & T\\ 0^{T} & 1\end{bmatrix}^{-1}\begin{bmatrix}X_c\\ Y_c\\ Z_c\\ 1\end{bmatrix}\qquad(1)$$

In formula (1), $(X_w, Y_w, Z_w)$ denotes a point in the world coordinate system and $(X_c, Y_c, Z_c)$ the corresponding point in the camera coordinate system;

$$\begin{bmatrix}R & T\\ 0^{T} & 1\end{bmatrix}$$

is the extrinsic parameter matrix of the camera. R denotes the rotation matrix from the world coordinate system to the camera coordinate system, $R=R_xR_yR_z$, with $R_x$, $R_y$, $R_z$ denoting rotations of the camera coordinate system about the x, y and z axes of the world coordinate system; T denotes the translation from the world coordinate system to the camera coordinate system, $T=[t_x\ t_y\ t_z]$, with $t_x$, $t_y$, $t_z$ denoting translations of the camera coordinate system along the x, y and z axes of the world coordinate system.

The transformation between the image coordinate system and the world coordinate system is:

$$S\begin{bmatrix}x\\ y\\ 1\end{bmatrix}=M\begin{bmatrix}R & T\end{bmatrix}\begin{bmatrix}X_w\\ Y_w\\ Z_w\\ 1\end{bmatrix}\qquad(2)$$

In formula (2), S denotes a scale factor;

$$M=\begin{bmatrix}f_x & 0 & u_0\\ 0 & f_y & v_0\\ 0 & 0 & 1\end{bmatrix}$$

is the intrinsic parameter matrix of the camera; $f_x$ and $f_y$ denote the focal length and in general are equal; $(u_0, v_0)$ denotes the intersection of the camera optical axis with the image plane, which usually lies at the image centre, so its value is often taken as half the resolution.

The point (X, Y, Z) in the world coordinate system corresponding to a point (x, y) in the image coordinate system can be computed from formula (2).

The transformation relationship from the world coordinate system to the pixel coordinate system can further be obtained:

$$x=f\frac{X_c}{Z_c},\qquad y=f\frac{Y_c}{Z_c}\qquad(4)$$

$$u=f_x\frac{X_c}{Z_c}+u_0,\qquad v=f_y\frac{Y_c}{Z_c}+v_0\qquad(5)$$

In formulas (4) and (5), f denotes the focal length of the camera; (u, v) denotes a point in the pixel coordinate system.
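The projection chain from world coordinates to pixel coordinates described above can be exercised numerically. The sketch below is an illustration with made-up intrinsic and extrinsic values, not the patent's calibration data:

```python
import numpy as np

# Made-up intrinsics: fx = fy = 800 px, principal point at the centre of a
# 640x480 image (half the resolution, as noted in the text).
fx = fy = 800.0
u0, v0 = 320.0, 240.0
M = np.array([[fx, 0.0, u0],
              [0.0, fy, v0],
              [0.0, 0.0, 1.0]])

# Made-up extrinsics: camera axes aligned with the world, offset 0.5 m along z.
R = np.eye(3)
T = np.array([0.0, 0.0, 0.5])

def world_to_pixel(Pw):
    """Project a world point to pixel coordinates (the S[x y 1]^T = M[R T]P chain)."""
    Pc = R @ Pw + T                 # world -> camera coordinates
    uvw = M @ Pc                    # camera -> homogeneous pixel coordinates
    return uvw[:2] / uvw[2]         # divide by the scale factor S = Zc

u, v = world_to_pixel(np.array([0.1, -0.05, 1.5]))
```

Running the chain forward like this is a quick sanity check that an estimated (M, R, T) reprojects known points to where they were observed.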

Step 2: Replace the calibration board with a calibration pen, fixed at the robot end as a target, as shown in Fig. 2, and set nine marker points in space on the basis of the coordinate system of the robot end joint. The robot moves the calibration pen to the marker points in different postures until all nine marker points have been visited, giving the conversion relationship between the image coordinate system and the robot end-joint coordinate system. From this, the conversion relationship Tcm between the camera coordinate system and the robot end-joint coordinate system can be derived, completing the hand-eye calibration. As shown by the coordinate-system conversion closed loop c in Fig. 3, the conversion relationship Tbc between the target coordinate system and the camera coordinate system can be derived in combination with the camera calibration; the conversion relationship Tmb between the robot end-joint coordinate system and the target coordinate system is then derived, completing the hand-target calibration.

In this step, the hand-target calibration specifically comprises the following steps:

Replace the calibration board with a calibration pen, using the pen as the target, and keep the coordinate system of the calibration pen consistent with that of the previous calibration board, as shown in Fig. 2. In the coordinate system of the robot end joint, define a cube in space, within a suitable shooting range of the camera, and take the eight vertices and the centre point of the cube as the marker points D1 to D9. The robot moves the calibration pen to each marker point in four different postures until all nine marker points have been visited. The points D1 to D9 in the robot end-joint coordinate system project to the corresponding points d1 to d9 in the image coordinate system, related by a Euclidean transformation, i.e. a rotation and a translation:

$$D_i=R\cdot d_i-t,\qquad i\in\{1,\dots,9\}\qquad(6)$$

Formula (6) expresses the conversion from D1 to D9 into d1 to d9; in formula (6), t denotes the translation vector. From this, the conversion relationship between the image coordinate system and the robot end-joint coordinate system is obtained. Combining the camera parameters obtained from the previous camera calibration with formula (7):

(formula (7))

the conversion relationship Tcm between the camera coordinate system and the robot end-joint coordinate system is obtained, completing the hand-eye calibration. As shown by the coordinate-system conversion closed loop c in Fig. 3, the conversion relationship between the robot end-joint coordinate system and the target satisfies Tmb = TcmTbc, i.e. the product of the conversion relationship Tcm between the camera coordinate system and the robot end-joint coordinate system and the conversion relationship Tbc between the target coordinate system and the camera coordinate system; computing Tmb completes the hand-target calibration.
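The Euclidean transformation of formula (6) can be estimated from the nine point correspondences. The patent does not name its solver; the sketch below uses the standard SVD-based (Kabsch) least-squares fit, written with the convention dst = R·src + t (the translation sign in formula (6) differs, but the fitting procedure is the same). All point values are made up:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Find R, t such that dst_i ~ R @ src_i + t (least-squares, Kabsch/SVD)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # fix a reflection -> proper rotation
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Nine made-up 3-D marker points: cube corners plus centre, like D1..D9.
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1],
                [0.5, 0.5, 0.5]], dtype=float)

# Ground-truth transform used only to synthesise correspondences.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.3])
dst = src @ R_true.T + t_true

R_est, t_est = fit_rigid_transform(src, dst)
```

With noiseless correspondences the fit recovers the ground-truth rotation and translation exactly (up to floating-point precision), which is why nine well-spread marker points suffice.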

Step 3: Combine the pre-calibration data to derive the position of the robot end in the robot base coordinate system.

In this step, the pre-calibration specifically comprises the following steps:

Pre-calibration is performed by the fixed-point method of spatial positions. By offline programming, the robot is controlled to move the calibration pen in turn to the nine set marker points; at each marker point the robot does not need to change its posture. The camera fixed outside the robot photographs the pen, and at the same time the coordinates P'(X, Y, Z) of the robot end and the rotation angles Δ'(1~6) of the six joints are recorded at each marker point.

The calibration-pen images collected by the camera are preprocessed and then further processed to obtain the pre-calibrated Tcm'. According to the coordinate-system conversion closed loop b in Fig. 3, T6 = TcmTcw; combining this with the recorded robot end coordinates gives the conversion relationship Tcw between the camera coordinate system and the robot's world coordinate system.

From the coordinate-system conversion closed loop a in Fig. 3, T6 = TmbTbcTcw. Combining the Tcw obtained by pre-calibration, the previously obtained conversion relationship Tbc between the target coordinate system and the camera coordinate system, and the Tmb obtained by hand-target calibration, the robot end position P1(X, Y, Z) can be derived; comparing it with the expected robot end position P0(X, Y, Z) gives the error.

Step 4: Compare the robot end position obtained above with the expected end position of the robot to obtain the error between them.

Step 5: Establish an error constraint equation from the robot end-position error, then apply nonlinear least-squares optimization: the local minimum of the error function is found by repeated iterative computation, this local minimum is taken to yield the optimal (minimum) value of the objective function, and the parameter calibration is complete.

In this step, the nonlinear least-squares optimization applied after the error is obtained specifically comprises the following steps:

The purpose of least-squares parameter optimization is to find a set of suitable geometric parameter values under which the end-positioning error of the robot is minimised.

The objective function for the nonlinear least-squares optimization can be constructed as:

$$F=\min\sum_{i=1}^{9}\sum_{j=1}^{9}\left(\left\|P_{0(X,Y,Z)i}-P_{0(X,Y,Z)j}\right\|-\left\|P_{1(X,Y,Z)i}-P_{1(X,Y,Z)j}\right\|\right)^{2}\qquad(8)$$

In formula (8), P0(X,Y,Z)i denotes the i-th expected robot end position; P0(X,Y,Z)j denotes the j-th expected robot end position; P1(X,Y,Z)i denotes the i-th derived robot end position; P1(X,Y,Z)j denotes the j-th derived robot end position.

This is a typical problem of estimating the parameters of a nonlinear static model under the criterion of minimising the sum of squared errors. The minimum of the objective function is found by an iterative optimization algorithm; the Levenberg-Marquardt algorithm is used, whose update takes the form:

$$H_{LM}=-(J^{T}J+\mu I)^{-1}J^{T}e\qquad(9)$$

In formula (9), $H_{LM}$ is the algorithm step, J is the Jacobian matrix of the error function, e is the error function, and μ is a positive number.

Substitute the robot end-position coordinates P0(X, Y, Z) and P1(X, Y, Z) into the objective function, then compute the iteration step $H_{LM}$, correct the parameter values, and repeat the above steps until the maximum number of iterations is reached or the objective function falls below the required threshold.
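The Levenberg-Marquardt update of formula (9) can be sketched directly in NumPy. The residual below is a made-up linear toy problem used only to exercise the update rule; it is not the patent's kinematic error model:

```python
import numpy as np

def lm_solve(residual, jacobian, x0, mu=1e-2, iters=50):
    """Minimise ||residual(x)||^2 with Levenberg-Marquardt steps
    H_LM = -(J^T J + mu*I)^(-1) J^T e, as in formula (9)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        e = residual(x)
        J = jacobian(x)
        step = -np.linalg.solve(J.T @ J + mu * np.eye(len(x)), J.T @ e)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(e):
            x = x + step
            mu *= 0.5               # good step: behave more like Gauss-Newton
        else:
            mu *= 2.0               # bad step: behave more like gradient descent
    return x

# Toy problem (made up): fit y = a*t + b to noiseless data with a = 2, b = -1.
t = np.linspace(0.0, 1.0, 10)
y = 2.0 * t - 1.0
residual = lambda x: x[0] * t + x[1] - y
jacobian = lambda x: np.stack([t, np.ones_like(t)], axis=1)

x_opt = lm_solve(residual, jacobian, x0=[0.0, 0.0])
```

The damping parameter μ interpolates between Gauss-Newton (small μ) and gradient descent (large μ), which is what makes the iteration robust far from the minimum.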

By adopting the above technical solution, vision is applied to robot calibration to obtain the conversion relationships between the coordinate systems and better improve the positioning accuracy of the robot.

Claims (1)

1. A robot calibration method based on vision and multi-coordinate-system closed-loop conversion, characterized in that it comprises the following steps:

Step 1: Establish a basic Eye-to-hand hand-eye model with the camera fixed outside the robot body and a chessboard fixed as the target at the robot end position. The robot moves the chessboard within the range in which the camera can capture suitable images, the Z axis of the calibration-board coordinate system is kept consistent with the Z axis of the flange coordinate system, and the calibration-board images captured by the camera in different poses are used for camera calibration, comprising the following steps:

Step 101: Keep the Z-axis direction of the calibration-board coordinate system consistent with the Z-axis direction of the flange coordinate system, and vary the pose of the chessboard on the robot end within the range in which the camera can capture suitable images, so that the fixed camera collects multiple chessboard images in different poses;

Step 102: Extract the corner points in the chessboard images, refine the corner positions to sub-pixel accuracy with the cornerSubPix() function in OpenCV, and perform camera calibration to obtain the intrinsic and extrinsic parameter matrices of the camera. The intrinsic parameter matrix of the camera is

$$M = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where $f_x$, $f_y$ denote the focal lengths and $(u_0, v_0)$ denotes the intersection of the camera optical axis with the image plane. The extrinsic parameter matrix of the camera is

$$\begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix}$$

where R denotes the rotation matrix from the world coordinate system to the camera coordinate system, $R = R_x R_y R_z$, with $R_x$, $R_y$, $R_z$ the rotations of the camera coordinate system about the x, y, z axes of the world coordinate system; T denotes the translation from the world coordinate system to the camera coordinate system, $T = [t_x \; t_y \; t_z]^{T}$, with $t_x$, $t_y$, $t_z$ the translations of the camera coordinate system along the x, y, z axes of the world coordinate system.
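The intrinsic matrix and the composed rotation $R = R_x R_y R_z$ of Step 102 can be sketched in plain Python. This is a minimal illustration with assumed numeric values for f_x, f_y, u_0, v_0 and the rotation angles; a real calibration would obtain these from the chessboard images (e.g. via OpenCV's calibrateCamera):

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0],
            [0.0,   c,  -s],
            [0.0,   s,   c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[  c, 0.0,   s],
            [0.0, 1.0, 0.0],
            [ -s, 0.0,   c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[  c,  -s, 0.0],
            [  s,   c, 0.0],
            [0.0, 0.0, 1.0]]

def matmul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Assumed intrinsic values (pixels): focal lengths and principal point.
fx, fy, u0, v0 = 800.0, 820.0, 320.0, 240.0
M = [[ fx, 0.0,  u0],
     [0.0,  fy,  v0],
     [0.0, 0.0, 1.0]]

# Extrinsic rotation composed as R = Rx * Ry * Rz, as in the claim.
R = matmul(rot_x(0.10), matmul(rot_y(-0.20), rot_z(0.30)))
```

A quick sanity check on any composed rotation is orthogonality: R·Rᵀ should equal the identity.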
The transformation formula relating the camera coordinate system and the world coordinate system is:

$$\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^{T} & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \qquad (1)$$
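Formula (1) can be checked numerically with plain Python (an assumed rotation about z and an assumed translation). Since R and T map world coordinates to camera coordinates, the camera-to-world direction uses the rigid-transform inverse, $(R^{T}, -R^{T}T)$:

```python
import math

a = 0.3
c, s = math.cos(a), math.sin(a)
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]  # world -> camera rotation
T = [0.5, -1.0, 4.0]                               # world -> camera translation

def world_to_camera(Pw):
    # Pc = R * Pw + T
    return [sum(R[i][k] * Pw[k] for k in range(3)) + T[i] for i in range(3)]

def camera_to_world(Pc):
    # Rigid-transform inverse: Pw = R^T * (Pc - T)
    q = [Pc[i] - T[i] for i in range(3)]
    return [sum(R[k][i] * q[k] for k in range(3)) for i in range(3)]

Pw = [1.0, 2.0, 3.0]
Pc = world_to_camera(Pw)
Pw_back = camera_to_world(Pc)  # round-trips to the original point
```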
In formula (1), $(X_w, Y_w, Z_w)$ denotes a point in the world coordinate system and $(X_c, Y_c, Z_c)$ denotes the corresponding point in the camera coordinate system.

The transformation relationship between the image coordinate system and the world coordinate system is:

$$S \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = M \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \qquad (2)$$
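Formula (2) can be exercised numerically in plain Python. The sketch below uses an assumed identity rotation and translation so the expected pixel can be checked by hand; the scale factor S comes out as the camera-frame depth $Z_c$:

```python
# Project a world point with S*[x, y, 1]^T = M [R | T] [Xw, Yw, Zw, 1]^T.
fx, fy, u0, v0 = 800.0, 800.0, 320.0, 240.0  # assumed intrinsics

def project(Pw, R, T):
    # Camera-frame point: Pc = R * Pw + T
    Pc = [sum(R[i][k] * Pw[k] for k in range(3)) + T[i] for i in range(3)]
    S = Pc[2]                  # scale factor = depth Z_c
    x = fx * Pc[0] / S + u0    # apply the intrinsic matrix M
    y = fy * Pc[1] / S + v0
    return (x, y), S

R_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
T = [0.0, 0.0, 5.0]            # assumed translation along the optical axis
(px, py), S = project([1.0, 2.0, 5.0], R_id, T)
# Pc = (1, 2, 10), so x = 800*1/10 + 320 = 400 and y = 800*2/10 + 240 = 400
```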
S denotes the scale factor; the point (X, Y, Z) in the world coordinate system corresponding to a point (x, y) in the image coordinate system is computed by formula (2).

Step 2: Perform hand-target calibration, comprising the following steps:

Step 201: Fix a calibration pen as the target on the robot end, and set 9 marker points in space on the basis of the robot end-joint coordinate system. The robot moves the calibration pen to each marker point and touches it with different orientations until all 9 marker points have been touched. The points D_1 ~ D_9 in the robot end-joint coordinate system project to the corresponding points d_1 ~ d_9 in the image coordinate system; the conversion between D_1 ~ D_9 and d_1 ~ d_9 is expressed by a Euclidean transformation, from which the conversion relationship between the image coordinate system and the robot end-joint coordinate system is obtained;

Step 202: Based on the conversion relationship between the image coordinate system and the robot end-joint coordinate system obtained in step 201, derive the conversion relationship T_cm between the camera coordinate system and the robot end-joint coordinate system, completing the hand-eye calibration;

Step 203: Based on T_cm and the camera parameters obtained from the camera calibration in step 1, derive the conversion relationship T_bc between the target coordinate system and the camera coordinate system;

Step 204: From T_mb = T_cm T_bc, derive the conversion relationship T_mb between the robot end-joint coordinate system and the target coordinate system, completing the hand-target calibration.

In the above steps, the Euclidean transformation, i.e. a rotation and a translation,

D_i = R·d_i + t,  i ∈ {1, …, 9}

relates D_1 ~ D_9 and d_1 ~ d_9, where t denotes the translation vector; from this the conversion relationship between the image coordinate system and the robot end-joint coordinate system is obtained. Combining it with the camera parameters obtained from camera calibration and the formula

Figure FDA0003957054680000031
the conversion relationship T_cm between the camera coordinate system and the robot end-joint coordinate system is obtained, completing the hand-eye calibration. The conversion relationship between the robot end-joint coordinate system and the target then follows as the product of T_cm (between the camera coordinate system and the robot end-joint coordinate system) and T_bc (between the target coordinate system and the camera coordinate system), T_mb = T_cm T_bc, which yields the hand-target calibration.

Step 3: Combine the pre-calibration data to derive the position of the robot end in the robot base coordinate system, comprising the following steps:

Step 301: Perform pre-calibration with fixed points in space. In the robot end-joint coordinate system, move the calibration pen to the 9 set marker points in sequence; at each marker point the robot does not need to change its orientation. The camera fixed outside the robot photographs the pen, and the coordinates P′(X, Y, Z) of the robot end at each marker point are recorded at the same time;

Step 302: Preprocess the calibration-pen images collected by the camera at the different marker points, then process them further to obtain the pre-calibrated T_cm(1~9), where T_cm(1~9) denotes the 9 groups of pre-calibrated conversion matrices between the camera coordinate system and the robot end-joint coordinate system. Combining T_cm(1~9) with the recorded robot end coordinates gives 9 groups of conversion relationships T_cw(1~9) between the camera coordinate system and the world coordinate system of the robot; the average of the 9 groups is taken as the final conversion relationship T_cw between the camera coordinate system and the world coordinate system of the robot;

Step 303: Combining the pre-calibrated T_cw, the conversion relationship T_bc between the target coordinate system and the camera coordinate system, and the T_mb obtained from the hand-target calibration, the robot end position can be derived; comparing it with the expected manipulator end position gives the error;

Step 4: Compare the robot end position obtained above with the expected end position of the robot to obtain the error between them;

Step 5: Establish an error constraint equation from the robot end-position error obtained in step 4, then optimize with the nonlinear least-squares method: the local minimum of the error function is found by iterative computation, and this local minimum is taken as yielding the optimal solution of the objective function, completing the parameter calibration. The objective function of the nonlinear least-squares optimization is constructed as:
Figure FDA0003957054680000041

where P_0(X,Y,Z)i denotes the i-th expected robot end position, P_0(X,Y,Z)j denotes the j-th expected robot end position, P_1(X,Y,Z)i denotes the i-th derived robot end position, and P_1(X,Y,Z)j denotes the j-th derived robot end position.
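Step 5's iterative least-squares search can be sketched in plain Python. The claim's objective function is shown only as a figure, so the sketch below minimizes a simple stand-in — the summed squared error between expected positions P0 and derived positions P1 shifted by an unknown offset b — by gradient descent; it illustrates the iterative search for a (local) minimum, not the patent's exact objective:

```python
# Expected positions P0 and derived positions P1 that differ by an unknown offset.
P0 = [[0.0, 0.0, 0.0], [1.0, 2.0, 3.0], [4.0, -1.0, 2.0]]
true_offset = [0.3, -0.2, 0.1]
P1 = [[p[i] - true_offset[i] for i in range(3)] for p in P0]

def objective(b):
    # f(b) = sum_i || P0_i - (P1_i + b) ||^2
    return sum(sum((P0[n][i] - (P1[n][i] + b[i])) ** 2 for i in range(3))
               for n in range(len(P0)))

b = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(200):
    # Gradient of f with respect to b: -2 * sum_i (P0_i - P1_i - b)
    g = [-2.0 * sum(P0[n][i] - P1[n][i] - b[i] for n in range(len(P0)))
         for i in range(3)]
    b = [b[i] - lr * g[i] for i in range(3)]
```

For this quadratic stand-in the descent converges to the mean residual, so b recovers the offset; the patent's real objective would be minimized the same way, just with a different residual.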
CN201911279737.0A 2019-12-13 2019-12-13 Robot calibration method based on vision and multi-coordinate system closed-loop conversion Active CN110919658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911279737.0A CN110919658B (en) 2019-12-13 2019-12-13 Robot calibration method based on vision and multi-coordinate system closed-loop conversion


Publications (2)

Publication Number Publication Date
CN110919658A CN110919658A (en) 2020-03-27
CN110919658B true CN110919658B (en) 2023-03-31

Family

ID=69860355


Country Status (1)

Country Link
CN (1) CN110919658B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111515950B (en) 2020-04-28 2022-04-08 腾讯科技(深圳)有限公司 Method, device, device and storage medium for determining transformation relationship of robot coordinate system
CN111768364B (en) * 2020-05-15 2022-09-20 成都飞机工业(集团)有限责任公司 Aircraft surface quality detection system calibration method
CN112223285B (en) * 2020-09-30 2022-02-01 南京航空航天大学 Robot hand-eye calibration method based on combined measurement
CN113119083B (en) * 2021-03-19 2022-05-06 深圳市优必选科技股份有限公司 Robot calibration method and device, robot and storage medium
CN113237434B (en) * 2021-04-25 2022-04-01 湖南大学 An eye-in-hand calibration method of laser profile sensor based on stepped calibration object
CN114260899A (en) * 2021-12-29 2022-04-01 广州极飞科技股份有限公司 Hand-eye calibration method and device, electronic equipment and computer readable storage medium
CN115049744A (en) * 2022-07-11 2022-09-13 深圳市易尚展示股份有限公司 Robot hand-eye coordinate conversion method and device, computer equipment and storage medium
CN115464658A (en) * 2022-09-30 2022-12-13 柳州职业技术学院 An industrial robot kinematics calibration device
CN116277035B (en) * 2023-05-15 2023-09-12 北京壹点灵动科技有限公司 Robot control method and device, processor and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2769947B2 (en) * 1992-05-15 1998-06-25 株式会社椿本チエイン Manipulator position / posture control method
CN107747941B (en) * 2017-09-29 2020-05-15 歌尔股份有限公司 Binocular vision positioning method, device and system
CN109859275B (en) * 2019-01-17 2022-08-02 南京邮电大学 Monocular vision hand-eye calibration method of rehabilitation mechanical arm based on S-R-S structure
CN110136208B (en) * 2019-05-20 2020-03-17 北京无远弗届科技有限公司 Joint automatic calibration method and device for robot vision servo system
CN110355464A (en) * 2019-07-05 2019-10-22 上海交通大学 Visual Matching Method, system and the medium of laser processing



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant