
CN118322213A - Vision-based kinematic calibration method for industrial robots in large workspaces - Google Patents

Vision-based kinematic calibration method for industrial robots in large workspaces

Info

Publication number
CN118322213A
CN118322213A (Application CN202410623842.6A)
Authority
CN
China
Prior art keywords
aruco
robot
vision
error
marker
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410623842.6A
Other languages
Chinese (zh)
Inventor
高栋
尹远浩
邓柯楠
路勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen filed Critical Harbin Institute of Technology Shenzhen
Priority to CN202410623842.6A
Publication of CN118322213A


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

A vision-based kinematic calibration method for industrial robots in large workspaces, which solves the problem that existing vision-based robot kinematic calibration methods can improve the absolute positioning accuracy of a robot only within a limited space because of field-of-view restrictions, and which belongs to the technical field of robot kinematic calibration. The invention includes: laying out ArUco markers in the workspace according to the industrial robot configuration; capturing ArUco marker images with a monocular vision system along a set shooting path to generate a fully covering ArUco map; capturing ArUco marker images with the monocular vision system at different robot poses, building an absolute position model of the robot end, and obtaining the measured pose; computing the error matrix between the nominal end pose, calculated from the robot joint angles and nominal kinematic parameter values, and the measured pose; and establishing a pose error model to obtain the robot kinematic parameter errors from the error matrix.

Description

Vision-based kinematic calibration method for industrial robots in large workspaces

Technical Field

The invention relates to a vision-based kinematic calibration method for industrial robots in large workspaces, and belongs to the technical field of robot kinematic calibration.

Background Art

With the demand of advanced manufacturing for high-speed, high-precision, heavy-load industrial robots, the requirements on the absolute positioning accuracy of robots keep rising, and improving that accuracy through calibration has become a research hotspot. Vision-based robot kinematic calibration is favored for its low cost and simple operation. At present, robot kinematic calibration mainly relies on specific markers or feature points (such as checkerboards and precision spheres) as calibration references. The paper "Kinematic identification of industrial robot using end-effector mounted monocular camera bypassing measurement of 3-D pose" (IEEE/ASME Transactions on Mechatronics 27.1 (2021): 383-394) introduces a kinematic identification method that uses a monocular camera mounted on the end effector; it bypasses three-dimensional pose measurement and works directly from two-dimensional images of a checkerboard calibration plate, reducing the process to a single-stage estimation, but the camera field of view restricts the calibration space. The paper "A novel vision-based calibration framework for industrial robotic manipulators" (Robotics and Computer-Integrated Manufacturing 73 (2022): 102248) proposes a calibration framework that uses a single externally fixed camera and an ArUco marker on the robot end, but this approach cannot reconcile the camera field of view with the measurement distance, which limits its flexibility in large workspaces.

In summary, current vision-based robot kinematic calibration methods share a common problem: the camera field of view limits the robot's range of motion during calibration, so the improvement in absolute positioning accuracy is concentrated in a restricted measurement region. When the robot is deployed in a large workspace, its absolute positioning accuracy therefore struggles to meet high-precision requirements across the whole space.

Summary of the Invention

In view of the problem that existing vision-based robot kinematic calibration methods can improve the absolute positioning accuracy of a robot only within a limited space because of field-of-view restrictions, the present invention provides a vision-based kinematic calibration method for industrial robots in large workspaces.

The vision-based kinematic calibration method for industrial robots in large workspaces of the present invention comprises:

laying out ArUco markers in the workspace according to the industrial robot configuration;

capturing ArUco marker images with a monocular vision system along a set shooting path, and stitching the ArUco marker images into a fully covering ArUco map;

capturing ArUco marker images with the monocular vision system at different robot poses, identifying the position of the robot end from the fully covering ArUco map, and constructing an absolute position model of the robot end;

calculating the nominal end pose from the robot joint angles and the nominal kinematic parameter values, obtaining the measured end pose from the absolute position model of the robot end, and forming the error matrix between the nominal and measured end poses;

establishing a pose error model and obtaining the robot kinematic parameter errors from the error matrix.

Preferably, the method of laying out ArUco markers in the workspace according to the industrial robot configuration comprises:

establishing the objective function F(O) of the layout scheme:

F(O) = α·V(T) + β·U(D) + γ·P(R) - δ·C(S)

where O denotes the marker layout scheme,

V(T) denotes the visibility parameter of the ArUco markers, and α is the weight of V(T);

U(D) denotes the uniformity parameter of the ArUco marker layout, and β is the weight of U(D);

P(R) denotes the average positioning accuracy of the robot over all positions in its range of motion, and γ is the weight of P(R);

C(S) denotes the layout cost, and δ is the weight of C(S);

optimizing the ArUco marker layout scheme O with a genetic algorithm to maximize the objective function F(O), iteratively updating the positions, number, and orientations of the markers until a layout that maximizes F(O) is found.

Preferably,

U(D) = 1 / (1 + σ²(D)),

where D denotes the set of marker densities at the inspection points in the workspace, and σ²(D) is the variance of D.

Preferably,

V(T) = Σ_{i=1}^{n} v_i · A_i / A_total,

where v_i denotes the visibility of the i-th ArUco marker, which depends on the angle and distance between the marker and the camera, A_i is the visible area of the i-th marker in the camera's field of view, A_total is the total area of the camera's field of view, and n is the total number of ArUco markers.

Preferably,

P(R) = (1/m) Σ_{j=1}^{m} (1 - d_j/d_max),

where m is the total number of positioning operations, d_j is the positioning error in the j-th operation, and d_max is the maximum allowable error distance.

Preferably, the method of capturing ArUco marker images with the monocular vision system along a set shooting path and stitching them into a fully covering ArUco map comprises:

capturing ArUco marker images in sequence along the set closed-loop shooting path, ensuring that adjacent images contain at least one common ArUco marker;

determining the position of each ArUco marker and minimizing the reprojection error of the target ArUco marker corner points with the Levenberg-Marquardt (L-M) algorithm, yielding the initial ArUco map;

The objective function of the reprojection error is

min_{δ, γ_t, γ_i} Σ_t Σ_i Σ_j ‖ Ψ(δ, γ_t, γ_i, c_j) - p̂_{i,j} ‖²

and the closed-loop constraint is

min Σ_{k=1}^{K} ‖ x_{k+1} ⊖ (x_k ⊕ z_k) ‖², with x_{K+1} = x_1,

where Ψ(δ, γ_t, γ_i, c_j) denotes the projection of a corner point from three-dimensional space coordinates to pixel coordinates, δ is the camera intrinsic matrix, γ_t the camera extrinsics, γ_i the transformation matrix from the marker coordinate system to the world coordinate system, c_j the three-dimensional coordinates of a marker corner, p̂_{i,j} the pixel coordinates of the marker corner, K the total number of images on the closed-loop shooting path, x_k the pose of the k-th image, z_k the pose transformation from the k-th image to the (k+1)-th image, and ⊕, ⊖ the composition operators between poses.

Preferably, the method of constructing the absolute position model of the robot end is:

establishing the coordinate transformation between the ArUco image coordinate system and the camera coordinate system from the fully covering ArUco map and the ArUco marker images captured by the monocular vision system at different robot poses;

using a hand-eye calibration model to determine the relative pose between the robot end coordinate system and the camera coordinate system, thereby fixing the absolute position model of the robot end.

Preferably, a pose error model is established, and the error matrix is processed by a Python program to obtain the robot kinematic parameter errors.

The beneficial effects of the present invention are as follows:

By adopting a monocular vision system and an ArUco marker map, the invention greatly reduces the cost of high-precision calibration while simplifying the calibration process, making installation and operation simpler and faster. This makes the invention particularly suitable for production environments that require frequent calibration or adjustment, saving users substantial time and cost. The invention effectively overcomes the field-of-view limitation of traditional visual calibration methods and achieves higher-precision robot positioning in a larger workspace, improving both production efficiency and manipulation quality, especially in applications with stringent positioning-accuracy requirements. The calibration method applies to industrial robots of various models and brands and is highly general. The flexible design of the ArUco marker map allows the calibration system to be adapted to actual needs, giving good scalability. At the same time, the additional information and redundancy provided by the method maintain high calibration accuracy even when some markers are occluded or damaged, enhancing the robustness of the system in complex environments.

Brief Description of the Drawings

FIG. 1 is a flowchart of a specific implementation of the present invention;

FIG. 2 is a schematic diagram of the robot kinematic model of the present invention;

FIG. 3 is a schematic diagram of the ArUco map construction of the present invention;

FIG. 4 is a schematic diagram of the robot kinematic calibration platform of the present invention;

FIG. 5 is a schematic diagram of the region-by-region verification of robot positioning accuracy of the present invention;

FIG. 6 compares the positioning accuracy of the robot before and after calibration in a specific example of the present invention.

Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.

It should be noted that, where no conflict arises, the embodiments of the present invention and the features in the embodiments may be combined with one another.

The present invention is further described below with reference to the accompanying drawings and specific embodiments, which are not intended to limit the invention.

The vision-based kinematic calibration method for industrial robots in large workspaces of this embodiment comprises the following steps:

Step 1: lay out ArUco markers in the workspace according to the industrial robot configuration, and optimize the layout;

Step 2: capture ArUco marker images with a monocular vision system along the set shooting path, and stitch the ArUco marker images into a fully covering ArUco map;

Step 3: capture ArUco marker images with the monocular vision system at different robot poses, identify the position of the robot end from the fully covering ArUco map, and construct an absolute position model of the robot end;

Step 4: calculate the nominal end pose from the robot joint angles and the nominal kinematic parameter values, obtain the measured end pose from the absolute position model of the robot end, and form the error matrix between the nominal and measured end poses;

Step 5: establish a pose error model and obtain the robot kinematic parameter errors from the error matrix.

By adopting a monocular vision system and an ArUco marker map, this embodiment greatly reduces the cost of high-precision calibration while simplifying the calibration process, making installation and operation simpler and faster. This makes it particularly suitable for production environments that require frequent calibration or adjustment, saving users substantial time and cost.

In this embodiment, the layout of the on-site equipment (industrial robot, worktable, processing equipment, etc.) is first determined, and a suitable industrial camera is selected according to the working distance and the required measurement accuracy. The mounting position and angle of the camera on the robot end are carefully designed to optimize image-capture quality.

ArUco markers with unique IDs are placed at appropriate locations in the robot workspace, and image stitching is used to generate an ArUco map (the calibration reference) that fully covers the workspace. During capture, at least one complete ArUco marker lies entirely within the camera's field of view.

Robot poses are selected to cover the robot's working range as fully as possible, taking into account the robot's motion characteristics and the capture capability of the vision system. The robot moves through the predetermined pose points, and the six joint angles of the industrial robot are collected at each pose; at the same time, the industrial camera mounted on the robot end captures images containing ArUco markers. The image data and joint-angle data at each pose point are recorded and stored for subsequent analysis and processing.

The layout of the ArUco markers in step 1 is as follows:

First, within the workspace of the industrial robot, the layout of the ArUco markers is optimized according to the robot's specific configuration and task requirements. In this embodiment, the layout follows these principles:

(1) Layout uniformity: the positions and spacing of the markers are planned so that the markers cover the entire workspace with similar marker density everywhere. This is achieved by optimizing the uniformity term U(D) of the objective function, where

D denotes the set of marker densities at the inspection points in the workspace and σ²(D) is the variance of D;

(2) ArUco marker visibility: when choosing marker positions, the camera parameters and field of view are taken into account, and markers are kept out of regions occluded by the robot or other objects. The total visibility V(T) of the ArUco markers is computed so that the camera can accurately observe and localize every marker;

(3) Robot pose variation: to enlarge the range of robot poses, the ArUco markers are placed at a variety of positions, including non-planar ones. This not only widens the pose range but also improves the robustness and stability of the calibration, which is particularly useful for complex workspaces and multi-pose operation.

Considering marker visibility, layout uniformity, and robot positioning accuracy together, the optimization objective function is defined as:

F(O) = α·V(T) + β·U(D) + γ·P(R) - δ·C(S)

where F(O) is the optimization objective and O denotes the marker layout scheme; α, β, and γ are the weights of V(T), U(D), and P(R), respectively. P(R) denotes the average positioning accuracy of the robot over all positions in its range of motion, computed as P(R) = (1/m) Σ_{j=1}^{m} (1 - d_j/d_max), where m is the total number of positioning operations, d_j is the positioning error in the j-th operation, and d_max is the maximum allowable error distance. C(S) denotes the layout cost, computed as C(S) = Σ_k c_k, where c_k is the placement cost of the k-th marker and δ is the weight of C(S).
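As an illustration of the objective terms above, the following sketch computes P(R), C(S), and F(O). The closed forms are assumptions (the mean of 1 - d_j/d_max for P(R), the sum of per-marker costs c_k for C(S)), since the original formula images are not reproduced here.

```python
# Sketch of the accuracy, cost, and combined layout objective F(O).
# Formulas for P(R) and C(S) are assumed, not taken verbatim from the patent.

def positioning_accuracy(errors, d_max):
    """P(R): average normalized accuracy over m positioning trials."""
    return sum(1.0 - d / d_max for d in errors) / len(errors)

def placement_cost(costs):
    """C(S): total placement cost of all markers."""
    return sum(costs)

def layout_objective(V, U, P, C, alpha, beta, gamma, delta):
    """F(O) = alpha*V(T) + beta*U(D) + gamma*P(R) - delta*C(S)."""
    return alpha * V + beta * U + gamma * P - delta * C
```

A layout with smaller positioning errors d_j raises P(R) toward 1, while each added marker increases C(S) and is penalized through δ.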

A genetic algorithm is used to optimize the ArUco marker layout scheme O so as to maximize the objective function F(O). During optimization, the positions, number, and orientations of the markers are iteratively updated until a layout that maximizes F(O) is found.
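The genetic-algorithm optimization described above can be sketched as follows. The fitness is a toy stand-in for F(O), and the selection/mutation scheme and hyperparameters are illustrative assumptions, not the patent's implementation.

```python
import random

# Minimal elitist genetic algorithm maximizing a layout objective.
# Individuals are lists of 1-D marker coordinates; the real method would
# score full layouts via F(O) = a*V(T) + b*U(D) + g*P(R) - d*C(S).

def evolve(fitness, init_pop, generations=50, mut_sigma=0.1, seed=0):
    rng = random.Random(seed)
    pop = [list(p) for p in init_pop]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # rank by fitness
        parents = pop[: max(1, len(pop) // 2)]       # elitist selection
        children = [[x + rng.gauss(0.0, mut_sigma) for x in p]
                    for p in parents]                # Gaussian mutation
        pop = parents + children                     # parents survive
    return max(pop, key=fitness)

# Toy fitness: the best layout puts both markers at coordinate 1.0.
f = lambda o: -sum((x - 1.0) ** 2 for x in o)
init = [[-2.0, 3.0], [0.5, 0.5], [2.0, -1.0], [1.5, 1.5]]
best = evolve(f, init)
```

Because parents survive each generation, the best fitness is monotonically non-decreasing, mirroring the iterative improvement of the layout described in the text.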

The ArUco map in step 2 is built as follows:

(1) The robot photographs the ArUco markers from different poses, ensuring that adjacent images contain a common marker. At robot pose 1, when markers 1 and 3 are both detected in the image, the pose of marker 1 relative to the camera, T_c1, and the pose of marker 3 relative to the camera, T_c3, can be established, giving the pose transformation between markers 1 and 3 at pose 1, T_13 = T_c1^{-1}·T_c3. At pose 2, when markers 3 and 4 are detected simultaneously, the transformation T_34 between them can be determined in the same way; since marker 3 is common to both views, the transformation between markers 1 and 4 follows as T_14 = T_13·T_34. Likewise, through the common markers in the images captured at each position, the relative position of any two ArUco markers in space can be determined. By designating one ArUco coordinate system as the reference world coordinate system, the positions of all ArUco markers relative to that reference can be fixed.
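The chaining through a shared marker can be illustrated with homogeneous transforms. The T names below are our notation (the patent's matrix symbols were lost in extraction), and the numeric poses are invented for the demonstration.

```python
import numpy as np

# Chaining marker poses through a shared marker, as in the map-building step:
# one camera view yields camera-to-marker poses T_c1 and T_c3, so the
# marker-1-to-marker-3 transform is T_13 = inv(T_c1) @ T_c3; a second view
# sharing marker 3 yields T_34, and then T_14 = T_13 @ T_34.

def make_T(t):
    """Homogeneous transform with identity rotation (enough for the demo)."""
    T = np.eye(4)
    T[:3, 3] = t
    return T

def relative_pose(T_cam_a, T_cam_b):
    """Pose of marker b expressed in marker a's frame, from one camera view."""
    return np.linalg.inv(T_cam_a) @ T_cam_b

# View 1 sees markers 1 and 3; view 2 sees markers 3 and 4.
T_13 = relative_pose(make_T([0.0, 0.0, 1.0]), make_T([0.5, 0.0, 1.0]))
T_34 = relative_pose(make_T([0.2, 0.0, 1.0]), make_T([0.2, 0.4, 1.0]))
T_14 = T_13 @ T_34
```

Designating marker 1's frame as the world frame, T_14 directly gives marker 4's position in the map, which is how the full marker map is anchored to a single reference.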

(2) Because of insufficient lighting, fast camera motion, low resolution, poor focus, and similar factors, the pixel coordinates of the marker corner points become imprecise, introducing errors into the computed pose matrices. In short, observation noise makes the simple projection relations inaccurate and lets errors accumulate. It is therefore necessary to determine the precise position of each ArUco marker even when the camera observations are imprecise. This is an optimization problem that minimizes the reprojection error:

min_{δ, γ_t, γ_i} Σ_t Σ_i Σ_j ‖ Ψ(δ, γ_t, γ_i, c_j) - p̂_{i,j} ‖²

where Ψ(δ, γ_t, γ_i, c_j) denotes the projection of a corner point from three-dimensional space coordinates to pixel coordinates, δ is the camera intrinsic matrix, γ_t the camera extrinsics, γ_i the transformation matrix from the marker coordinate system to the world coordinate system, c_j the three-dimensional coordinates of a marker corner, and p̂_{i,j} the pixel coordinates of the marker corner.

(3) Because errors accumulate as the number of stitched images grows, a closed-loop image-acquisition route is set to mitigate this, the overall error is optimized, and a closed-loop constraint is introduced:

min Σ_{k=1}^{K} ‖ x_{k+1} ⊖ (x_k ⊕ z_k) ‖², with x_{K+1} = x_1,

where K is the total number of images in the closed loop, x_k is the pose of the k-th image, z_k is the pose transformation from the k-th image to the (k+1)-th image, and ⊕, ⊖ are the composition operators between poses.
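A minimal illustration of the closed-loop constraint, assuming the unnamed pose operator is SE(2) composition (an assumption for the sketch): composing the inter-image transforms z_k around the loop and measuring the offset from the start pose gives the drift that the optimization drives to zero.

```python
import math

# Poses are (x, y, theta); 'oplus' composes a relative motion onto a pose.
# For a perfect closed loop the composed chain returns to the start, so any
# residual offset is the loop-closure error used as the optimization target.

def oplus(a, b):
    x, y, th = a
    dx, dy, dth = b
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

def loop_drift(z_list, start=(0.0, 0.0, 0.0)):
    """Accumulate z_1..z_K from 'start'; the positional offset is the drift."""
    pose = start
    for z in z_list:
        pose = oplus(pose, z)
    return tuple(p - s for p, s in zip(pose, start))

# A square path: four unit moves with 90-degree turns closes on itself.
drift = loop_drift([(1.0, 0.0, math.pi / 2)] * 4)
```

With noisy z_k the drift would be nonzero, and the graph optimization redistributes that error over all K poses instead of letting it pile up at the end of the stitch.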

In step 3, the robot captures ArUco marker images with the monocular vision system at different poses, and the absolute position model of the robot end is constructed as follows:

(1) The measurement system consists of multiple ArUco markers in space, an industrial robot, and an industrial camera mounted on the robot end. The camera captures images, and image-recognition techniques estimate the pose of each ArUco marker, establishing the coordinate transformation between the ArUco image coordinate system and the camera coordinate system. The PnP algorithm is used to determine the six-degree-of-freedom pose of the camera relative to the marker plane; in this algorithm, the three-dimensional-to-two-dimensional correspondence of the four corner points of an ArUco marker is described by:

s·[u, v, 1]^T = δ·[R | t]·[X, Y, Z, 1]^T
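The three-dimensional-to-two-dimensional corner correspondence used by the PnP step can be sketched with the standard pinhole model (an assumption consistent with the intrinsic matrix δ used elsewhere in the text; the intrinsic values below are illustrative).

```python
import numpy as np

# Pinhole projection: a marker corner c (3-D, metres) maps to pixel
# coordinates via the intrinsics delta and the camera pose (R, t):
# s * [u, v, 1]^T = delta @ (R @ c + t), with s the depth.

def project(delta, R, t, c):
    p_cam = R @ np.asarray(c, float) + np.asarray(t, float)  # camera frame
    uvw = delta @ p_cam                                      # homogeneous pixel
    return uvw[:2] / uvw[2]                                  # divide by depth s

delta = np.array([[800.0,   0.0, 320.0],   # illustrative intrinsics (fx, fy, cx, cy)
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
uv = project(delta, np.eye(3), [0.0, 0.0, 2.0], [0.1, -0.05, 0.0])
```

PnP inverts this relation: given the four known corner positions of a marker and their detected pixel coordinates, it solves for R and t of the camera.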

(2) Using the hand-eye calibration model, the relative pose between the end coordinate system of the KUKA KR500 robot and the coordinate system of the Daheng industrial camera is determined, fixing the absolute position model of the robot end. The hand-eye matrix obtained by this calibration step in this embodiment is:
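The resulting absolute position model can be illustrated as a chain of homogeneous transforms. The chain T_world_end = T_world_marker·inv(T_cam_marker)·inv(T_end_cam), the matrix names, and all numeric values are our illustrative assumptions, not the calibrated KUKA/Daheng values.

```python
import numpy as np

# Absolute end pose from the map and the hand-eye matrix:
#   T_cam_marker  - marker pose seen by the camera (from PnP),
#   T_world_marker - that marker's pose in the stitched ArUco map,
#   T_end_cam     - camera pose on the flange (hand-eye calibration).

def make_T(t):
    T = np.eye(4)
    T[:3, 3] = t
    return T

def end_pose_in_world(T_world_marker, T_cam_marker, T_end_cam):
    return T_world_marker @ np.linalg.inv(T_cam_marker) @ np.linalg.inv(T_end_cam)

T_we = end_pose_in_world(make_T([2.0, 0.0, 0.0]),   # marker location in the map
                         make_T([0.0, 0.0, 1.0]),   # marker seen 1 m ahead
                         make_T([0.0, 0.1, 0.0]))   # camera offset on the flange
```

Because any visible marker closes this chain, the end pose stays measurable anywhere in the mapped workspace, which is the point of the large-workspace scheme.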

The error matrix of the measured pose in step 4 is obtained as follows:

(1) A kinematic model is established from the theoretical DH parameters of the industrial robot, and the nominal end pose of the robot is calculated for each pose;

(2) The measured end pose is obtained from the absolute position model of the robot end, and the error matrix between the nominal and measured end poses is formed.
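Steps (1)-(2) can be sketched as follows: forward kinematics from DH rows gives the nominal pose, and the difference against the measured pose feeds the error matrix. The DH convention is the standard one and the parameter values are illustrative, not the KR500's.

```python
import numpy as np

# Standard DH link transform for (theta, d, a, alpha), and nominal forward
# kinematics as the product of link transforms.

def dh_matrix(theta, d, a, alpha):
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows):
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_matrix(*row)
    return T

def position_error(T_nominal, T_measured):
    """Positional part of the nominal-vs-measured pose error."""
    return T_measured[:3, 3] - T_nominal[:3, 3]

# Two straight 1 m links as a toy chain.
T_nom = forward_kinematics([(0.0, 0.0, 1.0, 0.0), (0.0, 0.0, 1.0, 0.0)])
```

Stacking these per-pose errors over all measured configurations yields the error matrix passed to the identification in step 5.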

In step 5, a pose error model is established, and the error matrix is processed by a Python program to obtain the robot kinematic parameter errors.
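The identification in step 5 can be sketched as a linear least-squares solve, assuming the pose error model is linearized as Δx = J·Δp with an identification Jacobian J (a common formulation; the exact model is not reproduced in the text). The data below are synthetic.

```python
import numpy as np

# Recover kinematic parameter errors dp from stacked pose errors dx via
# least squares on the linearized model dx = J @ dp.

def identify_parameter_errors(J, pose_errors):
    """Solve J @ dp ~= pose_errors in the least-squares sense."""
    dp, *_ = np.linalg.lstsq(J, pose_errors, rcond=None)
    return dp

rng = np.random.default_rng(0)
J = rng.normal(size=(30, 4))            # 30 stacked error equations, 4 parameters
true_dp = np.array([0.01, -0.02, 0.005, 0.0])
dx = J @ true_dp                        # synthetic noise-free pose errors
dp_hat = identify_parameter_errors(J, dx)
```

In practice the solve is iterated: after each update of the parameters the nominal poses and Jacobian are recomputed until the residual pose errors converge.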

To verify the effectiveness and superiority of the invention, the ArUco-map-based calibration method is compared with the traditional checkerboard-based calibration method. The operating space of the industrial robot is divided into nine regions, with ten verification points randomly placed in each region. Comparing the positioning accuracy of the two methods across regions and analyzing the error distribution further demonstrates the advantages of the method of the invention.

The identified robot kinematic parameter errors are shown in the table:

Table 1. Kinematic parameter error results

The processed data are analyzed to evaluate how much the method improves the robot's absolute positioning accuracy, with particular attention to the error distribution across regions and to the comparison with the traditional method, thereby fully verifying the effectiveness and practical value of the invention.

In summary, the invention proposes and implements an efficient method for the kinematic calibration of industrial robots in large workspaces using an ArUco map and a monocular vision system. By innovatively combining wide deployment of ArUco markers, image stitching, precise image processing and marker recognition, and absolute-position-model construction with kinematic parameter calibration, the invention significantly improves the positioning accuracy and operating efficiency of industrial robots over large working areas. The system is easy to operate, cost-effective, adaptable, and scalable, and can meet the needs of a variety of high-precision manipulation tasks. It can provide strong technical support for many industrial applications, particularly in aerospace, automobile manufacturing, and precision machining, and has broad application prospects and significant economic and social value.

Although the invention is described herein with reference to specific embodiments, it should be understood that these embodiments are merely examples of its principles and applications. Many modifications may therefore be made to the exemplary embodiments, and other arrangements may be devised, without departing from the spirit and scope of the invention as defined by the appended claims. The dependent claims and the features described herein may be combined in ways other than those set out in the original claims, and features described in connection with one embodiment may be used in other described embodiments.

Claims (10)

1. A vision-based kinematic calibration method for an industrial robot in a large workspace, characterized in that the method comprises:
laying out ArUco markers in the workspace according to the industrial robot configuration;
capturing ArUco marker images with a monocular vision system along a preset shooting path, and stitching the captured images into a fully covering ArUco map;
capturing ArUco marker images with the monocular vision system at different robot poses, identifying the position of the robot end-effector against the fully covering ArUco map, and constructing an absolute position model of the robot end-effector;
computing the nominal end-effector pose from the robot joint angles and the nominal values of the kinematic parameters, obtaining the measured end-effector pose from the absolute position model, and forming the error matrix between the nominal and measured poses;
establishing a pose error model and obtaining the robot kinematic parameter errors from the error matrix.

2. The method according to claim 1, characterized in that laying out the ArUco markers in the workspace comprises:
establishing an objective function F(O) for the layout scheme:
F(O) = α·V(T) + β·U(D) + γ·P(R) − δ·C(S)
where O denotes the marker layout scheme; V(T) denotes the visibility parameter of the ArUco markers and α its weight; U(D) denotes the uniformity parameter of the marker placement and β its weight; P(R) denotes the average positioning accuracy of the robot over all positions in its motion range and γ its weight; and C(S) denotes the layout cost and δ its weight;
optimizing the layout scheme O with a genetic algorithm to maximize F(O), iteratively updating the positions, number, and orientations of the markers until the layout scheme that maximizes F(O) is found.

3. The method according to claim 2, characterized in that D denotes the set of marker densities at the test points in the workspace and σ²(D) denotes the variance of D.

4. The method according to claim 2, characterized in that v_i denotes the visibility of the i-th ArUco marker, which depends on the angle and distance between the marker and the camera; A_i denotes the visible area of the i-th ArUco marker in the camera's field of view; A_total denotes the total area of the camera's field of view; and n denotes the total number of ArUco markers.

5. The method according to claim 2, characterized in that m denotes the total number of positioning operations, d_j denotes the positioning error of the j-th operation, and d_max denotes the maximum allowable error distance.

6. The method according to claim 1, characterized in that capturing the ArUco marker images along the preset shooting path and stitching them into a fully covering ArUco map comprises:
capturing the ArUco marker images in sequence along a preset closed-loop shooting path, ensuring that adjacent images contain at least one common ArUco marker;
determining the position of each ArUco marker and minimizing the reprojection error of the marker corner points with the Levenberg-Marquardt (L-M) algorithm to obtain the initial ArUco map;
the reprojection-error and loop-closure objectives being
min Σ_j ‖Ψ(δ, γ_t·γ_i·c_j) − p̂_j‖²  and  min Σ_{k=1}^{K} ‖(x_k ⊕ z_k) − x_{k+1}‖²
where Ψ(δ, γ_t·γ_i·c_j) denotes the projection of a corner point from 3-D coordinates to pixel coordinates; δ denotes the camera intrinsic matrix; γ_t denotes the camera extrinsic parameters; γ_i denotes the transformation from the marker frame to the world frame; c_j denotes the 3-D coordinates of a marker corner; p̂_j denotes the measured pixel coordinates of the marker corner; K denotes the total number of images along the closed-loop path; x_k denotes the pose of the k-th image; z_k denotes the pose transformation from the k-th to the (k+1)-th image; and ⊕ denotes the pose composition operator.

7. The method according to claim 1, characterized in that constructing the absolute position model of the robot end-effector comprises:
establishing the coordinate transformation between the ArUco image frame and the camera frame from the fully covering ArUco map and the ArUco marker images captured by the monocular vision system at different robot poses;
calculating the relative pose between the robot end-effector frame and the camera frame with a hand-eye calibration model, thereby determining the absolute position model of the robot end-effector.

8. The method according to claim 1, characterized in that a pose error model is established and the error matrix is processed by a Python program to obtain the robot kinematic parameter errors.

9. A computer-readable storage device storing a computer program, characterized in that, when executed by a processor, the computer program implements the vision-based kinematic calibration method for an industrial robot in a large workspace according to any one of claims 1 to 8.

10. A vision-based kinematic calibration apparatus for an industrial robot in a large workspace, comprising a storage device, a processor, and a computer program stored in the storage device and executable on the processor, characterized in that the processor executes the computer program to implement the vision-based kinematic calibration method for an industrial robot in a large workspace according to any one of claims 1 to 8.
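As an illustration of the genetic-algorithm search in claim 2, the following sketch evolves a 2-D marker layout against a stand-in objective. The patent does not publish the formulas for V(T), U(D), and P(R), so the four terms below are simplified placeholders and every function here is an assumption, not the patented method.

```python
import random

def fitness(layout, alpha=1.0, beta=1.0, gamma=1.0, delta=0.5):
    """Stand-in for F(O) = a*V(T) + b*U(D) + g*P(R) - d*C(S).
    layout: list of (x, y) marker positions on a unit workspace."""
    n = len(layout)
    # V(T): stand-in visibility -- markers away from the workspace edge score higher.
    V = sum(min(x, 1 - x, y, 1 - y) for x, y in layout) / n
    # U(D): stand-in uniformity -- penalize the variance of the x coordinates.
    mean_x = sum(x for x, _ in layout) / n
    U = 1.0 / (1.0 + sum((x - mean_x) ** 2 for x, _ in layout) / n)
    # P(R): stand-in accuracy -- more markers help, with saturation.
    P = n / (n + 5)
    # C(S): cost proportional to marker count.
    C = 0.05 * n
    return alpha * V + beta * U + gamma * P - delta * C

def evolve(pop_size=20, generations=40, seed=0):
    """Elitist GA: keep the top half, mutate marker positions and counts."""
    rng = random.Random(seed)
    def random_layout():
        return [(rng.random(), rng.random()) for _ in range(rng.randint(3, 12))]
    pop = [random_layout() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        for parent in survivors:
            child = [(min(1, max(0, x + rng.gauss(0, 0.05))),
                      min(1, max(0, y + rng.gauss(0, 0.05)))) for x, y in parent]
            if rng.random() < 0.3 and len(child) > 3:
                child.pop(rng.randrange(len(child)))      # drop a marker
            elif rng.random() < 0.3:
                child.append((rng.random(), rng.random()))  # add a marker
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best) > 1.0)  # → True
```

The real objective would evaluate visibility against the camera model and accuracy against simulated localization, but the loop structure — score, select, mutate positions and marker count, repeat — is the same.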
CN202410623842.6A 2024-05-20 2024-05-20 Vision-based kinematic calibration method for industrial robots in large workspaces Pending CN118322213A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410623842.6A CN118322213A (en) 2024-05-20 2024-05-20 Vision-based kinematic calibration method for industrial robots in large workspaces


Publications (1)

Publication Number Publication Date
CN118322213A (en) 2024-07-12

Family

ID=91772697




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination