
CN115284297A - Workpiece positioning method, robot and robot operation method - Google Patents

Workpiece positioning method, robot and robot operation method

Info

Publication number
CN115284297A
CN115284297A CN202211063956.7A CN202211063956A CN115284297A CN 115284297 A CN115284297 A CN 115284297A CN 202211063956 A CN202211063956 A CN 202211063956A CN 115284297 A CN115284297 A CN 115284297A
Authority
CN
China
Prior art keywords
workpiece
robot
virtual
camera
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211063956.7A
Other languages
Chinese (zh)
Other versions
CN115284297B (en)
Inventor
植美浃
韦卓光
廖伟东
翟军
李俊渊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianhai Ruiji Technology Co ltd
China International Marine Containers Group Co Ltd
CIMC Containers Holding Co Ltd
Original Assignee
Shenzhen Qianhai Ruiji Technology Co ltd
China International Marine Containers Group Co Ltd
CIMC Containers Holding Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhai Ruiji Technology Co ltd, China International Marine Containers Group Co Ltd, CIMC Containers Holding Co Ltd filed Critical Shenzhen Qianhai Ruiji Technology Co ltd
Priority to CN202211063956.7A priority Critical patent/CN115284297B/en
Publication of CN115284297A publication Critical patent/CN115284297A/en
Application granted granted Critical
Publication of CN115284297B publication Critical patent/CN115284297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1628 Programme controls characterised by the control loop
    • B25J9/163 Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/021 Optical sensing devices
    • B25J19/023 Optical sensing devices including video camera means

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

本申请揭示了一种工件定位方法、计算机设备、计算机可读存储介质、机器人及机器人作业方法,该方案构建出虚拟相机和工件模型,采用虚拟相机拍摄虚拟场景中的工件模型,获取工件模型在基坐标系下的虚拟点云,将虚拟点云与由相机装置采集的工件图像获得的实际点云匹配,基于匹配结果确定工件的位置,避免了直接将实际点云与工件模型匹配方式导致的匹配不准确,提高了工件实时定位的准确性,从而提高机器人作业的质量和效率。


The present application discloses a workpiece positioning method, computer equipment, a computer-readable storage medium, a robot and a robot operation method. The solution constructs a virtual camera and a workpiece model, uses the virtual camera to photograph the workpiece model in a virtual scene, and obtains a virtual point cloud of the workpiece model in the base coordinate system. The virtual point cloud is matched against the actual point cloud obtained from the workpiece image collected by the camera device, and the position of the workpiece is determined from the matching result. This avoids the inaccuracy caused by matching the actual point cloud directly against the workpiece model, improves the accuracy of real-time workpiece positioning, and thereby improves the quality and efficiency of robot operations.

Figure 202211063956

Description

Workpiece positioning method, robot and robot operation method

Technical Field

The present application relates to the field of robot technology, and in particular to a workpiece positioning method, computer equipment, a computer-readable storage medium, a robot and a robot operation method.

Background Art

In the field of robot operations, obtaining the workpiece position by scanning the workpiece with a line laser or by manual teaching and programming is time-consuming and inefficient. In addition, the workpiece point cloud collected by line laser scanning contains many noise points and has low accuracy, which lowers the quality and efficiency of the robot operation, while manual teaching and programming introduces uncontrollable human factors that often lead to accidents.

With the rapid development of robotics and 3D vision technology and the upgrading of intelligent manufacturing, robot vision is increasingly applied in industrial production and service scenarios. Guiding robot operations with a vision system is an important means of making robot operations intelligent. Robot operations guided by 3D vision are more efficient and locate the workpiece more accurately.

However, in the prior art the workpiece point cloud data acquired by the camera device is matched directly against the virtual point cloud data corresponding to the workpiece model. The workpiece model contains thickness data, whereas the camera device can only collect workpiece point cloud data within its line of sight. When the workpiece point cloud data is matched directly against the virtual point cloud data of the workpiece model, there is no guarantee that the matched virtual points actually correspond to the workpiece points, so the matching error is large. As shown in Figure 1, mark a represents the contour of the workpiece point cloud and mark b represents the contour of the virtual point cloud obtained directly from the workpiece model; mark a and mark b are not located on the same plane but are separated by a certain distance. That is, the virtual point cloud data does not fully correspond to the workpiece point cloud data, which leads to inaccurate matching.

Summary of the Application

为了解决工件定位不准确的问题,本申请提供了一种工件定位方法、计算机设备、计算机可读存储介质、机器人及机器人作业方法。In order to solve the problem of inaccurate workpiece positioning, the present application provides a workpiece positioning method, computer equipment, computer readable storage medium, robot and robot operation method.

根据本申请实施例的一方面,公开了一种工件定位方法,用于机器人对工件作业的场景,所述机器人上设有相机装置。该工件定位方法包括:According to an aspect of the embodiments of the present application, a workpiece positioning method is disclosed, which is used in a scene where a robot works on a workpiece, and the robot is provided with a camera device. The workpiece positioning method includes:

基于所述相机装置采集的工件图像,获取所述工件在所述机器人的基坐标系下的实际点云;Acquiring an actual point cloud of the workpiece in the base coordinate system of the robot based on the image of the workpiece collected by the camera device;

获取所述工件在所述基坐标系下对应的工件模型;Obtaining a workpiece model corresponding to the workpiece in the base coordinate system;

构建虚拟相机和虚拟场景,将所述工件模型置于所述虚拟场景,并使所述工件模型位于所述虚拟相机的视场角内;Constructing a virtual camera and a virtual scene, placing the workpiece model in the virtual scene, and making the workpiece model within the field of view of the virtual camera;

采用所述虚拟相机拍摄所述工件模型,获取所述工件模型在所述基坐标系下的虚拟点云;photographing the workpiece model by using the virtual camera, and obtaining a virtual point cloud of the workpiece model in the base coordinate system;

匹配所述实际点云和所述虚拟点云,获得所述工件的位置。Matching the actual point cloud and the virtual point cloud to obtain the position of the workpiece.

In an exemplary embodiment, the camera device is arranged at the end of the robot, and obtaining the actual point cloud of the workpiece in the base coordinate system of the robot based on the workpiece image collected by the camera device includes:

获取所述相机装置采集到的工件图像及所述相机装置采集所述工件图像时所述机器人的位姿矩阵;Obtaining the workpiece image collected by the camera device and the pose matrix of the robot when the camera device collects the workpiece image;

根据所述位姿矩阵和所述机器人的手眼矩阵,将所述工件图像中的点云在相机坐标系下的坐标转换到所述基坐标系下,获得所述实际点云;所述手眼矩阵表示相机坐标系相对于所述机器人的工具坐标系的转换关系。According to the pose matrix and the hand-eye matrix of the robot, the coordinates of the point cloud in the workpiece image in the camera coordinate system are transformed into the base coordinate system to obtain the actual point cloud; the hand-eye matrix Indicates the transformation relationship between the camera coordinate system and the tool coordinate system of the robot.

在一种示例性实施例中,所述构建虚拟相机,包括:In an exemplary embodiment, said constructing a virtual camera includes:

根据所述位姿矩阵和所述手眼矩阵确定所述虚拟相机在所述虚拟场景中的位置;determining the position of the virtual camera in the virtual scene according to the pose matrix and the hand-eye matrix;

根据所述相机装置的参数,配置所述虚拟相机的参数,构建出所述虚拟相机;所述参数包括视场角和像素。According to the parameters of the camera device, the parameters of the virtual camera are configured to construct the virtual camera; the parameters include field angle and pixels.

在一种示例性实施例中,所述根据所述相机装置的参数,配置所述虚拟相机的参数,包括:In an exemplary embodiment, configuring parameters of the virtual camera according to parameters of the camera device includes:

配置所述虚拟相机的视场角与所述相机装置的视场角一致;Configuring the angle of view of the virtual camera to be consistent with the angle of view of the camera device;

配置所述虚拟相机的像素与所述相机装置的像素一致。The pixels of the virtual camera are configured to coincide with the pixels of the camera device.

在一种示例性实施例中,所述根据所述位姿矩阵和所述手眼矩阵确定所述虚拟相机在所述虚拟场景中的位置,包括:In an exemplary embodiment, the determining the position of the virtual camera in the virtual scene according to the pose matrix and the hand-eye matrix includes:

根据所述位姿矩阵和所述手眼矩阵确定所述相机装置相对于所述机器人的位置;determining a position of the camera device relative to the robot based on the pose matrix and the hand-eye matrix;

根据所述相机装置相对于所述机器人的位置,设置所述虚拟相机在所述虚拟场景中的位置,使所述虚拟相机的位置与所述相机装置的位置相对应。According to the position of the camera device relative to the robot, the position of the virtual camera in the virtual scene is set, so that the position of the virtual camera corresponds to the position of the camera device.

在一种示例性实施例中,所述获取所述工件在所述基坐标系下的工件模型,包括:In an exemplary embodiment, the acquiring the workpiece model of the workpiece in the base coordinate system includes:

获取所述工件多个角度的图像数据;acquiring image data from multiple angles of the workpiece;

利用三维建模软件,将所述多个角度的图像数据整合成三维数模;Using three-dimensional modeling software, the image data of the multiple angles are integrated into a three-dimensional digital model;

Transforming the three-dimensional digital model into the base coordinate system to obtain the workpiece model.

在一种示例性实施例中,所述匹配所述实际点云和所述虚拟点云,获得所述工件的位置,包括:In an exemplary embodiment, the matching the actual point cloud and the virtual point cloud to obtain the position of the workpiece includes:

Registering the actual point cloud and the virtual point cloud using the iterative closest point (ICP) algorithm to obtain the position of the workpiece.

根据本申请实施例的一方面,公开了一种机器人,包括:According to an aspect of the embodiment of the present application, a robot is disclosed, including:

机器人本体,所述机器人本体具有多个运动轴;a robot body, the robot body has a plurality of motion axes;

相机装置,所述相机装置设置在所述机器人本体;及a camera device, the camera device is arranged on the robot body; and

控制器,所述控制器与所述机器人本体和所述相机装置连接,用于控制所述机器人本体和所述相机装置,并执行如上所述的工件定位方法的步骤。A controller, the controller is connected with the robot body and the camera device, and is used to control the robot body and the camera device, and execute the steps of the workpiece positioning method as described above.

根据本申请实施例的一方面,公开了一种机器人作业方法,所述机器人设有末端工具和相机装置,所述作业方法包括:According to an aspect of the embodiment of the present application, a robot working method is disclosed, the robot is provided with an end tool and a camera device, and the working method includes:

控制所述相机装置采集工件图像;controlling the camera device to collect images of workpieces;

采用如上所述的工件定位方法进行工件定位,获得工件的位置;Using the above-mentioned workpiece positioning method to perform workpiece positioning to obtain the position of the workpiece;

控制所述末端工具移动到所述工件的位置进行作业。Controlling the end tool to move to the position of the workpiece for operation.

在一种示例性实施例中,所述末端工具为焊枪,所述相机装置为三维相机。In an exemplary embodiment, the end tool is a welding gun, and the camera device is a three-dimensional camera.

根据本申请实施例的一方面,公开了一种计算机设备,用于机器人对工件作业的场景,所述机器人上设有相机装置。该计算机设备包括:According to one aspect of the embodiments of the present application, a computer device is disclosed, which is used in a scene where a robot works on a workpiece, and the robot is provided with a camera device. This computer equipment includes:

第一获取模块,用于基于所述相机装置采集的工件图像,获取所述工件在机器人的基坐标系下的实际点云;The first acquisition module is used to acquire the actual point cloud of the workpiece in the base coordinate system of the robot based on the image of the workpiece collected by the camera device;

第一构建模块,用于获取所述工件在所述基坐标系下对应的工件模型;The first building block is used to obtain the workpiece model corresponding to the workpiece in the base coordinate system;

第二构建模块,用于构建虚拟相机和虚拟场景,将所述工件模型置于所述虚拟场景,并使所述工件模型位于所述虚拟相机的视场角内;The second building block is used to construct a virtual camera and a virtual scene, place the workpiece model in the virtual scene, and make the workpiece model within the field of view of the virtual camera;

第二获取模块,用于采用所述虚拟相机拍摄所述工件模型,获取所述工件模型在所述基坐标系下的虚拟点云;A second acquiring module, configured to use the virtual camera to photograph the workpiece model, and acquire a virtual point cloud of the workpiece model in the base coordinate system;

点云匹配模块,用于匹配所述实际点云和所述虚拟点云,获得所述工件的位置。A point cloud matching module, configured to match the actual point cloud and the virtual point cloud to obtain the position of the workpiece.

根据本申请实施例的一方面,公开了一种计算机设备,包括:According to an aspect of the embodiment of the present application, a computer device is disclosed, including:

一个或多个处理器;one or more processors;

存储器,用于存储一个或多个程序,当所述一个或多个程序被所述一个或多个处理器执行时,使得所述计算机设备实现前述工件定位方法。The memory is used to store one or more programs, and when the one or more programs are executed by the one or more processors, the computer device implements the aforementioned workpiece positioning method.

According to an aspect of the embodiments of the present application, a computer-readable storage medium is disclosed. The computer-readable storage medium stores computer-readable instructions which, when executed by a processor of a computer, cause the computer to execute the aforementioned workpiece positioning method.

本申请的实施例提供的技术方案至少包括以下有益效果:The technical solutions provided by the embodiments of the present application at least include the following beneficial effects:

The technical solution provided by this application constructs a virtual camera and a workpiece model, uses the virtual camera to photograph the workpiece model in a virtual scene, obtains a virtual point cloud of the workpiece model in the base coordinate system, and matches the virtual point cloud against the actual point cloud obtained from the workpiece image collected by the camera device. The position of the workpiece is determined from the matching result, which avoids the inaccuracy caused by matching the actual point cloud directly against the workpiece model, improves the accuracy of real-time workpiece positioning, and thereby improves the quality and efficiency of robot operations.

应当理解的是,以上的一般描述和后文的细节描述仅是示例性的,并不能限制本申请。It is to be understood that both the foregoing general description and the following detailed description are exemplary only and are not restrictive of the application.

Brief Description of the Drawings

此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本申请的实施例,并与说明书一起用于解释本申请的原理。The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description serve to explain the principles of the application.

图1是现有技术中工件点云与工件模型匹配时的界面图。Fig. 1 is an interface diagram when a workpiece point cloud is matched with a workpiece model in the prior art.

图2是一种示例性实施例示出的机器人结构图。Fig. 2 is a structural diagram of a robot shown in an exemplary embodiment.

图3是根据一示例性实施例示出的一种工件定位方法的流程图。Fig. 3 is a flow chart of a workpiece positioning method according to an exemplary embodiment.

图4是图3对应实施例中步骤S101的细节流程图。FIG. 4 is a detailed flowchart of step S101 in the embodiment corresponding to FIG. 3 .

图5是图3对应实施例中步骤S102的细节流程图。FIG. 5 is a detailed flowchart of step S102 in the embodiment corresponding to FIG. 3 .

图6是图3对应实施例中步骤S103的部分细节流程图。FIG. 6 is a partially detailed flowchart of step S103 in the embodiment corresponding to FIG. 3 .

图7是图6对应实施例中步骤S1031的细节流程图。FIG. 7 is a detailed flowchart of step S1031 in the embodiment corresponding to FIG. 6 .

图8是根据一示例性实施例虚拟相机和工件模型的示意图。Fig. 8 is a schematic diagram of a virtual camera and a workpiece model according to an exemplary embodiment.

图9是根据一示例性实施例实际点云与虚拟点云匹配的界面图。Fig. 9 is an interface diagram of matching the actual point cloud with the virtual point cloud according to an exemplary embodiment.

图10是根据一示例性实施例示出的一种机器人作业方法的流程图。Fig. 10 is a flow chart of a robot working method according to an exemplary embodiment.

图11是根据一示例性实施例示出的用于实现本申请实施例的计算机设备的框图。Fig. 11 is a block diagram showing a computer device for implementing the embodiment of the present application according to an exemplary embodiment.

图12是根据一示例性实施例示出的用于实现本申请实施例的计算机设备的计算机系统结构框图。Fig. 12 is a structural block diagram of a computer system for implementing a computer device according to an exemplary embodiment of the present application.

附图标记说明如下:The reference signs are explained as follows:

100、机器人;101、机器人本体;102、相机装置;103、末端工具;200、计算机设备;201、第一获取模块;202、第一构建模块;203、第二构建模块;204、第二获取模块;205、点云匹配模块;300、计算机系统;301、CPU;302、ROM;303、存储部分;304、RAM;305、总线;306、I/O接口;307、输入部分;308、输出部分;309、通信部分;310、驱动器;311、可拆卸介质;401、虚拟相机;402、工件模型。100. Robot; 101. Robot body; 102. Camera device; 103. End tool; 200. Computer equipment; 201. First acquisition module; 202. First building module; 203. Second building module; 204. Second acquisition Module; 205, point cloud matching module; 300, computer system; 301, CPU; 302, ROM; 303, storage part; 304, RAM; 305, bus; 306, I/O interface; 307, input part; 308, output part; 309, communication part; 310, driver; 311, removable medium; 401, virtual camera; 402, workpiece model.

Detailed Description of Embodiments

Although the present application may readily take the form of different embodiments, only some specific embodiments are shown in the drawings and described in detail in this specification. It should be understood that this specification is an exemplary illustration of the principles of the application and is not intended to limit the application to what is described herein.

In addition, the terms "comprising", "including", "having" and any variations thereof mentioned in the description of the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or modules is not limited to the listed steps or modules, but optionally also includes other unlisted steps or modules, or other steps or modules inherent to such processes, methods, products or devices.

此外,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个特征。In addition, the terms "first" and "second" are used for descriptive purposes only, and cannot be interpreted as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Thus, features defined as "first" and "second" may explicitly or implicitly include one or more features.

在本申请的描述中,除非另有说明,“多个”的含义是指两个或两个以上。In the description of the present application, unless otherwise specified, the meaning of "plurality" refers to two or more.

需要说明的是,本申请实施例中,“示例性”或者“举例地”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性”或者“举例地”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性”或者“举例地”或者“例如”等词旨在以具体方式呈现相关概念。It should be noted that, in the embodiments of the present application, words such as "exemplary" or "example" or "for example" are used as examples, illustrations or illustrations. Any embodiment or design solution described as "exemplary" or "example" or "for example" in the embodiments of the present application shall not be interpreted as being more preferred or more advantageous than other embodiments or design solutions. Rather, the use of words such as "exemplary" or "for example" or "such as" is intended to present related concepts in a concrete manner.

以下将详细地对示例性实施例进行说明。以下示例性实施例中所描述的实施方式并不代表与本申请相一致的所有实施方式。相反,它们仅是与如申请内容中所详述的、本申请的一些方面相一致的装置和方法的例子。Exemplary embodiments will be described in detail below. The implementations described in the following exemplary embodiments do not represent all implementations consistent with this application. Rather, they are merely examples of apparatuses and methods consistent with aspects of the present application, as detailed in the Summary of the Application.

Fig. 2 shows a structural diagram of a robot according to an exemplary embodiment. As shown in Fig. 2, the robot 100 includes a robot body 101, a camera device 102 and an end tool 103. The camera device 102 is arranged on the robot body 101 and the end tool 103 is arranged at the end of the robot body 101. The robot body 101 moves the camera device 102 and the end tool 103, so that the camera device 102 can capture a close-range image of the workpiece; the position of the workpiece is then obtained by analysing the workpiece image, and the end tool 103 is further moved to the position of the workpiece to perform the operation.

It can be understood that the robot 100 also includes a controller. The controller is connected with the robot body 101, the camera device 102 and the end tool 103 to control the movement of the robot body 101 and to control the camera device 102 to capture workpiece images. The controller also analyses the workpiece image to obtain the position of the workpiece, and then controls the robot body 101 to move the end tool 103 to the position of the workpiece to perform the operation. The controller may be built into the robot body 101 or arranged outside the robot body 101.

末端工具103可以是焊枪,工件即为待焊接的物件,此时机器人100即为焊接机器人。末端工具103可以是胶枪,工件即为待涂胶的物件,此时机器人100即为涂胶机器人。末端工具103还可以是刀具类,工件即为待切割的物件,此时机器人100即为切割机器人。当然,末端工具103还可以是其它可以设置在机器人100的末端并通过机器人100带动进行作业的其它工具,不限于前述焊枪、胶枪、刀具等。The end tool 103 may be a welding torch, the workpiece is the object to be welded, and the robot 100 is a welding robot at this time. The end tool 103 may be a glue gun, the workpiece is the object to be glued, and the robot 100 is a glue-applying robot. The end tool 103 can also be a cutting tool, the workpiece is the object to be cut, and the robot 100 is a cutting robot at this time. Of course, the end tool 103 can also be other tools that can be arranged at the end of the robot 100 and driven by the robot 100 to perform operations, not limited to the aforementioned welding gun, glue gun, knife, etc.

本申请实施例提供一种工件定位方法、计算机设备、计算机可读存储介质、机器人及机器人作业方法,可以实现准确定位工件的实时位置,从而提高机器人作业的质量和效率。The embodiments of the present application provide a workpiece positioning method, computer equipment, computer readable storage medium, robot and robot operation method, which can accurately locate the real-time position of the workpiece, thereby improving the quality and efficiency of the robot operation.

本申请实施例提供的工件定位方法、计算机设备、计算机可读存储介质、机器人及机器人作业方法,具体通过如下实施例进行说明,首先描述本申请实施例中的工件定位方法。The workpiece positioning method, computer equipment, computer-readable storage medium, robot and robot operation method provided in the embodiments of the present application are specifically described through the following embodiments. First, the workpiece positioning method in the embodiments of the present application is described.

本申请实施例首先提供了一种工件定位方法,用于机器人对工件进行三维定位以对工件作业的场景,机器人上设有相机装置。该工件定位方法包括:The embodiment of the present application firstly provides a workpiece positioning method, which is used in a scene where a robot performs three-dimensional positioning on a workpiece to work on the workpiece, and the robot is provided with a camera device. The workpiece positioning method includes:

基于相机装置采集的工件图像,获取工件在机器人的基坐标系下的实际点云;Based on the image of the workpiece collected by the camera device, the actual point cloud of the workpiece in the base coordinate system of the robot is obtained;

获取工件在基坐标系下对应的工件模型;Obtain the workpiece model corresponding to the workpiece in the base coordinate system;

构建虚拟相机和虚拟场景,将工件模型置于虚拟场景,并使工件模型位于虚拟相机的视场角内;Construct a virtual camera and a virtual scene, place the workpiece model in the virtual scene, and make the workpiece model within the field of view of the virtual camera;

采用虚拟相机拍摄工件模型,获取工件模型在基坐标系下的虚拟点云;Use a virtual camera to shoot the workpiece model, and obtain the virtual point cloud of the workpiece model in the base coordinate system;

匹配实际点云和虚拟点云,获得工件的位置。Match the actual point cloud and the virtual point cloud to obtain the position of the workpiece.

The technical solution provided by this application constructs a virtual camera and a workpiece model, uses the virtual camera to photograph the workpiece model in a virtual scene, obtains a virtual point cloud of the workpiece model in the base coordinate system, and matches the virtual point cloud against the actual point cloud obtained from the workpiece image collected by the camera device. The position of the workpiece is determined from the matching result, which avoids the inaccuracy caused by matching the actual point cloud directly against the workpiece model, is robust to noise in the actual workpiece point cloud, and improves the accuracy of real-time workpiece positioning, thereby improving the quality and efficiency of robot operations. At the same time, operating the robot by manual teaching is avoided, which reduces the workload of the operators and further improves the quality and efficiency of the robot operation.

以下结合本说明书实施例中的附图,对本申请的实施方式予以进一步地详尽阐述。The implementation manners of the present application will be further described in detail below in conjunction with the accompanying drawings in the embodiments of the present specification.

参阅图3所示,本申请一示例性实施例提供的工件定位方法,包括以下步骤S101~S105。Referring to FIG. 3 , the workpiece positioning method provided by an exemplary embodiment of the present application includes the following steps S101-S105.

S101,基于相机装置采集的工件图像,获取工件在机器人的基坐标系下的实际点云。S101. Acquire an actual point cloud of the workpiece in the base coordinate system of the robot based on the image of the workpiece collected by the camera device.

在一个示例性实施例中,相机装置安装在机器人的末端,如图2所示。在该一个示例性实施例中,如图4所示,步骤S101包括以下步骤S1011~S1012。In an exemplary embodiment, a camera device is mounted at the end of the robot, as shown in FIG. 2 . In this exemplary embodiment, as shown in FIG. 4 , step S101 includes the following steps S1011-S1012.

S1011,获取相机装置采集到的工件图像及相机装置采集工件图像时机器人的位姿矩阵。S1011, acquiring the image of the workpiece collected by the camera device and the pose matrix of the robot when the image of the workpiece is collected by the camera device.

S1012,根据位姿矩阵和机器人的手眼矩阵,将工件图像中的点云在相机坐标系下的坐标转换到基坐标系下,获得实际点云。S1012. According to the pose matrix and the hand-eye matrix of the robot, the coordinates of the point cloud in the workpiece image in the camera coordinate system are transformed into the base coordinate system to obtain an actual point cloud.

The hand-eye matrix represents the transformation between the camera coordinate system and the tool coordinate system of the robot; specifically, it comprises the translation and rotation from the robot end to the camera device mounted on the end. How to obtain the hand-eye matrix is known in the prior art and is not repeated here.

其中,位姿矩阵表示机器人的工具坐标系相对于机器人的基坐标系的转换关系。工具坐标系用于定义末端工具的中心位置和末端工具的姿态。Among them, the pose matrix represents the transformation relationship between the tool coordinate system of the robot and the base coordinate system of the robot. The tool coordinate system is used to define the center position of the end tool and the pose of the end tool.

详细地,在步骤S1012中,基于映射关系pcd1=transform(pcd0,toolPos*handEye)将工件图像中的点云在相机坐标系下的坐标转换到基坐标系下,获得实际点云。In detail, in step S1012, based on the mapping relationship pcd1=transform(pcd0, toolPos*handEye), the coordinates of the point cloud in the workpiece image in the camera coordinate system are transformed into the base coordinate system to obtain the actual point cloud.

Here, pcd1 represents the actual point cloud in the robot base coordinate system after the transformation, pcd0 represents the actual point cloud in the camera coordinate system before the transformation, transform represents the transformation function, toolPos represents the current pose matrix of the robot (a 4*4 matrix), and handEye represents the matrix describing the position of the camera coordinate system origin relative to the robot end TCP, i.e. the hand-eye matrix.
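As an illustrative sketch of this mapping (not part of the patent text), the snippet below applies pcd1 = transform(pcd0, toolPos*handEye) with NumPy and Open3D; the identity matrices and the random point cloud are placeholders for the calibrated hand-eye matrix, the captured robot pose and the measured points.

```python
import copy
import numpy as np
import open3d as o3d

# Placeholder 4x4 homogeneous matrices: in practice toolPos comes from the robot
# controller at the capture instant and handEye from hand-eye calibration.
toolPos = np.eye(4)   # tool (flange) pose in the robot base frame
handEye = np.eye(4)   # camera pose in the tool frame (hand-eye matrix)

# pcd0: point cloud reconstructed from the workpiece image, in the camera frame
# (random points used here only so the snippet runs).
pcd0 = o3d.geometry.PointCloud()
pcd0.points = o3d.utility.Vector3dVector(np.random.rand(1000, 3))

# pcd1 = transform(pcd0, toolPos * handEye): camera frame -> base frame.
base_T_cam = toolPos @ handEye
pcd1 = copy.deepcopy(pcd0)   # copy, then transform in place
pcd1.transform(base_T_cam)
```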

S102,获取工件在基坐标系下对应的工件模型。S102. Obtain a workpiece model corresponding to the workpiece in the base coordinate system.

在一个示例性实施例中,如图5所示,步骤S102包括以下步骤S1021~S1023。In an exemplary embodiment, as shown in FIG. 5 , step S102 includes the following steps S1021-S1023.

S1021,获取工件多个角度的图像数据。S1021. Acquire image data from multiple angles of the workpiece.

S1022,利用三维建模软件,将多个角度的图像数据整合成三维数模。S1022, using 3D modeling software to integrate image data from multiple angles into a 3D digital model.

可以理解地,三维数模就是用三维建模软件,例如UG、CATIA等,制作的产品模型。三维数模是物体的多边形表示,通常用计算机或者其它视频设备进行显示。Understandably, a 3D digital model is a product model made with 3D modeling software, such as UG, CATIA, etc. A three-dimensional digital model is a polygonal representation of an object, usually displayed by a computer or other video equipment.

S1023,将三维数模从世界坐标系转换到基坐标系下,获得工件模型。S1023, converting the three-dimensional digital model from the world coordinate system to the base coordinate system to obtain the workpiece model.

其中,工件模型可以是例如STL图档等。Wherein, the workpiece model may be, for example, an STL image file or the like.
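A minimal sketch of step S1023, assuming Open3D is used for the mesh handling; the STL file name and the world-to-base transform are placeholders, since the patent does not prescribe a particular library.

```python
import numpy as np
import open3d as o3d

# Placeholder transform from the world frame to the robot base frame,
# e.g. obtained when the workcell is calibrated.
base_T_world = np.eye(4)

# Load the workpiece model built from the multi-angle image data (STL here).
mesh = o3d.io.read_triangle_mesh("workpiece_model.stl")

# Express the model in the robot base coordinate system (step S1023).
mesh.transform(base_T_world)
o3d.io.write_triangle_mesh("workpiece_model_base.stl", mesh)
```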

S103,构建虚拟相机和虚拟场景,将工件模型置于虚拟场景,并使工件模型位于虚拟相机的视场角内。S103, constructing a virtual camera and a virtual scene, placing the workpiece model in the virtual scene, and making the workpiece model within the field of view of the virtual camera.

详细地,可以采用Blender等软件构建虚拟相机。Specifically, software such as Blender can be used to construct a virtual camera.

在一个示例性实施例中,如图6所示,步骤S103中,构建虚拟相机,包括以下步骤S1031~S1032。In an exemplary embodiment, as shown in FIG. 6, in step S103, constructing a virtual camera includes the following steps S1031-S1032.

S1031,根据机器人的位姿矩阵和手眼矩阵确定虚拟相机在虚拟场景中的位置。S1031. Determine the position of the virtual camera in the virtual scene according to the pose matrix and the hand-eye matrix of the robot.

在一个示例性实施例中,如图7所示,步骤S1031包括以下步骤S10311~S10312。In an exemplary embodiment, as shown in FIG. 7, step S1031 includes the following steps S10311-S10312.

S10311,根据位姿矩阵和手眼矩阵确定相机装置相对于机器人的位置。S10311. Determine the position of the camera device relative to the robot according to the pose matrix and the hand-eye matrix.

详细地,在步骤S10311中,基于映射关系camPos=toolPos*handEye确定相机装置相对于机器人的位置。In detail, in step S10311, the position of the camera device relative to the robot is determined based on the mapping relation camPos=toolPos*handEye.

其中,camPos表示相机装置相对于机器人的位置,toolPos表示相机装置采集工件图像时机器人的位姿矩阵,handEye表示手眼矩阵。Among them, camPos represents the position of the camera device relative to the robot, toolPos represents the pose matrix of the robot when the camera device captures the image of the workpiece, and handEye represents the hand-eye matrix.

S10312,根据相机装置相对于机器人的位置,设置虚拟相机在虚拟场景中的位置,使虚拟相机的位置与相机装置的位置相对应。S10312. Set the position of the virtual camera in the virtual scene according to the position of the camera device relative to the robot, so that the position of the virtual camera corresponds to the position of the camera device.

S1032,根据相机装置的参数,配置虚拟相机的参数,构建出虚拟相机。S1032. Configure the parameters of the virtual camera according to the parameters of the camera device to construct a virtual camera.

详细地,前述参数包括视场角和像素。在步骤S1032中,配置虚拟相机的视场角与相机装置的视场角一致,配置虚拟相机的像素与相机装置的像素一致。In detail, the aforementioned parameters include viewing angle and pixels. In step S1032, the angle of view of the virtual camera is configured to be consistent with the angle of view of the camera device, and the pixels of the virtual camera are configured to be consistent with those of the camera device.

By setting the position of the virtual camera in the virtual scene to correspond to the position of the camera device on the robot, and setting the parameters of the virtual camera to be consistent with the parameters of the camera device, it can be ensured to the greatest extent that the virtual point cloud collected by the virtual camera corresponds in position to the actual point cloud collected by the camera device, which improves the accuracy of workpiece positioning.

In an exemplary embodiment, the field of view camView of the camera device is 0.87 and its resolution is x=1280, y=1024; correspondingly, the field of view camView of the virtual camera is 0.87 and the resolution of the virtual camera is x=1280, y=1024.
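A hedged sketch of how the virtual camera of steps S1031-S1032 might be set up in Blender's Python API (bpy), matching the 0.87 field of view and 1280x1024 resolution above. It assumes the field of view is given in radians (the patent does not state the unit), that camPos = toolPos*handEye is already available as a 4x4 matrix, and it omits any extra rotation that may be needed between the real camera's optical frame and Blender's camera frame (which looks along -Z).

```python
import bpy
import numpy as np
from mathutils import Matrix

# Pose of the real camera in the robot base frame, camPos = toolPos * handEye
# (identity used here only as a placeholder).
camPos = np.eye(4)

# Create the virtual camera and link it into the scene.
cam_data = bpy.data.cameras.new("virtual_camera")
cam_data.angle = 0.87  # field of view, matching the real camera (radians assumed)
cam_obj = bpy.data.objects.new("virtual_camera", cam_data)
bpy.context.scene.collection.objects.link(cam_obj)

# Place the virtual camera where the real camera sits relative to the robot.
cam_obj.matrix_world = Matrix(camPos.tolist())

# Match the pixel resolution of the real camera and make it the active camera.
scene = bpy.context.scene
scene.render.resolution_x = 1280
scene.render.resolution_y = 1024
scene.camera = cam_obj
```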

一示例性实施例虚拟相机和工件模型如图8所示,其中,标记401表示虚拟相机,标记402表示工件模型。An exemplary embodiment of a virtual camera and a workpiece model is shown in FIG. 8 , wherein a mark 401 denotes a virtual camera, and a mark 402 denotes a workpiece model.

可以理解地,前述参数还可以包括有相机的其它参数,例如,相机的俯仰角、方位角、翻滚角等。It can be understood that the aforementioned parameters may also include other parameters of the camera, for example, the pitch angle, azimuth angle, and roll angle of the camera.

S104,采用虚拟相机拍摄工件模型,获取工件模型在基坐标系下的虚拟点云。S104, using a virtual camera to photograph the workpiece model, and obtaining a virtual point cloud of the workpiece model in the base coordinate system.

In the embodiment in which the position of the virtual camera is set to correspond to the position of the camera device relative to the robot, and the field of view and other parameters are consistent with those of the camera device, what is obtained in step S104 is the surface point cloud of the workpiece model seen from the same viewpoint as the actual point cloud.
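One way to sketch step S104 (capturing only the visible surface of the model) is to ray-cast the workpiece model from the virtual camera pose. The snippet below uses Open3D's RaycastingScene as a stand-in for the rendered depth image; the mesh path, the eye/center/up vectors and the treatment of the 0.87 field of view as radians are illustrative assumptions, not the patent's prescribed implementation.

```python
import numpy as np
import open3d as o3d

# Workpiece model already expressed in the robot base frame (placeholder path).
mesh = o3d.io.read_triangle_mesh("workpiece_model_base.stl")
scene = o3d.t.geometry.RaycastingScene()
scene.add_triangles(o3d.t.geometry.TriangleMesh.from_legacy(mesh))

# Pinhole rays from the virtual camera pose; eye/center/up would in practice be
# derived from camPos, and fov/resolution match the real camera.
rays = o3d.t.geometry.RaycastingScene.create_rays_pinhole(
    fov_deg=float(np.degrees(0.87)),
    center=[0.0, 0.0, 0.0],      # point the camera looks at
    eye=[0.5, 0.5, 0.5],         # virtual camera position in the base frame
    up=[0.0, 0.0, 1.0],
    width_px=1280, height_px=1024)

ans = scene.cast_rays(rays)
hit = ans['t_hit'].isfinite()

# Keep only the visible surface points: origin + direction * hit distance.
points = rays[hit][:, :3] + rays[hit][:, 3:] * ans['t_hit'][hit].reshape((-1, 1))
virtual_pcd = o3d.geometry.PointCloud(
    o3d.utility.Vector3dVector(points.numpy()))
```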

S105,匹配实际点云和虚拟点云,获得工件的位置。S105, matching the actual point cloud and the virtual point cloud to obtain the position of the workpiece.

In an exemplary embodiment, in step S105 the iterative closest point (ICP) algorithm is used to register the actual point cloud and the virtual point cloud to obtain the position of the workpiece.

It can be understood that the iterative closest point algorithm, also known as ICP point cloud registration, takes two point clouds as input (the source data and the data to be registered) and outputs a transformation (compensation matrix) that makes the source data and the data to be registered overlap as much as possible. The transformation may or may not be rigid; a rigid transformation consists of a rotation and a translation. How to use the ICP algorithm to match the virtual point cloud to the position of the actual point cloud is known in the prior art and is not repeated here.
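A minimal ICP sketch with Open3D, assuming pcd1 (actual point cloud) and virtual_pcd (virtual point cloud) from the earlier sketches are available; the correspondence threshold and the identity initial guess are assumptions.

```python
import numpy as np
import open3d as o3d

threshold = 0.005   # max correspondence distance (assumed units: metres)
init = np.eye(4)    # initial guess: clouds already roughly aligned in the base frame

# Register the virtual point cloud (source) to the actual point cloud (target).
result = o3d.pipelines.registration.registration_icp(
    virtual_pcd, pcd1, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# result.transformation is the 4x4 compensation matrix; applying it to the
# workpiece model pose gives the workpiece position in the robot base frame.
print(result.transformation)
```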

Fig. 9 is an interface diagram of matching the actual point cloud with the virtual point cloud according to an exemplary embodiment. As shown in Fig. 9, the virtual point cloud and the actual point cloud fit on the same surface and match each other completely. Compared with the prior art shown in Fig. 1, the matching accuracy of this application is clearly improved, and the workpiece position obtained by the matching is the real position of the workpiece in the robot base coordinate system.

Referring again to Fig. 2, in order to implement the workpiece positioning method provided by the embodiments of the present application, an embodiment of the present application provides a robot 100. The robot 100 includes a robot body 101, a camera device 102, an end tool 103 and a controller (not shown). The robot body 101 has a plurality of motion axes, and the camera device 102 and the end tool 103 are arranged at the end of the robot body 101. The controller is connected to the robot body 101, the camera device 102 and the end tool 103 and controls them, so that the robot 100 can execute all or part of the steps of the workpiece positioning method shown in any one of Fig. 3 to Fig. 7.

举例地,机器人本体101具有六个运动轴,即是,机器人100为六轴机器人。For example, the robot body 101 has six motion axes, that is, the robot 100 is a six-axis robot.

举例地,末端工具103为焊枪。For example, the end tool 103 is a welding gun.

图10示出了一示例性实施例的一种机器人作业方法的流程图。如图10所示,该机器人作业方法包括以下步骤S201~S203。Fig. 10 shows a flow chart of a robot working method in an exemplary embodiment. As shown in FIG. 10 , the robot working method includes the following steps S201-S203.

S201,控制相机装置采集工件图像。S201. Control the camera device to collect images of workpieces.

S202,采用前述的工件定位方法进行工件定位,获得工件的位置。S202, using the aforementioned workpiece positioning method to perform workpiece positioning to obtain the position of the workpiece.

S203,控制末端工具移动到工件的位置进行作业。S203, controlling the end tool to move to the position of the workpiece for operation.

In an exemplary embodiment, as shown in Fig. 2, the end tool 103 is a welding torch and the workpiece is an object to be welded. In step S202 the position of the weld seam in the workpiece is obtained, and in step S203 the end tool is controlled to move to the position of the weld seam in the workpiece, and the end tool 103 is used to weld the weld seam of the workpiece.

Referring next to Fig. 11, Fig. 11 is a block diagram of a computer device 200 according to an exemplary embodiment. The computer device 200 can be applied to a robot to execute all or part of the steps of the workpiece positioning method shown in any one of Fig. 2 to Fig. 7. As shown in Fig. 11, the computer device 200 includes, but is not limited to: a first acquisition module 201, a first construction module 202, a second construction module 203, a second acquisition module 204 and a point cloud matching module 205.

其中,第一获取模块201用于基于相机装置采集的工件图像,获取工件在机器人的基坐标系下的实际点云。Wherein, the first acquisition module 201 is configured to acquire the actual point cloud of the workpiece in the base coordinate system of the robot based on the image of the workpiece collected by the camera device.

第一构建模块202用于获取工件在基坐标系下对应的工件模型。The first construction module 202 is used to obtain the workpiece model corresponding to the workpiece in the base coordinate system.

第二构建模块203用于构建虚拟相机和虚拟场景,将工件模型置于虚拟场景,并使工件模型位于虚拟相机的视场角内。The second construction module 203 is used to construct a virtual camera and a virtual scene, place the workpiece model in the virtual scene, and make the workpiece model within the field of view of the virtual camera.

第二获取模块204用于采用虚拟相机拍摄工件模型,获取工件模型在基坐标系下的虚拟点云。The second acquiring module 204 is used for taking pictures of the workpiece model with a virtual camera, and acquiring a virtual point cloud of the workpiece model in the base coordinate system.

点云匹配模块205用于匹配实际点云和虚拟点云,获得工件的位置。The point cloud matching module 205 is used to match the actual point cloud and the virtual point cloud to obtain the position of the workpiece.

上述计算机设备200中各个模块的功能和作用的实现过程具体详见上述工件定位方法中对应步骤的实现过程,在此不再赘述。For the implementation process of the functions and effects of each module in the above computer device 200, please refer to the implementation process of the corresponding steps in the above workpiece positioning method for details, and will not be repeated here.

上述计算机设备200可以是任意具有信息处理功能的终端,例如台式电脑、笔记本电脑等。The above-mentioned computer device 200 may be any terminal with an information processing function, such as a desktop computer, a notebook computer, and the like.

图12示意性地示出了用于实现本申请实施例工件定位方法的计算机设备的计算机系统结构框图。Fig. 12 schematically shows a structural block diagram of a computer system of a computer device for implementing the workpiece positioning method according to the embodiment of the present application.

需要说明的是,图12示出的计算机设备的计算机系统300仅是一个示例,不应对本申请实施例的功能和使用范围带来任何限制。It should be noted that the computer system 300 of the computer device shown in FIG. 12 is only an example, and should not limit the functions and scope of use of this embodiment of the present application.

As shown in Fig. 12, the computer system 300 includes a central processing unit 301 (CPU), which can perform various appropriate actions and processes according to a program stored in a read-only memory 302 (ROM) or a program loaded from a storage section 303 into a random access memory 304 (RAM). Various programs and data required for system operation are also stored in the random access memory 304. The central processing unit 301, the read-only memory 302 and the random access memory 304 are connected to each other through a bus 305. An input/output interface 306 (I/O interface) is also connected to the bus 305.

以下部件连接至输入/输出接口306:包括键盘、鼠标等的输入部分307;包括诸如阴极射线管(Cathode Ray Tube,CRT)、液晶显示器(Liquid Crystal Display,LCD)等以及扬声器等的输出部分308;包括硬盘等的存储部分303;以及包括诸如局域网卡、调制解调器等的网络接口卡的通信部分309。通信部分309经由诸如因特网的网络执行通信处理。驱动器310也根据需要连接至输入/输出接口306。可拆卸介质311,诸如磁盘、光盘、磁光盘、半导体存储器等等,根据需要安装在驱动器310上,以便于从其上读出的计算机程序根据需要被安装入存储部分303。The following components are connected to the input/output interface 306: an input section 307 including a keyboard, a mouse, etc.; an output section 308 including a cathode ray tube (Cathode Ray Tube, CRT), a liquid crystal display (Liquid Crystal Display, LCD), etc., and a speaker ; a storage section 303 including a hard disk or the like; and a communication section 309 including a network interface card such as a LAN card, a modem, or the like. The communication section 309 performs communication processing via a network such as the Internet. A driver 310 is also connected to the input/output interface 306 as needed. A removable medium 311 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc. is mounted on the drive 310 as necessary so that a computer program read therefrom is installed into the storage section 303 as necessary.

特别地,根据本申请的实施例,各个方法流程图中所描述的过程可以被实现为计算机软件程序。例如,本申请的实施例包括一种计算机程序产品,其包括承载在计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信部分309从网络上被下载和安装,和/或从可拆卸介质311被安装。在该计算机程序被中央处理器301执行时,执行本申请的系统中限定的各种功能。In particular, according to the embodiments of the present application, the processes described in the respective method flowcharts can be implemented as computer software programs. For example, the embodiments of the present application include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program codes for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via communication portion 309 and/or installed from removable media 311 . When this computer program is executed by the central processing unit 301, various functions defined in the system of the present application are performed.

需要说明的是,本申请实施例所示的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(Erasable Programmable Read Only Memory,EPROM)、闪存、光纤、便携式紧凑磁盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本申请中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本申请中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无线、有线等等,或者上述的任意合适的组合。It should be noted that the computer-readable medium shown in the embodiment of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, electrical connections with one or more wires, portable computer diskettes, hard disks, random access memory (RAM), read-only memory (ROM), erasable Programmable Read-Only Memory (Erasable Programmable Read Only Memory, EPROM), flash memory, optical fiber, portable compact disk read-only memory (Compact Disc Read-Only Memory, CD-ROM), optical storage device, magnetic storage device, or any suitable The combination. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device. In this application, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the above.

本领域技术人员应该可以意识到,在上述一个或多个示例中,本申请所描述的功能可以用硬件、软件、固件或它们的任意组合来实现。当使用软件实现时,可以将这些功能存储在计算机可读存储介质中或者作为计算机可读存储介质上的一个或多个指令或代码进行传输。Those skilled in the art should be aware that, in the above one or more examples, the functions described in this application may be implemented by hardware, software, firmware or any combination thereof. When implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable storage medium.

通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。Through the description of the above embodiments, those skilled in the art can clearly understand that for the convenience and brevity of the description, only the division of the above-mentioned functional modules is used as an example for illustration. In practical applications, the above-mentioned functions can be allocated according to needs It is completed by different functional modules, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.

在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,模块地划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。In the several embodiments provided in this application, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are only illustrative. For example, the division of modules is only a logical function division, and there may be other division methods in actual implementation. For example a plurality of units or components may be combined or may be integrated into another device, or some features may be omitted, or not implemented.

应当理解的是,本申请并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围执行各种修改和改变。本申请的范围仅由所附的权利要求来限制。It should be understood that the present application is not limited to the precise constructions which have been described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A workpiece positioning method, used in a scene in which a robot works on a workpiece, the robot being provided with a camera device, characterized in that the method comprises: obtaining an actual point cloud of the workpiece in the base coordinate system of the robot based on a workpiece image collected by the camera device; obtaining a workpiece model corresponding to the workpiece in the base coordinate system; constructing a virtual camera and a virtual scene, placing the workpiece model in the virtual scene, and locating the workpiece model within the field of view of the virtual camera; photographing the workpiece model with the virtual camera to obtain a virtual point cloud of the workpiece model in the base coordinate system; and matching the actual point cloud and the virtual point cloud to obtain the position of the workpiece.

2. The workpiece positioning method according to claim 1, characterized in that the camera device is arranged at the end of the robot, and obtaining the actual point cloud of the workpiece in the base coordinate system of the robot based on the workpiece image collected by the camera device comprises: obtaining the workpiece image collected by the camera device and the pose matrix of the robot at the time the camera device collected the workpiece image; and transforming, according to the pose matrix and the hand-eye matrix of the robot, the coordinates of the point cloud in the workpiece image from the camera coordinate system to the base coordinate system to obtain the actual point cloud, the hand-eye matrix representing the transformation between the camera coordinate system and the tool coordinate system of the robot.

3. The workpiece positioning method according to claim 2, characterized in that constructing the virtual camera comprises: determining the position of the virtual camera in the virtual scene according to the pose matrix and the hand-eye matrix; and configuring the parameters of the virtual camera according to the parameters of the camera device to construct the virtual camera, the parameters including the field of view and the pixel resolution.

4. The workpiece positioning method according to claim 3, characterized in that configuring the parameters of the virtual camera according to the parameters of the camera device comprises: configuring the field of view of the virtual camera to be consistent with the field of view of the camera device; and configuring the pixel resolution of the virtual camera to be consistent with the pixel resolution of the camera device.

5. The workpiece positioning method according to claim 4, characterized in that determining the position of the virtual camera in the virtual scene according to the pose matrix and the hand-eye matrix comprises: determining the position of the camera device relative to the robot according to the pose matrix and the hand-eye matrix; and setting the position of the virtual camera in the virtual scene according to the position of the camera device relative to the robot, so that the position of the virtual camera corresponds to the position of the camera device.

6. The workpiece positioning method according to claim 1, characterized in that obtaining the workpiece model of the workpiece in the base coordinate system comprises: acquiring image data of the workpiece from multiple angles; integrating the image data from the multiple angles into a three-dimensional digital model using three-dimensional modeling software; and transforming the three-dimensional digital model into the base coordinate system to obtain the workpiece model.

7. The workpiece positioning method according to any one of claims 1 to 6, characterized in that matching the actual point cloud and the virtual point cloud to obtain the position of the workpiece comprises: registering the actual point cloud and the virtual point cloud using the iterative closest point algorithm to obtain the position of the workpiece.

8. A robot, characterized by comprising: a robot body having a plurality of motion axes; a camera device arranged on the robot body; and a controller connected to the robot body and the camera device, for controlling the robot body and the camera device and executing the steps of the workpiece positioning method according to any one of claims 1 to 7.

9. A robot operation method, the robot being provided with an end tool and a camera device, characterized in that the operation method comprises: controlling the camera device to collect a workpiece image; performing workpiece positioning using the workpiece positioning method according to any one of claims 1 to 7 to obtain the position of the workpiece; and controlling the end tool to move to the position of the workpiece to perform the operation.

10. The robot operation method according to claim 9, characterized in that the end tool is a welding gun and the camera device is a three-dimensional camera.
CN202211063956.7A 2022-08-31 2022-08-31 Workpiece positioning method, robot and robot operation method Active CN115284297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211063956.7A CN115284297B (en) 2022-08-31 2022-08-31 Workpiece positioning method, robot and robot operation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211063956.7A CN115284297B (en) 2022-08-31 2022-08-31 Workpiece positioning method, robot and robot operation method

Publications (2)

Publication Number Publication Date
CN115284297A true CN115284297A (en) 2022-11-04
CN115284297B CN115284297B (en) 2023-12-12

Family

ID=83832337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211063956.7A Active CN115284297B (en) 2022-08-31 2022-08-31 Workpiece positioning method, robot and robot operation method

Country Status (1)

Country Link
CN (1) CN115284297B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115870981A (en) * 2022-12-16 2023-03-31 盛景智能科技(嘉兴)有限公司 Data fusion method and device based on network card time
CN116408803A (en) * 2023-04-26 2023-07-11 澳门大学 Training method and device for mechanical arm control model and mechanical arm control system

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070282485A1 (en) * 2006-06-06 2007-12-06 Fanuc Ltd Robot simulation apparatus
US20180249144A1 (en) * 2017-02-28 2018-08-30 Mitsubishi Electric Research Laboratories, Inc. System and Method for Virtually-Augmented Visual Simultaneous Localization and Mapping
CN108724190A (en) * 2018-06-27 2018-11-02 西安交通大学 A kind of industrial robot number twinned system emulation mode and device
CN109986255A (en) * 2017-12-29 2019-07-09 深圳中集智能科技有限公司 Hybrid visual servo parallel robot and operation method
CN110634161A (en) * 2019-08-30 2019-12-31 哈尔滨工业大学(深圳) A fast and high-precision estimation method and device for workpiece pose based on point cloud data
CN110842918A (en) * 2019-10-24 2020-02-28 华中科技大学 Robot mobile processing autonomous locating method based on point cloud servo
CN113222940A (en) * 2021-05-17 2021-08-06 哈尔滨工业大学 Method for automatically grabbing workpiece by robot based on RGB-D image and CAD model
CN114202566A (en) * 2022-02-17 2022-03-18 常州铭赛机器人科技股份有限公司 Glue path guiding and positioning method based on shape coarse registration and ICP point cloud fine registration
WO2022061673A1 (en) * 2020-09-24 2022-03-31 西门子(中国)有限公司 Calibration method and device for robot

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070282485A1 (en) * 2006-06-06 2007-12-06 Fanuc Ltd Robot simulation apparatus
US20180249144A1 (en) * 2017-02-28 2018-08-30 Mitsubishi Electric Research Laboratories, Inc. System and Method for Virtually-Augmented Visual Simultaneous Localization and Mapping
CN109986255A (en) * 2017-12-29 2019-07-09 深圳中集智能科技有限公司 Hybrid visual servo parallel robot and operation method
CN108724190A (en) * 2018-06-27 2018-11-02 西安交通大学 A kind of industrial robot number twinned system emulation mode and device
CN110634161A (en) * 2019-08-30 2019-12-31 哈尔滨工业大学(深圳) A fast and high-precision estimation method and device for workpiece pose based on point cloud data
CN110842918A (en) * 2019-10-24 2020-02-28 华中科技大学 Robot mobile processing autonomous locating method based on point cloud servo
WO2022061673A1 (en) * 2020-09-24 2022-03-31 西门子(中国)有限公司 Calibration method and device for robot
CN113222940A (en) * 2021-05-17 2021-08-06 哈尔滨工业大学 Method for automatically grabbing workpiece by robot based on RGB-D image and CAD model
CN114202566A (en) * 2022-02-17 2022-03-18 常州铭赛机器人科技股份有限公司 Glue path guiding and positioning method based on shape coarse registration and ICP point cloud fine registration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Gang; GUO Xiaokang; LIU Dezheng; WANG Zhongren: "Fast recognition and localization of CAD models in point cloud scenes of random workpieces", Laser & Infrared, no. 12 *

Also Published As

Publication number Publication date
CN115284297B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
CN112122840B (en) Visual positioning welding system and welding method based on robot welding
CN110238831B (en) Robot teaching system and method based on RGB-D image and teaching device
JP4835616B2 (en) Motion teaching system and motion teaching method
CN109032348B (en) Intelligent manufacturing method and equipment based on augmented reality
CN115284297A (en) Workpiece positioning method, robot and robot operation method
CN113715016B (en) Robot grabbing method, system, device and medium based on 3D vision
CN112907682B (en) A hand-eye calibration method, device and related equipment for a five-axis motion platform
CN111801198A (en) Hand-eye calibration method, system and computer storage medium
JP2010042466A (en) Robot teaching system and method for displaying simulation result of operation of robot
CN114800574B (en) A robot automated welding system and method based on dual three-dimensional cameras
CN115213896A (en) Object grasping method, system, device and storage medium based on robotic arm
CN117020413B (en) Polar column coordinate determination method, welding method and welding system
CN114663500A (en) Vision calibration method, computer device and storage medium
CN112958974A (en) Interactive automatic welding system based on three-dimensional vision
CN117541698B (en) Method, device, terminal and medium for adaptively rendering sector diagram to 3D model
CN117140517A (en) High-precision automatic hand-eye calibration method and system for mechanical arm
CN115351482B (en) Control method and device of welding robot, welding robot and storage medium
US20230326098A1 (en) Generating a digital twin representation of an environment or object
CN117006972A (en) Object detection method, device, equipment, three-dimensional scanning system and storage medium
CN114670199A (en) Identification positioning device, system and real-time tracking system
CN115096902A (en) Motion control method and detection system for middle frame defects
CN112732075B (en) A teaching method and system for teaching experiment-oriented virtual-reality fusion machine teacher
CN109685851B (en) Hand-eye calibration method, system, equipment and storage medium of walking robot
CN111275662A (en) Two-dimensional code-based workpiece positioning method, device, equipment and storage medium
CN115294374A (en) Method for butt joint of digital-analog and actual workpiece weld joint

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 201, Building A, No. 1, Qianwan Road, Qianhai-Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong Province, 518000

Applicant after: SHENZHEN QIANHAI RUIJI TECHNOLOGY CO.,LTD.

Applicant after: CIMC Container (Group) Co.,Ltd.

Applicant after: CHINA INTERNATIONAL MARINE CONTAINERS (GROUP) Ltd.

Address before: Room 201, Building A, No. 1, Qianwan Road, Qianhai-Shenzhen-Hong Kong Cooperation Zone, Shenzhen, Guangdong Province, 518000

Applicant before: SHENZHEN QIANHAI RUIJI TECHNOLOGY CO.,LTD.

Applicant before: CIMC CONTAINERS HOLDING Co.,Ltd.

Applicant before: CHINA INTERNATIONAL MARINE CONTAINERS (GROUP) Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant