
CN113524172B - Robot, article grabbing method thereof and computer-readable storage medium - Google Patents


Info

Publication number
CN113524172B
CN113524172B · CN202110587574.3A · CN202110587574A
Authority
CN
China
Prior art keywords
image
target
type
robot
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110587574.3A
Other languages
Chinese (zh)
Other versions
CN113524172A (en)
Inventor
欧勇盛
郭嘉欣
王琳
熊荣
郑雷雷
王志扬
江国来
刘超
刘哲强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Advanced Technology of CAS
Original Assignee
Shenzhen Institute of Advanced Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Advanced Technology of CAS
Priority to CN202110587574.3A
Publication of CN113524172A
Application granted
Publication of CN113524172B
Legal status: Active
Anticipated expiration


Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1679 - Programme controls characterised by the tasks executed
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 - Sensing devices
    • B25J19/021 - Optical sensing devices
    • B25J19/023 - Optical sensing devices including video camera means
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1679 - Programme controls characterised by the tasks executed
    • B25J9/1682 - Dual arm manipulator; Coordination of several manipulators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 - Distances to prototypes
    • G06F18/24137 - Distances to cluster centroïds
    • G06F18/2414 - Smoothing the distance, e.g. radial basis function networks [RBFN]
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 - Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Manipulator (AREA)

Abstract

The application relates to the technical field of robot control and discloses a robot, an article grabbing method thereof, and a computer-readable storage medium. The method comprises the following steps: acquiring a first image of a target area; recognizing the first image to determine a first position and a first type of a first target article in the first image; and, according to the first position and the first type, controlling a mechanical arm of the robot to grab the first target article and place it in a sub-area of the target area corresponding to the first type. In this manner, sorting efficiency can be improved, labor costs reduced, and user experience enhanced.

Description

Robot, article grabbing method thereof, and computer-readable storage medium

Technical Field

The present application relates to the technical field of robot control, and in particular to a robot, an article grabbing method thereof, and a computer-readable storage medium.

Background Art

Robots are now used ever more widely and appear in many aspects of our lives, yet they are still rarely applied to everyday household tasks. Tidying up one's belongings, for example, is a common chore. Under the heavy study and work pressure of modern society, many people are unwilling to, or simply have no time to, organize their belongings, which affects their productivity. Moreover, as the population ages, many elderly people live alone, and organizing their belongings is not easy for them.

Summary of the Invention

The main technical problem addressed by the present application is to provide a robot, an article grabbing method thereof, and a computer-readable storage medium, which can improve sorting efficiency, reduce labor costs, and improve user experience.

To solve the above problem, one technical solution adopted by the present application is to provide an article grabbing method for a robot. The method includes: acquiring a first image of a target area; recognizing the first image to determine a first position and a first type of a first target item in the first image; and, according to the first position and the first type of the first target item, controlling a mechanical arm of the robot to grab the first target item and place it in a sub-area of the target area corresponding to the first type.

Recognizing the first image to determine the first position and the first type of the first target item includes: comparing the first image with a background image to obtain the first position of the first target item in the first image, where the background image is captured when no first target item is present in the target area; and inputting the first image into a pre-trained first learning model to obtain the first type corresponding to the first target item.

Comparing the first image with the background image to obtain the first position of the first target item includes: comparing the two images to obtain first contour information of the first target item in the first image. Inputting the first image into the pre-trained first learning model to obtain the first type includes: inputting an image formed from the first contour information into the first learning model to obtain the first type corresponding to the contour information.

Inputting the image formed from the first contour information into the first learning model to obtain the first type corresponding to the contour information includes: converting the image formed from the first contour information into HSV format to obtain first HSV data; obtaining a first hue histogram from the first HSV data; and inputting the first hue histogram into the first learning model to obtain the first type corresponding to the contour information.

Controlling the mechanical arm of the robot to grab the first target item according to its first position and first type includes: generating a grasping trajectory according to the first position and the first type; and controlling the mechanical arm to move to the first position along the grasping trajectory to grab the first target item.

Controlling the mechanical arm to move to the first position along the grasping trajectory to grab the first target item includes: controlling the mechanical arm to move to the first position along the grasping trajectory; acquiring a second image captured by a camera assembly at the end of the mechanical arm; recognizing the second image to determine a second position and a second type of a second target item in the second image; and, if the first type is the same as the second type, determining that the first target item and the second target item are the same item and controlling the mechanical arm to grab the second target item.

Recognizing the second image to determine the second position and the second type of the second target item includes: recognizing the second target item in the second image to obtain its second position; and inputting the second image into a pre-trained second learning model to obtain the second type of the second target item in the second image.

Before the second image is input into the pre-trained second learning model to obtain the second type, the method includes: obtaining second contour information of the second target item based on the second position; converting the image formed from the second contour information into HSV format to obtain second HSV data; and obtaining a second hue histogram from the second HSV data. Inputting the second image into the pre-trained second learning model to obtain the second type then includes: inputting the second hue histogram into the pre-trained second learning model to obtain the second type corresponding to the second target item.

Generating the grasping trajectory according to the first position and the first type includes: determining the sub-area corresponding to the first target item according to the first type; determining a first distance between the first target item and the sub-area according to the first position; and generating the grasping trajectory based on the first distance.

When there are multiple first target items in the target area, the method further includes: recognizing the first image to determine the first position and the first type of each first target item. Determining the first distance then includes: determining the corresponding sub-area for each first target item according to its first type, and determining the first distance between each first target item and its corresponding sub-area according to its first position. Generating the grasping trajectory based on the first distances includes: sorting the first distances of the first target items of the same first type in ascending order to obtain a grasping sequence for each first type, and generating the grasping trajectory based on the grasping sequence.
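The per-type, nearest-first ordering described above can be sketched as follows. This is an illustrative sketch only; the item fields (`id`, `type`, `dist`) are hypothetical names, not taken from the patent.

```python
def grasp_order(items):
    """Group items by type and sort each group's grasp order by ascending
    distance to the type's sub-area, so nearer items are grabbed first.

    `items` is a list of dicts with hypothetical keys:
      'id'   - identifier of the detected item,
      'type' - its recognized first type,
      'dist' - distance from the item to its type's sub-area.
    Returns {type: [item ids in grasp order]}.
    """
    order = {}
    for it in sorted(items, key=lambda it: it["dist"]):  # ascending distance
        order.setdefault(it["type"], []).append(it["id"])
    return order
```

The resulting per-type sequences would then seed the trajectory planner (the RRT step mentioned below in the claims).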

Generating the grasping trajectory based on the grasping sequence includes: generating the grasping trajectory with an RRT (rapidly-exploring random tree) algorithm, based on the grasping sequence and the first positions of the first target items of the same first type.

The robot includes at least a first mechanical arm and a second mechanical arm. When there are multiple first target items in the target area, the method further includes: determining whether the first target items are of at least two different first types; if so, assigning to each of the first and second mechanical arms a first type to grab and the first target items of that type. Controlling the mechanical arm to grab the first target items and place them in the sub-areas corresponding to their first types then includes: controlling the first mechanical arm and the second mechanical arm to each grab their assigned first target items and place them in the sub-areas corresponding to those items' first types.

To solve the above problem, another technical solution adopted by the present application is to provide a robot, which includes: a robot body; a mechanical arm arranged on the robot body; a camera assembly arranged on the robot body and/or the mechanical arm, configured to capture images of a target area;

a memory arranged in the robot body and configured to store program data; and a processor arranged in the robot body and connected to the mechanical arm, the camera assembly, and the memory, configured to execute the program data so as to implement the method provided by the above technical solution.

To solve the above problem, another technical solution adopted by the present application is to provide a computer-readable storage medium storing program data which, when executed by a processor, implements the method provided by the above technical solution.

The beneficial effects of the present application, in contrast to the prior art, are as follows. The method uses visual images to locate target items in a target area and identify their types, and controls the robot's mechanical arm to grab each target item and place it in the corresponding sub-area, so that target items can be sorted quickly. Compared with manual sorting, this improves sorting efficiency, reduces labor costs, and improves user experience.

Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person of ordinary skill in the art can derive other drawings from them without creative effort. In the drawings:

Fig. 1 is a schematic flowchart of an embodiment of the article grabbing method for a robot provided by the present application;

Fig. 2 is a schematic diagram of an embodiment of the first image of the target area provided by the present application;

Fig. 3 is a schematic diagram of another embodiment of the first image of the target area provided by the present application;

Fig. 4 is a schematic flowchart of another embodiment of the article grabbing method for a robot provided by the present application;

Fig. 5 is a schematic flowchart of an embodiment of step 43 provided by the present application;

Fig. 6 is a schematic flowchart of another embodiment of the article grabbing method for a robot provided by the present application;

Fig. 7 is a schematic flowchart of an embodiment of step 63 provided by the present application;

Fig. 8 is a schematic flowchart of an embodiment of step 64 provided by the present application;

Fig. 9 is a schematic flowchart of an embodiment of step 643 provided by the present application;

Fig. 10 is a schematic flowchart of an embodiment preceding step 6432 provided by the present application;

Fig. 11 is a schematic flowchart of another embodiment of the article grabbing method for a robot provided by the present application;

Fig. 12 is a schematic diagram of another embodiment of the first image of the target area provided by the present application;

Fig. 13 is a schematic flowchart of another embodiment of the article grabbing method for a robot provided by the present application;

Fig. 14 is a schematic structural diagram of an embodiment of the robot provided by the present application;

Fig. 15 is a schematic structural diagram of an embodiment of the computer-readable storage medium provided by the present application.

Detailed Description of Embodiments

The technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. It should be understood that the specific embodiments described here serve only to explain the present application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present application rather than the complete structures. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.

The terms "first", "second", and so on in the present application are used to distinguish different objects, not to describe a specific order. Furthermore, the terms "include" and "have", and any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device comprising a series of steps or units is not limited to the listed steps or units, but may optionally include steps or units that are not listed, or other steps or units inherent to the process, method, product, or device.

Reference to an "embodiment" herein means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. Occurrences of this phrase at various places in the specification do not necessarily all refer to the same embodiment, nor to separate or alternative embodiments mutually exclusive of other embodiments. A person skilled in the art understands, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.

Referring to Fig. 1, Fig. 1 is a schematic flowchart of an embodiment of the article grabbing method for a robot provided by the present application. The method includes:

Step 11: Acquire a first image of the target area.

In some embodiments, the first image of the target area may be captured by an image acquisition assembly. For example, the image acquisition assembly may be mounted on the head of the robot, where it can capture an image of the entire target area. Alternatively, it may be mounted at the end of the robot's mechanical arm, and the arm is controlled so that the assembly at its end can capture an image of the entire target area. The image acquisition assembly may be a camera capable of capturing depth images.

In some embodiments, the target area may be a desktop, a clearing surface, a sorting surface, or the like.

Step 12: Recognize the first image to determine a first position and a first type of a first target item in the first image.

In some embodiments, a target detection algorithm may be used to recognize the first image and thereby determine the first position and the first type of the first target item.

It can be understood that if there are multiple first target items in the first image, each of them is recognized, yielding a first position and a first type for each first target item.

In other embodiments, an image segmentation algorithm may first be used to recognize the first image and determine the first position of the first target item; the image is segmented based on the first position, and the segmented image is then input into a learning model, which identifies the first type.

Step 13: According to the first position and the first type of the first target item, control the mechanical arm of the robot to grab the first target item and place it in the sub-area of the target area corresponding to the first type.

In some embodiments, a grasping trajectory may be set for the robot's mechanical arm according to the first position and the first type of the first target item, so that the arm grabs the first target item at the first position along the trajectory.

Here, the first position determines the start position of the arm's grasp, and the first type determines its end position.

One application scenario is described with reference to Fig. 2:

Fig. 2 is a schematic diagram of an embodiment of the first image of the target area. In Fig. 2, the first position and the first type of a first target item c have been identified through the above steps. The target area includes sub-area A and sub-area B. If the first type of item c corresponds to sub-area B, the mechanical arm is controlled to grab item c and place it in sub-area B.

Another application scenario is described with reference to Fig. 3:

Fig. 3 is a schematic diagram of another embodiment of the first image of the target area. In Fig. 3, the first positions and first types of first target items c and d have been identified through the above steps. The target area includes sub-area A and sub-area B. The first type of item c corresponds to sub-area B, and the first type of item d corresponds to sub-area A; the mechanical arm is therefore controlled to first grab item c and place it in sub-area B, and then to grab item d and place it in sub-area A.

In this embodiment, visual images are used to locate target items in the target area and identify their types, and the robot's mechanical arm is controlled to grab each target item and place it in the corresponding sub-area, so that target items can be sorted quickly. Compared with manual sorting, this improves sorting efficiency, reduces labor costs, and improves user experience.

Referring to Fig. 4, Fig. 4 is a schematic flowchart of another embodiment of the article grabbing method for a robot provided by the present application. The method includes:

Step 41: Acquire a first image of the target area.

Step 42: Compare the first image with a background image to obtain a first position of the first target item in the first image.

Here, the background image is captured when no first target item is present in the target area.

In some embodiments, the robot's head camera captures an image of the target area to obtain the first image. Since the target area and the head camera are both fixed, an image of the target area without any first target item can be captured in advance as the background image; background subtraction then isolates the first target item in the target area and hence yields its first position in the first image.

In other embodiments, the first image is compared with the background image to obtain the first position of the first target item in the first image; the first contour information of the first target item is then extracted according to the first position, and the center point of the contour is taken as the center point of the object. A rectangular box is used to enclose the first target item, so that the region inside the box represents the item. The row and column of the box's top-left pixel and the box's width and height are recorded, i.e., the box is expressed as (x, y, w, h), where x and y are the coordinates of the top-left pixel, w is the width of the box, and h is its height. The rectangular box thus represents the first contour information of the first target item.
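A minimal single-object sketch of this background-subtraction step, using NumPy only (the function name and threshold are illustrative choices, not from the patent):

```python
import numpy as np

def bounding_box(frame, background, thresh=30):
    """Locate one newly placed object by background subtraction.

    `frame` and `background` are H x W x 3 images of the fixed target area;
    the background shot is taken while the area holds no target item.
    Returns the enclosing box as (x, y, w, h), where (x, y) is the top-left
    pixel and w, h are the box's width and height, or None if nothing
    changed by more than `thresh` in any channel.
    """
    # Per-pixel maximum absolute channel difference against the background.
    diff = np.abs(frame.astype(int) - background.astype(int)).max(axis=-1)
    ys, xs = np.nonzero(diff > thresh)
    if xs.size == 0:
        return None
    x, y = int(xs.min()), int(ys.min())
    return (x, y, int(xs.max()) - x + 1, int(ys.max()) - y + 1)
```

A production version would instead extract contours per object (e.g. with a vision library) so that several items each get their own box.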

Step 43: Input the first image into the pre-trained first learning model to obtain the first type corresponding to the first target item.

The first learning model may be obtained by training a deep learning model, such as a convolutional neural network, or another classifier such as a support vector machine.

In some embodiments, an image of the first target item may be obtained from the first contour information, and the image formed from the first contour information is then input into the first learning model to obtain the first type corresponding to the contour information. In this way, type recognition is performed on an image already segmented from the contour information, which reduces the computational load on the learning model, since the model no longer needs to perform image segmentation itself.
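As an illustration only, here is a toy nearest-centroid stand-in for the first learning model operating on hue-histogram feature vectors. The patent's model would be a trained classifier such as an SVM or CNN; the class name and its interface are hypothetical.

```python
import numpy as np

class HueHistogramClassifier:
    """Toy nearest-centroid stand-in for the 'first learning model'.

    Classifies an object's normalized hue histogram by comparing it to the
    mean histogram of each class; this only illustrates how a hue-histogram
    feature vector can drive type recognition.
    """

    def fit(self, histograms, labels):
        self.classes_ = sorted(set(labels))
        # One centroid per class: the mean of that class's training histograms.
        self.centroids_ = {
            c: np.mean([h for h, l in zip(histograms, labels) if l == c], axis=0)
            for c in self.classes_
        }
        return self

    def predict(self, hist):
        # Nearest centroid by Euclidean distance in histogram space.
        return min(self.classes_, key=lambda c: np.linalg.norm(hist - self.centroids_[c]))
```

With an SVM in place of the centroid rule, `fit`/`predict` would keep the same shape while learning a decision surface instead of class means.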

在其他实施例中,参阅图5,步骤43还可以是以下流程:In other embodiments, referring to FIG. 5, step 43 may also be the following process:

步骤431:将基于第一轮廓信息形成的图像进行HSV格式转换,得到第一HSV数据。Step 431: Convert the image formed based on the first contour information to HSV format to obtain first HSV data.

HSV是一种颜色模型,H表示Hue(色调、色相),S表示Saturation(饱和度、色彩纯净度),V表示Value(明度)。HSV is a color model, H means Hue (hue, hue), S means Saturation (saturation, color purity), V means Value (brightness).

可以理解,利用摄像头组件采集的第一图像是基于RGB形成的。因此在处理RGB图像时需要进行三个通道的处理,此时会增加计算量。It can be understood that the first image captured by the camera component is formed based on RGB. Therefore, processing of three channels is required when processing an RGB image, which will increase the amount of calculation.

因此,本申请将RGB图像进行HSV格式转换,以使图像更接近人们对彩色的感知经验。非常直观地表达颜色的色调、鲜艳程度和明暗程度,方便进行颜色的对比。在HSV颜色空间下,比RGB更容易跟踪某种颜色的物体。Therefore, this application converts the RGB image into HSV format to make the image closer to people's perceptual experience of color. It is very intuitive to express the hue, vividness and lightness of the color, which is convenient for color comparison. In the HSV color space, it is easier to track objects of a certain color than RGB.

The RGB-to-HSV conversion is as follows:

V is the maximum of R, G, and B.

S is the difference between the maximum and minimum of R, G, and B, divided by the maximum.

H satisfies the following conditions: if R is the maximum, H = (G - B)/(max - min) × 60;

if G is the maximum, H = 120 + (B - R)/(max - min) × 60;

if B is the maximum, H = 240 + (R - G)/(max - min) × 60.

If H < 0, then H = H + 360.
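The conversion rules above can be collected into a small per-pixel function (a sketch assuming 8-bit channel values; in practice a library routine such as OpenCV's color conversion would be used instead):

```python
def rgb_to_hsv(r, g, b):
    """Convert one RGB pixel to HSV using the formulas above:
    V = max(R, G, B); S = (max - min) / max; H depends on which
    channel is the maximum and is wrapped into [0, 360)."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0                                 # hue undefined for grey pixels
    elif mx == r:
        h = (g - b) / (mx - mn) * 60
    elif mx == g:
        h = 120 + (b - r) / (mx - mn) * 60
    else:
        h = 240 + (r - g) / (mx - mn) * 60
    if h < 0:
        h += 360
    return h, s, v
```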

Step 432: Obtain a first hue histogram from the first HSV data.

In this embodiment, since type recognition is required, the H (hue) channel can be extracted from the three HSV channels, and a hue histogram is formed from the hue-channel data to represent the first target item.

When computing the hue histogram, the range of H values is evenly divided into a number of small intervals, each of which becomes one bin of the histogram. The histogram is then obtained by counting the number of pixels whose H value falls into each bin. In this embodiment of the application, H may be evenly divided into 32 bins of width 8, corresponding to a Hue range of 0 to 255.
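The binning just described can be sketched as follows (an illustrative helper; the bin layout follows the 32-bins-of-width-8 scheme in the text):

```python
def hue_histogram(h_values, bins=32, bin_width=8):
    """Count how many pixels fall into each of 32 hue bins of width 8,
    covering a Hue range of 0-255."""
    hist = [0] * bins
    for h in h_values:
        hist[min(int(h) // bin_width, bins - 1)] += 1
    return hist
```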

Step 433: Input the first hue histogram into the first learning model to obtain the first type corresponding to the contour information.

It should be understood that when the hue histogram is used to recognize the type of the first target item, the first learning model is likewise trained with hue histograms as training samples.

Determining the type from the single channel of the hue histogram reduces the computational load of the learning model and improves the efficiency of type determination.

Step 44: According to the first position and the first type of the first target item, control the robot arm of the robot to grasp the first target item and place it in the sub-area of the target area corresponding to the first type of the first target item.

In this embodiment, visual images are used to locate the target item in the target area, a learning model is used to recognize the item's type, and the robot arm is controlled to grasp the target item and place it in the corresponding sub-area, so that target items can be sorted and placed quickly. Compared with manual sorting, this improves sorting efficiency, reduces labor cost, and improves the user experience.

Referring to FIG. 6, FIG. 6 is a schematic flowchart of another embodiment of the item-grasping method for a robot provided by the present application. The method includes:

Step 61: Acquire a first image of the target area.

Step 62: Recognize the first image to determine the first position and first type of the first target item in the first image.

Steps 61-62 use the same or similar technical solutions as the above embodiments and are not repeated here.

Step 63: Generate a grasping trajectory according to the first position and the first type.

In this embodiment, the center point of the first target item can be determined from the first position, and the sub-area can be determined from the first type; the grasping trajectory can then be determined from the position information of the center point and of the sub-area.

In some embodiments, referring to FIG. 7, step 63 may proceed as follows:

Step 631: Determine the sub-area corresponding to the first target item according to the first type.

This is illustrated with reference to FIG. 2:

After the first type of the first target item c is determined, the corresponding sub-area B can be determined from the correspondence between types and sub-areas.

Step 632: Determine a first distance between the first target item and the sub-area according to the first position.

The center point of the first target item is determined from the first position, and the position of the center point is used to determine the first distance between the first target item and the sub-area.

Step 633: Generate the grasping trajectory based on the first distance.

When there is only one first target item in the target area, the grasping trajectory can be set as a straight-line trajectory to improve grasping efficiency.

When there are multiple first target items in the target area, the grasping trajectory can be set according to the first position of each first target item.

Step 64: Control the robot arm to move to the first position along the grasping trajectory, so as to grasp the first target item.

In some embodiments, if the robot is a rigid robot, its arm has high positioning accuracy, and the first target item can be grasped directly according to its first position.

In some embodiments, if the robot is a non-rigid robot, the positioning accuracy of its arm is poorer, and the item can be grasped with reference to FIG. 8. Specifically, step 64 may proceed as follows:

Step 641: Control the robot arm to move to the first position along the grasping trajectory.

The robot used to execute the process of FIG. 8 includes multiple camera assemblies: one camera assembly is arranged on the robot body to capture the first image of the target area, and another camera assembly is arranged at the end of the robot arm to capture a second image of the item while the arm is being controlled to grasp it.

Because the robot is non-rigid, when its arm moves toward the first position it does not actually reach the first position accurately; there is a considerable error relative to the first position, and the arm cannot grasp the item reliably. At this point the camera assembly at the end of the arm can be used to capture an image of the target area. Since the arm is near the first target item, that image can capture the first target item, and the robot can execute step 642 to process the image captured by the camera assembly at the end of the arm.

Step 642: Acquire the second image captured by the camera assembly at the end of the robot arm.

Step 643: Recognize the second image to determine the second position and second type of the second target item in the second image.

In some embodiments, step 643 may be processed in the same way as the recognition of the first image in any of the above embodiments.

In some embodiments, referring to FIG. 9, step 643 may proceed as follows:

Step 6431: Identify the second target item in the second image to obtain the second position of the second target item in the second image.

Step 6432: Input the second image into a pre-trained second learning model to obtain the second type of the second target item in the second image.

In some embodiments, referring to FIG. 10, step 6432 may be preceded by the following process:

Step 101: Obtain second contour information of the second target item based on the second position.

Step 102: Convert the image formed from the second contour information into HSV format to obtain second HSV data.

Step 103: Obtain a second hue histogram from the second HSV data.

After the second hue histogram is obtained, step 6432 may consist of inputting the second hue histogram into the pre-trained second learning model to obtain the second type corresponding to the second target item in the second image.

It should be understood that, because of the arm's limited accuracy, steps 642-643 form a loop: if the first second image fails to satisfy the subsequent conditions, the arm is controlled to continue moving, a further second image is captured, and steps 642-643 are executed again.

The second learning model here and the first learning model in the above embodiments are trained with different training samples: the first learning model is trained with first images captured by the first camera, or with first hue histograms; the second learning model is trained with second images captured by the second camera, or with second hue histograms.

Step 644: If the first type is the same as the second type, determine that the first target item and the second target item are the same item, and control the robot arm to grasp the second target item.

In some embodiments, when the first type is determined to be the same as the second type, the CamShift algorithm can be used during the arm's motion to converge the region to be tracked, where the region to be tracked represents the size of the second target item in the second image. It should be understood that when the tracked region satisfies the condition, the arm has moved to the optimal grasping position and the grasp can be executed.

In this way, the position of the first target item can be tracked in real time, and the arm always moves toward the center point of the first target item. When the detected size of the first target item in the second image reaches a preset value, the grasping operation can be performed. Combining visual images in this way solves the problem of the low repeat-positioning accuracy of non-rigid robots and allows the target item to be grasped accurately with the help of visual images.
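The convergence loop described above can be sketched abstractly (the two callbacks are hypothetical stand-ins for CamShift tracking and incremental arm control; they are not part of the original method, and a real system would replace them with the tracker and motion controller):

```python
def servo_until_graspable(tracked_size, move_toward_center, preset_size,
                          max_steps=1000):
    """Keep moving the arm toward the target's center until the tracked
    region in the wrist camera reaches the preset size, then report how
    many motion steps were needed."""
    steps = 0
    while tracked_size() < preset_size:
        if steps >= max_steps:              # safety bound for a real system
            raise RuntimeError("target never reached grasping size")
        move_toward_center()                # one incremental arm motion
        steps += 1
    return steps                            # arm is now at the grasping pose
```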

Step 65: Place the item in the sub-area of the target area corresponding to the first type of the first target item.

In this embodiment, visual images are used to locate the target item in the target area, a learning model is used to recognize the item's type, and image recognition is applied a second time for non-rigid robots, enabling a non-rigid robot to grasp the target item precisely and to sort and place target items quickly. Compared with manual sorting, this improves sorting efficiency, reduces labor cost, and improves the user experience.

Referring to FIG. 11, FIG. 11 is a schematic flowchart of another embodiment of the item-grasping method for a robot provided by the present application. The method includes:

Step 111: Acquire a first image of the target area.

This embodiment applies to a scenario in which there are multiple first target items in the target area.

Step 112: Recognize the first image to determine the first position and first type of each first target item in the first image.

In this embodiment, step 112 can recognize multiple first target items in the same way that a single first target item is recognized in any of the above embodiments, obtaining the first position and first type of each first target item separately.

Step 113: Determine the corresponding sub-area according to the first type of each first target item.

Step 114: Determine the first distance between each first target item and its corresponding sub-area according to its first position.

Step 115: Sort the first distances of the first target items of the same first type in ascending order, so as to obtain the grasping order corresponding to each first type.

Steps 113-115 are explained with reference to FIG. 12:

In FIG. 12 there are sub-areas A, B, and C. The detected first target items include d, e, f, g, h, and i. By recognizing the first types of d, e, f, g, h, and i, it is determined that d and g correspond to sub-area B, e and f to sub-area A, and h and i to sub-area C. A grasping order can then be generated for d and g, another for e and f, and another for h and i.

Among d and g, the distance from d to sub-area B is smaller than the distance from g to sub-area B, so the items can be sorted by distance in ascending order and grasped accordingly: first d, then g. In other embodiments, the items may instead be grasped in descending order, first g and then d. Such grasping schemes reduce the movement of the robot arm, reduce wear on the arm, and extend its service life.

The grasping orders for e and f and for h and i can be obtained in the same way and are not repeated here.
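The grouping and sorting of steps 113-115 can be sketched as follows (item names and coordinates are illustrative, echoing the d/g and e/f example of FIG. 12):

```python
from collections import defaultdict
from math import dist

def grasp_order(items, subarea_center):
    """Group detected items by first type and sort each group by distance
    to that type's sub-area, nearest first (ascending, as in step 115).

    items: iterable of (name, item_type, (x, y)) tuples.
    subarea_center: maps an item type to its sub-area's center point.
    """
    groups = defaultdict(list)
    for name, item_type, pos in items:
        groups[item_type].append((dist(pos, subarea_center[item_type]), name))
    return {t: [name for _, name in sorted(g)] for t, g in groups.items()}
```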

Step 116: Generate a grasping trajectory based on the grasping order.

In step 116, the grasping trajectory may be generated with the RRT algorithm, based on the grasping order and the first positions of the first target items of the same first type.

RRT (Rapidly-exploring Random Trees) is a basic global path-search algorithm that is simple and fast.

The first image may be a depth image, from which RGB-D information of the target area can be collected. With this information, the spatial structure of the target area can be reconstructed and represented as a point cloud. The point cloud allows the configuration space of the target area to be built, and combined with the RRT algorithm, an obstacle-avoiding path can be planned. That path is the grasping trajectory described above.
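A minimal 2-D RRT illustrates the planner mentioned above (a sketch only: the real planner works against the point-cloud configuration space, and the workspace bounds, step size, and goal bias here are arbitrary choices, with `is_free` standing in for the point-cloud collision check):

```python
import math
import random

def rrt_plan(start, goal, is_free, step=0.5, goal_tol=0.5,
             max_iter=2000, seed=0):
    """Grow a tree from start: sample a point (with a 10% bias toward the
    goal), extend the nearest node one step toward it, keep the new node
    if collision-free, and stop once within goal_tol of the goal."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        if rng.random() < 0.1:
            sample = goal                                  # goal bias
        else:
            sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        d = math.dist(nodes[i], sample)
        if d == 0:
            continue
        new = (nodes[i][0] + (sample[0] - nodes[i][0]) / d * step,
               nodes[i][1] + (sample[1] - nodes[i][1]) / d * step)
        if not is_free(new):                               # collision check
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:               # reached goal
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]                              # start -> goal
    return None                                            # budget exhausted
```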

Step 117: Control the robot arm to move to the first position along the grasping trajectory, so as to grasp the first target item and place it in the sub-area of the target area corresponding to the first type of the first target item.

In step 117, if the robot is a non-rigid robot, the item can be grasped in the manner described above for non-rigid robots.

In this embodiment, visual images are used to locate multiple target items in the target area, a learning model is used to recognize their types, and the grasping trajectory is planned rationally, so that target items can be sorted and placed quickly. Compared with manual sorting, this improves sorting efficiency, reduces labor cost, and improves the user experience.

Referring to FIG. 13, FIG. 13 is a schematic flowchart of another embodiment of the item-grasping method for a robot provided by the present application. The method includes:

Step 131: Acquire a first image of the target area.

Step 132: Recognize the first image to determine the first position and first type of each first target item in the first image.

Steps 131-132 use the same or similar technical solutions as the above embodiments and are not repeated here.

In this embodiment, the robot includes at least a first robot arm and a second robot arm.

Step 133: When there are multiple first target items in the target area, determine whether the first target items have at least two different first types.

When it is determined that the types of the multiple first target items are not all the same, step 134 can be executed.

If the multiple first target items are all of the same first type, only one of the first and second robot arms need be controlled to grasp the items.

Step 134: For each of the first and second robot arms, determine the first type to grasp and the first target items of that type.

This is explained with reference to FIG. 12, in which d and g correspond to sub-area B, e and f to sub-area A, and h and i to sub-area C. If the first robot arm is close to sub-area B and the second robot arm is close to sub-areas A and C, it can be determined that the first arm grasps d and g, and the second arm grasps e, f, h, and i.
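The arm-assignment rule of step 134 can be sketched as follows (arm names and the sets of nearby sub-areas are illustrative):

```python
def assign_items_to_arms(item_subarea, arm_subareas):
    """Give each item to the arm whose nearby sub-areas include the
    item's destination sub-area.

    item_subarea: maps item name -> destination sub-area.
    arm_subareas: maps arm name -> set of sub-areas close to that arm.
    """
    plan = {arm: [] for arm in arm_subareas}
    for item, subarea in item_subarea.items():
        for arm, nearby in arm_subareas.items():
            if subarea in nearby:
                plan[arm].append(item)
                break
    return plan
```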

Step 135: Control the first and second robot arms to grasp their corresponding first target items and place them in the sub-areas corresponding to the first types of those items.

In step 135, the grasping trajectories of the first and second robot arms can be generated in the manner of any of the above embodiments. If the robot is a non-rigid robot, items can be grasped in the manner described above for non-rigid robots, i.e., a camera is arranged at the end of each of the first and second robot arms.

In other embodiments, a first image of the target area is acquired and recognized to determine the first position and first type of each first target item in the first image. If a first target item already lying in a sub-area does not correspond to that sub-area, it also needs to be grasped and placed in the correct sub-area.

When determining the distance for each first target object, if any distance value is smaller than a set threshold, collision detection is performed. If a collision would occur, the robot arm is controlled to first move the item to a temporary, item-free position in the target area. If no collision would occur, the items are grasped in sequence according to the sorted order.

In one application scenario, the target area is a tabletop, such as the desk of an elderly person, a child, or an office worker, and the robot can be controlled to tidy it. Take the desk of an elderly person as an example: many elderly people live alone, and tidying the desk is not easy for them. The desk can be divided into sub-areas such as a food area, a daily-necessities area, and a medicine area. The robot is placed at one side of the desk, and the camera assembly on the robot body captures a panoramic image of the desktop. Using the method of any of the above embodiments, the items on the desk are recognized: for example, two items are identified as food, two as daily necessities, and two as medicine, and the positions of these items and of the food, daily-necessities, and medicine areas are determined. Grasping trajectories are then generated for these items, and the robot arm is controlled to grasp the items along the trajectories and place them in the corresponding areas.

In some embodiments, when applied to everyday scenarios such as a tabletop, a non-rigid robot is used. Compared with a rigid robot, the joints of a non-rigid robot's arm are made of elastic material, giving higher safety whether the robot is running or not.

In this embodiment, a dual-arm robot can grasp and sort items of different types simultaneously and has a larger grasping range, which improves sorting efficiency, reduces labor cost, and improves the user experience.

Referring to FIG. 14, the robot 140 includes a robot body 141, a robot arm 142, a camera assembly 143, a memory 144, and a processor 145. The robot arm 142 is arranged on the robot body 141; the camera assembly 143 is arranged on the robot body 141 and/or the robot arm 142 and captures images of the target area; the memory 144 is arranged on the robot body 141 and stores program data; the processor 145 is arranged on the robot body 141, is connected to the robot arm 142, the camera assembly 143, and the memory 144, and executes the program data to implement the following method:

Acquire a first image of the target area; recognize the first image to determine the first position and first type of the first target item in the first image; and, according to the first position and first type of the first target item, control the robot arm to grasp the first target item and place it in the sub-area of the target area corresponding to the first type of the first target item.

It should be understood that the processor 145 in this embodiment can also implement the method of any of the above embodiments, which is not repeated here.

In some embodiments, the camera assembly 143 includes a first camera (not shown) and a second camera (not shown). The first camera is arranged on the robot body 141 and captures the first image of the target area, i.e., a global image; the second camera is arranged at the end of the robot arm 142 and captures image data of the first target item. It should be understood that the second camera captures a moving target more accurately than the first camera. The number of second cameras corresponds to the number of robot arms 142, i.e., one second camera is arranged at the end of each robot arm 142.

In some embodiments, the first camera is mounted on the head of the robot body 141 to observe the overall situation of the target area and may be a depth camera; the second camera is mounted at the end of the robot arm and may be an RGB camera. The system must first be calibrated. The calibration of the second camera can be obtained directly from its product documentation, so only the first camera needs to be calibrated. This calibration is of the eye-to-hand type in hand-eye calibration, i.e., the first camera is fixed relative to the robot body 141 and separate from the arm. In the eye-to-hand problem, the quantity to be found is the fixed transformation matrix baseTcamera from the first camera to the base coordinate system of the robot body 141: a 4 × 4 matrix representing the transformation from the first camera's coordinate system to the base coordinate system of the robot body 141. It can be computed by fixing a checkerboard calibration board on the first or second robot arm, having the arm move the board to different positions and angles under the first camera, and having the first camera photograph the board repeatedly and detect its corner points.

When the first or second robot arm moves the calibration board through any two poses, those poses are used to solve for the fixed transformation matrix from the first camera to the base coordinate system of the robot body 141.

Referring to FIG. 15, a computer-readable storage medium 150 stores program data 151 which, when executed by a processor, implements the following method:

Acquire a first image of the target area; recognize the first image to determine the first position and first type of the first target item in the first image; and, according to the first position and first type of the first target item, control the robot arm of the robot to grasp the first target item and place it in the sub-area of the target area corresponding to the first type of the first target item.

It should be understood that the computer-readable storage medium 150 in this embodiment can also implement the method of any of the above embodiments, which is not repeated here.

In some embodiments, the first learning model and/or the second learning model mentioned in any of the above embodiments may be an SVM (Support Vector Machine) model. The basic idea of an SVM is to find the separating hyperplane that correctly divides the training data set with the largest geometric margin. Its advantage over other classifiers such as KNN (k-Nearest Neighbor) is that, after training, most of the training samples need not be retained: the final model depends only on the support vectors.

In some embodiments, the SVM is implemented with the machine-learning module of the image-processing library OpenCV. The parameters to be set are the SVM type, the kernel type, the kernel parameters, and the model's iteration count and precision. Their values are, in order: C_SVC, meaning a C-class support vector classifier that allows incomplete classification with an outlier penalty factor C; the linear kernel SVM::LINEAR, which is the fastest to compute; kernel parameter gamma = 0.1 and optimization parameter C set to 0; 3000 iterations with an iteration precision of 0.001.
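A configuration sketch of these parameters using OpenCV's Python bindings (`cv2.ml`). Treat this as a transcription of the values stated above rather than a validated setup; in particular, whether C = 0 is accepted at training time may depend on the OpenCV version:

```python
import cv2

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)          # C-class support vector classifier
svm.setKernel(cv2.ml.SVM_LINEAR)       # linear kernel, fastest to evaluate
svm.setGamma(0.1)                      # kernel parameter gamma
svm.setC(0)                            # outlier penalty factor C, as stated
svm.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
                     3000, 0.001))     # 3000 iterations, precision 0.001
# svm.train(samples, cv2.ml.ROW_SAMPLE, labels) would then fit the model
# on hue-histogram feature rows with integer class labels.
```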

In the several embodiments provided in this application, it should be understood that the disclosed methods and devices may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in an actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated unit described above is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the part of the technical solution of this application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

The above is only an embodiment of this application and does not limit its patent scope. Any equivalent structure or equivalent process transformation made using the specification and drawings of this application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of this application.

Claims (8)

1. A method of article grasping by a robot, the method comprising:
acquiring a first image of a target area;
comparing the first image with a background image to obtain first contour information of a first target item in the first image, wherein the background image is acquired when the target area contains no first target item;
performing HSV format conversion on the image formed based on the first contour information to obtain first HSV data;
acquiring a first hue histogram from the first HSV data;
inputting the first hue histogram into a first learning model to obtain a first type corresponding to the first contour information;
when a plurality of first target items exist in the target area, identifying the first image to determine a first position and a first type of each first target item in the first image;
determining a sub-area corresponding to the first target item according to the first type;
determining a corresponding sub-area according to the first type of each first target item;
determining a first distance between each first target item and the corresponding sub-area according to the first position of the first target item;
sorting the first distances of the first target items of the same first type in ascending order to obtain a grabbing sequence corresponding to each first type;
generating a grabbing track based on the grabbing sequence;
and controlling the mechanical arm to move to the first position according to the grabbing track so as to grab the first target item, and placing the first target item in a sub-area of the target area corresponding to the first type of the first target item.
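Purely as an editorial illustration, the per-type ordering step recited above (sort same-type items by their distance to that type's sub-area) can be sketched in Python; all names and coordinates here are invented, not taken from the patent:

```python
# Minimal sketch of the per-type grabbing sequence: items of the same type
# are sorted by ascending distance to that type's sub-area.
import math
from collections import defaultdict

def grabbing_order(items, subareas):
    """items: list of (position, type_name); subareas: {type_name: position}.
    Returns, per type, the item positions sorted by ascending distance
    to that type's sub-area."""
    by_type = defaultdict(list)
    for pos, type_name in items:
        dist = math.dist(pos, subareas[type_name])  # the "first distance"
        by_type[type_name].append((dist, pos))
    return {t: [p for _, p in sorted(entries)]
            for t, entries in by_type.items()}

items = [((0, 0), "bottle"), ((5, 5), "bottle"), ((2, 2), "can")]
subareas = {"bottle": (6, 6), "can": (0, 0)}
print(grabbing_order(items, subareas))
# → {'bottle': [(5, 5), (0, 0)], 'can': [(2, 2)]}
```

The bottle at (5, 5) is grabbed first because it lies closer to the bottle sub-area at (6, 6).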
2. The method of claim 1, wherein
the controlling the mechanical arm to move to the first position according to the grabbing track so as to grab the first target item comprises:
controlling the mechanical arm to move to the first position according to the grabbing track;
acquiring a second image captured by a camera assembly at the end of the mechanical arm;
identifying the second image to determine a second position and a second type of a second target item in the second image;
and if the first type is the same as the second type, determining that the first target item and the second target item are the same item, and controlling the mechanical arm to grab the second target item.
3. The method of claim 2, wherein
the identifying the second image to determine a second position and a second type of a second target item in the second image comprises:
identifying the second target item in the second image to obtain a second position of the second target item in the second image;
and inputting the second image into a pre-trained second learning model to obtain a second type of the second target item in the second image.
4. The method of claim 3, wherein
before the inputting the second image into a pre-trained second learning model to obtain the second type of the second target item in the second image, the method comprises:
obtaining second contour information of the second target item based on the second position;
performing HSV format conversion on the image formed based on the second contour information to obtain second HSV data;
acquiring a second hue histogram from the second HSV data;
and the inputting the second image into a pre-trained second learning model to obtain the second type of the second target item in the second image comprises:
inputting the second hue histogram into the pre-trained second learning model to obtain the second type corresponding to the second target item in the second image.
5. The method of claim 1, wherein
the generating a grabbing track based on the grabbing sequence comprises:
generating a grabbing track by using an RRT algorithm based on the grabbing sequence and the first positions of the first target items of the same first type.
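As a hedged illustration of the RRT (Rapidly-exploring Random Tree) idea named above, and not the patent's actual implementation, a minimal 2D RRT without obstacles can be sketched as follows; all parameters, names, and coordinates are invented:

```python
# Minimal goal-biased 2D RRT sketch (no obstacle checking).
import math
import random

def rrt(start, goal, bounds, step=0.5, goal_tol=0.5, max_iter=5000, seed=0):
    """Grow a tree from start; return a list of waypoints to goal, or None."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    xmin, xmax, ymin, ymax = bounds
    for _ in range(max_iter):
        # Goal-biased sampling: 10% of samples are the goal itself.
        sample = goal if rng.random() < 0.1 else (
            rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        # Nearest existing node to the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        # Steer a fixed step from the nearest node toward the sample.
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Walk parents back to the root to recover the path.
            path, j = [new], len(nodes) - 1
            while parent[j] is not None:
                j = parent[j]
                path.append(nodes[j])
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (4.0, 4.0), (0.0, 5.0, 0.0, 5.0))
```

A real grabbing track would additionally check each extension against obstacles (the other items and the arm's workspace limits), which this sketch omits.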
6. The method of claim 1, wherein
the robot comprises at least a first mechanical arm and a second mechanical arm;
when a plurality of first target items are present in the target area, the method further comprises:
determining whether at least two first types exist among the plurality of first target items;
if so, determining, for the first mechanical arm and the second mechanical arm respectively, the first type of the first target items to be grabbed and the first target items corresponding to that first type;
and the controlling a mechanical arm of the robot to grab the first target item and place the first target item in a sub-area of the target area corresponding to the first type of the first target item according to the first position and the first type of the first target item comprises:
controlling the first mechanical arm and the second mechanical arm to respectively grab the corresponding first target items and place them in the sub-areas corresponding to their first types.
7. A robot, characterized in that the robot comprises:
a robot main body;
a mechanical arm provided on the robot main body;
a camera assembly, arranged on the robot main body and/or the mechanical arm, for acquiring images of a target area;
a memory, provided in the robot main body, for storing program data;
and a processor, provided in the robot main body and connected to the mechanical arm, the camera assembly, and the memory, for executing the program data to implement the method of any one of claims 1-6.
8. A computer-readable storage medium, in which program data are stored which, when executed by a processor, implement the method of any one of claims 1-6.
CN202110587574.3A 2021-05-27 2021-05-27 Robot, article grabbing method thereof and computer-readable storage medium Active CN113524172B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110587574.3A CN113524172B (en) 2021-05-27 2021-05-27 Robot, article grabbing method thereof and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN113524172A CN113524172A (en) 2021-10-22
CN113524172B true CN113524172B (en) 2023-04-18

Family

ID=78094811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110587574.3A Active CN113524172B (en) 2021-05-27 2021-05-27 Robot, article grabbing method thereof and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN113524172B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114029947B (en) * 2021-10-27 2022-11-18 因格(苏州)智能技术有限公司 Method and device for determining robot picking sequence
CN114140607A (en) * 2021-12-02 2022-03-04 中国科学技术大学 Machine vision positioning method and system for upper arm prosthesis control
CN115170664A (en) * 2022-07-06 2022-10-11 苏州镁伽科技有限公司 Method, device and equipment for determining position of overlapped target object

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09131575A (en) * 1995-11-10 1997-05-20 Kubota Corp Goods sorting equipment
WO2013067982A2 (en) * 2011-11-11 2013-05-16 Böwe Systec Gmbh Device and method for combining cards and card supports, for handling cards and/or for sorting cards from card holders
JP7022076B2 (en) * 2016-12-19 2022-02-17 株式会社安川電機 Image recognition processors and controllers for industrial equipment
CN109241983A (en) * 2018-09-13 2019-01-18 电子科技大学 A kind of cigarette image-recognizing method of image procossing in conjunction with neural network
JP2020062691A (en) * 2018-10-15 2020-04-23 株式会社Preferred Networks Inspection device, inspection method, robot device and inspection method and program in robot device
CN110660104A (en) * 2019-09-29 2020-01-07 珠海格力电器股份有限公司 Industrial robot visual identification positioning grabbing method, computer device and computer readable storage medium
KR102206303B1 (en) * 2019-10-31 2021-01-22 광주과학기술원 System and Method for Discriminating Status and Estimating Posture Using Deep Learning


Similar Documents

Publication Publication Date Title
CN113524172B (en) Robot, article grabbing method thereof and computer-readable storage medium
CN111590611B (en) Article classification and recovery method based on multi-mode active perception
Kasaei et al. Towards lifelong assistive robotics: A tight coupling between object perception and manipulation
CN105046197B (en) Multi-template pedestrian detection method based on cluster
Zhang et al. Robotic grasp detection based on image processing and random forest
CN108510062A (en) A kind of robot irregular object crawl pose rapid detection method based on concatenated convolutional neural network
CN104766343B (en) A kind of visual target tracking method based on rarefaction representation
Sui et al. Sum: Sequential scene understanding and manipulation
CN115816460A (en) A Manipulator Grasping Method Based on Deep Learning Target Detection and Image Segmentation
CN111428731A (en) Multi-class target identification and positioning method, device and equipment based on machine vision
CN114693661A (en) Rapid sorting method based on deep learning
CN108550162A (en) A kind of object detecting method based on deeply study
Capellen et al. ConvPoseCNN: Dense convolutional 6D object pose estimation
Li et al. Learning target-oriented push-grasping synergy in clutter with action space decoupling
WO2025000778A1 (en) Gripping control method and apparatus for test tube
CN116912906A (en) Multi-dimensional identity recognition methods, devices, electronic equipment and program products
Lin et al. Robot vision to recognize both object and rotation for robot pick-and-place operation
CN114029941B (en) Robot grabbing method and device, electronic equipment and computer medium
Pot et al. Self-supervisory signals for object discovery and detection
CN117656083B (en) Seven-degree-of-freedom grabbing gesture generation method, device, medium and equipment
Zhuang et al. Lyrn (lyapunov reaching network): A real-time closed loop approach from monocular vision
Bergström et al. Integration of visual cues for robotic grasping
Zhang et al. Robotic grasp detection using effective graspable feature selection and precise classification
CN117021099A (en) Human-computer interaction method oriented to any object and based on deep learning and image processing
CN117260702A (en) Method for controlling a robot for handling, in particular picking up, objects

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant