
CN113942009B - Robot bionic hand grabbing method - Google Patents

Robot bionic hand grabbing method

Info

Publication number
CN113942009B
Authority
CN
China
Prior art keywords
target object
visual
neural network
network model
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111070054.1A
Other languages
Chinese (zh)
Other versions
CN113942009A (en)
Inventor
丁梓豪
陈国栋
王振华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou University
Original Assignee
Suzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou University
Priority to CN202111070054.1A
Publication of CN113942009A
Application granted
Publication of CN113942009B
Active legal status (current)
Anticipated expiration


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B25J9/161 Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Manipulator (AREA)

Abstract

The present disclosure provides a robot bionic hand grasping method, including: acquiring an image of a target object; analyzing the target object image to obtain visual information of the target object; determining visual grasping parameters of the robot bionic hand based on the visual information; grasping the target object based on the visual grasping parameters of the robot bionic hand; acquiring tactile information of the target object and the grasping result; obtaining soft-hard attribute data of the target object based on its tactile information and visual information; and, based on the soft-hard attribute data of the target object, adjusting the tactile grasping parameters of the robot bionic hand and continuing the grasp.

Description

Robot bionic hand grasping method

Technical Field

The present disclosure relates to a robot bionic hand grasping method, and belongs to the technical field of robotics.

Background Art

With the continuous development of robotics, robots have been applied in a wide variety of fields to complete complex tasks in place of humans.

When robots are used in different fields their tasks differ, so different grippers must be customized for each robot. Moreover, traditional robot grippers rely on manual teaching or visual guidance; they cannot cope with objects of different materials and easily damage soft objects.

How to plan the correct manipulation according to an object's softness or hardness, so that the handled object neither falls nor is damaged, is a current difficulty in the field of robotics.

Summary of the Invention

To solve one of the above technical problems, the present disclosure provides a robot bionic hand grasping method.

According to one aspect of the present disclosure, a robot bionic hand grasping method is provided, including:

acquiring an image of a target object;

inputting the target object image into a visual detection neural network model for recognition to obtain visual information of the target object, the visual information including the target object's abscissa, ordinate, width, height, and category;

determining visual grasping parameters of the robot bionic hand based on the visual information of the target object;

grasping the target object based on the visual grasping parameters of the robot bionic hand;

acquiring tactile information of the target object and the grasping result;

obtaining soft-hard attribute data of the target object based on its tactile information and visual information; and

based on the soft-hard attribute data of the target object, adjusting the tactile grasping parameters of the robot bionic hand and continuing the grasp;

wherein the parameter settings of the visual detection neural network model include: 6 convolutional layers, 6 pooling layers, 10000 training iterations, 20 training samples per batch, and a learning rate of 0.001;

the training process of the visual detection neural network model includes:

collecting visual samples: capturing images containing the target object with a depth camera, preprocessing the images, and obtaining image label information; the preprocessing includes scaling the images, and the label information comprises the target object's abscissa, ordinate, width, height, and category in each image;

inputting a training set of the collected visual samples into the visual detection neural network model for training, obtaining a trained visual detection neural network model; and

inputting a test set of the collected visual samples into the trained model for test verification; once verification passes, a usable visual detection neural network model is obtained;

the acquiring of the tactile information and the grasping result includes: the tactile sensor of the robot bionic hand acquiring contact force data for the different contact points as a time series, together with the grasping result in the corresponding time series, the contact force data forming a two-dimensional array indexed by time and contact point;

the obtaining of the soft-hard attribute data based on the tactile and visual information includes: inputting the two-dimensional time-by-point contact force array and the target object category into a tactile detection neural network model for recognition, obtaining the soft-hard attribute data of the target object;

the tactile detection neural network model is a recurrent neural network model whose parameter settings include: hidden state dimension 64, 10000 training iterations, 20 training samples per batch, and a learning rate of 0.001; the process of building the tactile detection neural network model includes:

collecting tactile samples: within the time-series range, tactile samples are collected for the different contact points, one tactile sample being collected each time a visual sample is collected;

inputting a training set of the collected tactile samples into the tactile detection neural network model for training, obtaining a trained tactile detection neural network model; and

inputting a test set of the collected tactile samples into the trained tactile detection neural network model for test verification; once verification passes, a usable tactile detection neural network model is obtained.

According to at least one embodiment of the present disclosure, the acquiring of the target object image includes:

acquiring the target object image with a depth camera that is mounted above the robot and does not contact the robot.

According to at least one embodiment of the present disclosure, determining the visual grasping parameters of the robot bionic hand based on the visual information of the target object includes:

determining the motion trajectory and grasping posture of the robot bionic hand based on the visual information of the target object.

According to at least one embodiment of the present disclosure, adjusting the tactile grasping parameters of the robot bionic hand based on the soft-hard attribute data of the target object includes:

adjusting the contact force between the robot bionic hand and the target object based on the soft-hard attribute data.

Brief Description of the Drawings

The accompanying drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain its principles; they are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification.

Fig. 1 is a schematic flowchart of a robot bionic hand grasping method according to an embodiment of the present disclosure.

Fig. 2 is a schematic structural diagram of the visual detection neural network according to an embodiment of the present disclosure.

Fig. 3 is a schematic structural diagram of the tactile detection recurrent neural network according to an embodiment of the present disclosure.

Fig. 4 is a schematic flowchart of a visual-tactile sample collection method according to an embodiment of the present disclosure.

Fig. 5 is a schematic diagram of a box filled with metal according to an embodiment of the present disclosure.

Fig. 6 is a schematic diagram of an empty box according to an embodiment of the present disclosure.

Fig. 7 is a schematic diagram of a tactile data sample according to an embodiment of the present disclosure.

Fig. 8 is a detailed schematic diagram of a tactile data sample according to an embodiment of the present disclosure.

Fig. 9 is a schematic structural diagram of a robot bionic hand grasping system according to an embodiment of the present disclosure.

Description of Reference Signs

1000 robot bionic hand grasping system
1002 robot bionic hand
1004 visual information acquisition module
1006 tactile information acquisition module
1008 visual analysis module
1010 tactile analysis module
1012 grasp control module
1100 bus
1200 processor
1300 memory
1400 other circuits.

Detailed Description

The present disclosure is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the relevant content and do not limit the present disclosure. It should also be noted that, for ease of description, only the parts related to the present disclosure are shown in the drawings.

It should be noted that, where no conflict arises, the embodiments of the present disclosure and the features within them may be combined with one another. The technical solution of the present disclosure is described in detail below with reference to the drawings and in combination with the embodiments.

Unless otherwise specified, the illustrated exemplary embodiments are to be understood as providing exemplary features and various details of some of the ways in which the technical concept of the present disclosure may be put into practice. Therefore, unless otherwise stated, the features of the various embodiments may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concept of the present disclosure.

Cross-hatching and/or shading in the drawings is generally used to clarify the boundaries between adjacent components. As such, unless stated otherwise, the presence or absence of cross-hatching or shading conveys no preference or requirement regarding particular materials, material properties, dimensions, proportions, commonalities between the illustrated components, or any other characteristic, attribute, or property of the components. Furthermore, in the drawings, the sizes and relative sizes of components may be exaggerated for clarity and/or descriptive purposes. While exemplary embodiments may be implemented differently, a specific process sequence may be performed in an order different from that described; for example, two consecutively described processes may be performed substantially simultaneously or in the reverse order. In addition, like reference numerals denote like components.

When a component is described as being "on", "connected to", or "coupled to" another component, it may be directly on, directly connected to, or directly coupled to the other component, or intervening components may be present. However, when a component is described as being "directly on", "directly connected to", or "directly coupled to" another component, no intervening components are present. The term "connected" may refer to a physical connection, an electrical connection, or the like, with or without intervening components.

For descriptive purposes, the present disclosure may use spatially relative terms such as "beneath", "below", "under", "lower", "above", "upper", "higher", and "side" (for example, as in "sidewall") to describe the relationship of one component to another as shown in the drawings. Spatially relative terms are intended to encompass different orientations of a device in use, operation, and/or manufacture in addition to the orientation depicted in the drawings. For example, if the device in the drawings is turned over, components described as "below" or "beneath" other components or features would then be oriented "above" those other components or features. Thus, the exemplary term "below" can encompass both an orientation of above and one of below. Furthermore, the device may be otherwise oriented (for example, rotated 90 degrees or at other orientations), in which case the spatially relative descriptors used here are to be interpreted accordingly.

The terminology used here is for the purpose of describing particular embodiments and is not intended to be limiting. As used here, the singular forms "a/an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising" and their variants are used in this specification, they specify the presence of the stated features, integers, steps, operations, components, parts, and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, components, parts, and/or groups thereof. Note also that, as used here, the terms "substantially", "about", and similar terms are used as terms of approximation and not as terms of degree; they account for the inherent deviations in measured, calculated, and/or provided values that would be recognized by those of ordinary skill in the art.

Fig. 1 is a schematic flowchart of a robot bionic hand grasping method according to an embodiment of the present disclosure.

As shown in Fig. 1, a robot bionic hand grasping method S100 includes:

S102: acquiring an image of a target object;

S104: analyzing the target object image to obtain visual information of the target object;

S106: determining visual grasping parameters of the robot bionic hand based on the visual information of the target object;

S108: grasping the target object based on the visual grasping parameters of the robot bionic hand;

S110: acquiring tactile information of the target object and the grasping result;

S112: obtaining soft-hard attribute data of the target object based on its tactile information and visual information; and

S114: based on the soft-hard attribute data of the target object, adjusting the tactile grasping parameters of the robot bionic hand and continuing the grasp, as sketched in code below.
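
Expressed as code, steps S102 through S114 might look like the following minimal sketch; every callable here (camera, hand, visual_net, tactile_net, plan_grasp) is an assumed placeholder, since the patent specifies no programming interface.

```python
# Hedged sketch of the S102-S114 flow; all callables are assumed placeholders.
def grasp_pipeline(camera, hand, visual_net, tactile_net, plan_grasp):
    image = camera.capture()                       # S102: target object image
    info = visual_net(image)                       # S104: x, y, w, h, category
    trajectory, posture = plan_grasp(info)         # S106: visual grasp parameters
    hand.execute(trajectory, posture)              # S108: grasp the object
    forces, result = hand.read_tactile()           # S110: time-by-point forces
    softness = tactile_net(forces, info.category)  # S112: soft-hard attributes
    hand.adjust_contact_force(softness)            # S114: adjust and keep grasping
    return result, softness
```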

The acquiring of the target object image includes:

acquiring the target object image with a depth camera that is mounted above the robot and does not contact the robot.

The analyzing of the target object image to obtain visual information includes:

inputting the target object image into the visual detection neural network model for recognition, obtaining visual information that includes the target object's abscissa, ordinate, width, height, and category.

The parameter settings of the visual detection neural network model include: 6 convolutional layers, 6 pooling layers, 10000 training iterations, 20 training samples per batch, and a learning rate of 0.001.

The training process of the visual detection neural network model includes:

collecting visual samples: capturing images containing the target object with the depth camera, preprocessing the images, and obtaining image label information; the preprocessing includes scaling the images, and the label information comprises the target object's abscissa, ordinate, width, height, and category in each image;

inputting a training set of the collected visual samples into the visual detection neural network model for training, obtaining a trained visual detection neural network model; and

inputting a test set of the collected visual samples into the trained model for test verification; once verification passes, a usable visual detection neural network model is obtained.
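
As an illustration of the stated training setup (10000 iterations, batch size 20, learning rate 0.001), a minimal PyTorch-style loop might look as follows; the dataset interface, the Adam optimizer, and a classification-only loss are assumptions, since the patent also predicts box coordinates but names no loss function.

```python
# Minimal training-loop sketch using the hyperparameters stated above.
# Dataset, optimizer, and the classification-only loss are assumptions.
import torch
from torch.utils.data import DataLoader

def train(model, dataset, steps=10_000, batch_size=20, lr=1e-3):
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    batches = iter(loader)
    for _ in range(steps):
        try:
            images, labels = next(batches)
        except StopIteration:            # restart the loader between epochs
            batches = iter(loader)
            images, labels = next(batches)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
    return model
```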

Determining the visual grasping parameters of the robot bionic hand based on the visual information of the target object includes:

determining the motion trajectory and grasping posture of the robot bionic hand based on the visual information of the target object.

Acquiring the tactile information of the target object and the grasping result includes:

the tactile sensor of the robot bionic hand acquiring contact force data for the different contact points as a time series, together with the grasping result in the corresponding time series; the contact force data form a two-dimensional array indexed by time and contact point.

Obtaining the soft-hard attribute data of the target object based on its tactile information and visual information includes:

forming the target object's contact force data into the two-dimensional time-by-point array and inputting it, together with the target object category, into the tactile detection neural network model for recognition, obtaining the soft-hard attribute data of the target object.
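
For concreteness, the tactile input described above could be laid out as below; the 25-step window follows the recurrent network introduced later, while the number of contact points and the unit are illustrative assumptions.

```python
# Illustrative layout of the tactile input: T time steps x P contact points.
# T=25 matches the recurrent network below; P=16 and newtons are assumed.
import numpy as np

T, P = 25, 16
forces = np.zeros((T, P), dtype=np.float32)  # forces[t, p] = contact force
category = 3                                 # class index from the visual net
```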

The tactile detection neural network model is a recurrent neural network model whose parameter settings include: hidden state dimension 64, 10000 training iterations, 20 training samples per batch, and a learning rate of 0.001.

The process of building the tactile detection neural network model includes:

collecting tactile samples: within the time-series range, tactile samples are collected for the different contact points, one tactile sample being collected each time a visual sample is collected;

inputting a training set of the collected tactile samples into the tactile detection neural network model for training, obtaining a trained tactile detection neural network model; and

inputting a test set of the collected tactile samples into the trained tactile detection neural network model for test verification; once verification passes, a usable tactile detection neural network model is obtained.

Adjusting the tactile grasping parameters of the robot bionic hand based on the soft-hard attribute data of the target object includes:

adjusting the contact force between the robot bionic hand and the target object based on the soft-hard attribute data, as illustrated by the sketch below.
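
As one hedged illustration of such an adjustment, a simple rule on the predicted softness could look as follows; the convention, thresholds, and gains are assumptions, not values from the patent.

```python
# Hedged sketch of contact-force adjustment from the predicted softness.
# Assumed convention: softness in [0, 1], 0 = hard, 1 = soft; forces in newtons.
def adjust_contact_force(softness: float, slipping: bool, force: float,
                         step: float = 0.2) -> float:
    limit = 2.0 if softness > 0.5 else 10.0  # softer objects get a lower cap
    if slipping:
        force += step                        # tighten gradually on slip
    return min(force, limit)                 # never exceed the material cap
```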

In summary, by acquiring and analyzing the visual information of the target object, the target can be detected, classified, and located. Once the robot approaches the object, its soft-hard attributes can be obtained through touch, and the robot bionic hand is then controlled according to the object's material attributes to apply different grasping strategies, adjusting the grasping force as well as the grasping position and posture, so as to grasp the object stably and safely.

The target is detected, classified, and located through visual information; once the robot approaches the object, its soft-hard attributes are obtained through touch, and the control system then applies different grasping strategies according to the object's material attributes, adjusting the grasping force and the grasping position and posture to complete a stable, safe grasp. By emulating the cognitive approach of human experts, a visual-tactile state experience knowledge base is constructed; through the robot's interactive learning with its environment, the robot's working state is perceived by means of vision and force sensing, the experience knowledge base is accumulated and updated, the execution state during operation is monitored, and the completion state of the robot's work is evaluated with a deep learning method to judge whether it succeeded. This helps robots better complete all kinds of complex work in place of humans and improves production efficiency.

Fig. 2 is a schematic structural diagram of the visual detection neural network according to an embodiment of the present disclosure.

As shown in Fig. 2, the image input is [512, 512, 3]: a 512×512 image with three channels. The input and output dimensions of each convolutional and pooling layer are as follows:

first convolutional layer, parameters [5,5,3,32]: 5×5 kernel, 3 input channels, 32 output channels, ReLU activation; input 512×512×3, output 512×512×32;

first pooling layer, max pooling; input 512×512×32, output 256×256×32;

second convolutional layer, parameters [5,5,32,64]: 5×5 kernel, 32 input channels, 64 output channels, ReLU activation; input 256×256×32, output 256×256×64;

second pooling layer, max pooling; input 256×256×64, output 128×128×64;

third convolutional layer, parameters [3,3,64,128]: 3×3 kernel, 64 input channels, 128 output channels, ReLU activation; input 128×128×64, output 128×128×128;

third pooling layer, max pooling; input 128×128×128, output 64×64×128;

fourth convolutional layer, parameters [3,3,128,192]: 3×3 kernel, 128 input channels, 192 output channels, ReLU activation; input 64×64×128, output 64×64×192;

fourth pooling layer, max pooling; input 64×64×192, output 32×32×192;

fifth convolutional layer, parameters [3,3,192,256]: 3×3 kernel, 192 input channels, 256 output channels, ReLU activation; input 32×32×192, output 32×32×256;

fifth pooling layer, max pooling; input 32×32×256, output 16×16×256;

sixth convolutional layer, parameters [3,3,256,512]: 3×3 kernel, 256 input channels, 512 output channels, ReLU activation; input 16×16×256, output 16×16×512;

sixth pooling layer, max pooling; input 16×16×512, output 8×8×512;

fully connected layer 1, which flattens the previous output; input 8×8×512, output 1024;

fully connected layer 2, which outputs the object category; input 1024, output N (user-defined, equal to the number of sample categories).
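
Assembled, the six conv/pool stages and two fully connected layers above can be written as the following PyTorch sketch; the framework choice, the same-padding (so that only pooling halves the resolution, matching the stated sizes), and all names are assumptions rather than details from the patent.

```python
# Minimal PyTorch sketch of the visual detection CNN described above.
import torch
import torch.nn as nn

class VisualDetectionNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # (kernel, in_ch, out_ch) for the six conv stages listed in the text
        cfg = [(5, 3, 32), (5, 32, 64), (3, 64, 128),
               (3, 128, 192), (3, 192, 256), (3, 256, 512)]
        layers = []
        for k, cin, cout in cfg:
            layers += [nn.Conv2d(cin, cout, kernel_size=k, padding=k // 2),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]          # halves H and W: 512 -> ... -> 8
        self.features = nn.Sequential(*layers)
        self.fc1 = nn.Linear(8 * 8 * 512, 1024)  # flatten 8x8x512
        self.fc2 = nn.Linear(1024, num_classes)  # N = number of sample categories

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                      # [B, 512, 8, 8]
        x = torch.flatten(x, 1)
        return self.fc2(torch.relu(self.fc1(x)))

# e.g. logits = VisualDetectionNet(num_classes=10)(torch.randn(1, 3, 512, 512))
```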

Fig. 3 is a schematic structural diagram of the tactile detection recurrent neural network according to an embodiment of the present disclosure.

As shown in Fig. 3, the tactile detection recurrent neural network structure includes:

a network input layer: the feature dimension of x; here a sequence of 25 consecutive time steps is used;

a hidden layer: the feature dimension of the hidden layer, which determines the dimension of the hidden state hidden_state and can simply be viewed as constructing a weight; here the hidden dimension is set to 64; and

an LSTM layer: the number of LSTM layers, 1 by default.
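
Under the stated settings (25-step sequences, hidden dimension 64, a single LSTM layer), a minimal PyTorch sketch might be as follows; the per-step feature size (the number of contact points), the two-way soft/hard output head, and all names are assumptions, and the object-category input described earlier is omitted for brevity.

```python
# Minimal PyTorch sketch of the tactile detection network: 25-step input,
# hidden size 64, one LSTM layer. num_points and the 2-way head are assumed.
import torch
import torch.nn as nn

class TactileDetectionNet(nn.Module):
    def __init__(self, num_points: int = 16, num_outputs: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=num_points, hidden_size=64,
                            num_layers=1, batch_first=True)
        self.head = nn.Linear(64, num_outputs)  # e.g. soft vs. hard

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, 25, num_points] contact-force sequences
        out, _ = self.lstm(x)
        return self.head(out[:, -1])            # classify from the last step

# e.g. logits = TactileDetectionNet()(torch.randn(4, 25, 16))
```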

In summary, by emulating the cognitive approach of human experts, the invention constructs a visual-tactile state experience knowledge base; through the robot's interactive learning with its environment, the robot's working state is perceived by means of vision and force sensing, the experience knowledge base is accumulated and updated, the execution state during operation is monitored, and the completion state of the robot's work is evaluated with a deep learning method to judge whether it succeeded. The invention helps robots complete all kinds of complex work in place of humans and improves production efficiency.


The apparatus may include corresponding modules that perform each or several of the steps in the flowcharts above. Thus, each step or several steps in the flowcharts may be performed by a corresponding module, and the apparatus may include one or more of these modules. A module may be one or more hardware modules specifically configured to perform the corresponding step, or may be implemented by a processor configured to perform the corresponding step, or may be stored on a computer-readable medium for implementation by a processor, or may be implemented by some combination of these.

The hardware structure may be implemented with a bus architecture. The bus architecture may include any number of interconnecting buses and bridges, depending on the specific application of the hardware and the overall design constraints. The bus connects together various circuits including one or more processors, memories, and/or hardware modules. The bus may also connect various other circuits such as peripherals, voltage regulators, and power management circuits.

The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. Buses may be divided into address buses, data buses, control buses, and so on. For ease of representation, only one connection line is shown in the figure, but this does not mean that there is only one bus or one type of bus.

Any process or method description in a flowchart, or otherwise described here, may be understood as representing a module, segment, or portion of code including one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present disclosure includes additional implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as will be understood by those skilled in the art to which the embodiments of the present disclosure belong. The processor executes the various methods and processes described above. For example, the method embodiments of the present disclosure may be implemented as a software program tangibly embodied on a machine-readable medium such as a memory. In some embodiments, part or all of the software program may be loaded and/or installed via the memory and/or a communication interface. When the software program is loaded into the memory and executed by the processor, one or more steps of the methods described above may be performed. Alternatively, in other embodiments, the processor may be configured in any other suitable manner (for example, by means of firmware) to perform one of the methods described above.

The logic and/or steps shown in the flowcharts or otherwise described here may be embodied in any readable storage medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device).

For the purposes of this specification, a "readable storage medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of readable storage media include: an electrical connection with one or more wires (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). The readable storage medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a memory.

It should be understood that parts of the present disclosure may be implemented in hardware, software, or a combination thereof. In the embodiments above, multiple steps or methods may be implemented with software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: discrete logic circuits with logic gates for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and so on.

Those of ordinary skill in the art will understand that all or part of the steps of the method embodiments above may be completed by instructing the relevant hardware with a program; the program may be stored on a readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.

In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If implemented in the form of a software functional module and sold or used as an independent product, the integrated module may also be stored on a readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.

Fig. 4 is a schematic flowchart of a visual-tactile sample collection method according to an embodiment of the present disclosure.

As shown in Fig. 4, the visual-tactile sample collection method S200 includes:

S201: after a target object is placed, starting the collection task;

S202: manually controlling the robot to move until it approaches the target object;

S203: taking a photo with the camera to collect visual information of the target object;

S204: pre-grasping the target object while acquiring tactile data through tactile sensing;

S205: judging whether the grasp succeeded; if it succeeded, proceeding to S206, otherwise returning to S204;

S206: moving the target object and proceeding to S207; and

S207: judging whether the target object slips; if it slips, returning to S205; otherwise, returning to S201. A sketch of this loop follows.
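
Read as code, the S201 to S207 loop might look like the sketch below; every robot, camera, and sensor call is an assumed placeholder rather than an API from the patent.

```python
# Hedged control-flow sketch of the S201-S207 collection loop.
def collect_samples(robot, camera, tactile, save_sample):
    while True:
        robot.wait_for_object()               # S201: object placed, task starts
        robot.approach_target()               # S202: manually guided approach
        image = camera.capture()              # S203: visual sample
        while True:
            touch = robot.pre_grasp(tactile)  # S204: pre-grasp + tactile data
            if not robot.grasp_succeeded():   # S205: failed, grasp again
                continue
            robot.move_object()               # S206: move the target object
            if tactile.slipped():             # S207: slipped, retry the grasp
                continue
            save_sample(image, touch)         # success: next object (S201)
            break
```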

Fig. 5 is a schematic diagram of a box filled with metal according to an embodiment of the present disclosure.

Fig. 6 is a schematic diagram of an empty box according to an embodiment of the present disclosure.

Fig. 7 is a schematic diagram of a tactile data sample according to an embodiment of the present disclosure.

Fig. 8 is a detailed schematic diagram of a tactile data sample according to an embodiment of the present disclosure.

Fig. 9 is a schematic structural diagram of a robot bionic hand grasping system according to an embodiment of the present disclosure.

As shown in Fig. 9, a robot bionic hand grasping system 1000 includes:

a robot bionic hand 1002 for grasping objects;

a visual information acquisition module 1004 for acquiring images of the target object; preferably, the visual information acquisition module is a depth camera mounted above the robot and not in contact with it;

a tactile information acquisition module 1006 for acquiring tactile information of the target object; preferably, the tactile information acquisition module is an array tactile sensor placed on the gripper body of the bionic hand;

a visual analysis module 1008 for analyzing the target object image to obtain the target object category, position, and size;

a tactile analysis module 1010, communicatively connected with the robot bionic hand, for receiving and analyzing the data collected by the tactile information acquisition module to obtain the soft-hard attribute data of the object's material; and

a grasp control module 1012, communicatively connected with the robot bionic hand, for controlling the robot bionic hand to grasp the object based on the target object category, position, size, and soft-hard attribute data.

In the description of this specification, references to the terms "one embodiment", "some embodiments", "example", "specific example", "some examples", and the like mean that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of these terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, where no contradiction arises, those skilled in the art may combine the different embodiments or examples described in this specification and the features of those embodiments or examples.

Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly specifying the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, for example two or three, unless otherwise specifically defined.

Those skilled in the art should understand that the embodiments above are intended only to illustrate the present disclosure clearly and do not limit its scope. Other changes or modifications may be made on the basis of the above disclosure, and such changes or modifications remain within the scope of the present disclosure.

Claims (4)

1. A robot bionic hand grasping method, characterized by comprising the following steps:
acquiring a target object image;
inputting the target object image into a visual detection neural network model for recognition and obtaining visual information of the target object, wherein the visual information comprises a target object abscissa, a target object ordinate, a target object width, a target object height, and a target object category;
determining visual grasping parameters of the robot bionic hand based on the visual information of the target object;
grasping the target object based on the visual grasping parameters of the robot bionic hand;
obtaining tactile information of the target object and the grasping result;
acquiring soft-hard attribute data of the target object based on the tactile information of the target object and the visual information of the target object; and
based on the soft-hard attribute data of the target object, adjusting the tactile grasping parameters of the robot bionic hand and continuing to grasp;
wherein the parameter settings of the visual detection neural network model comprise: 6 convolutional layers, 6 pooling layers, 10000 training iterations, 20 training samples per batch, and a learning rate of 0.001;
the training process of the visual detection neural network model comprises:
collecting visual samples, comprising: acquiring images containing the target object with a depth camera, preprocessing the images, and obtaining image label information, wherein the preprocessing comprises scaling the images containing the target object, and the image label information comprises the target object abscissa, ordinate, width, height, and category in each image;
inputting a training set of the collected visual samples into the visual detection neural network model for training to obtain a trained visual detection neural network model; and
inputting a test set of the collected visual samples into the trained visual detection neural network model for test verification, and obtaining a usable visual detection neural network model after the verification passes;
the obtaining of the tactile information of the target object and the grasping result comprises: a tactile sensor of the robot bionic hand acquiring contact force data for different contact points according to a time series, together with the grasping result in the corresponding time series, the contact force data forming a two-dimensional array indexed by time and contact point;
the acquiring of the soft-hard attribute data of the target object based on the tactile information of the target object and the visual information of the target object comprises: inputting the two-dimensional time-by-point contact force array and the target object category into a tactile detection neural network model for recognition, and obtaining the soft-hard attribute data of the target object;
the tactile detection neural network model is a recurrent neural network model, and the parameter settings of the recurrent neural network model comprise: hidden state dimension 64, 10000 training iterations, 20 training samples per batch, and a learning rate of 0.001; the process of building the tactile detection neural network model comprises:
collecting tactile samples for different contact points within the time series range, one tactile sample being collected each time a visual sample is collected;
inputting a training set of the collected tactile samples into the tactile detection neural network model for training to obtain a trained tactile detection neural network model; and
inputting a test set of the collected tactile samples into the trained tactile detection neural network model for test verification, and obtaining a usable tactile detection neural network model after the verification passes.
2. The robot bionic hand grasping method according to claim 1, wherein the acquiring of the target object image comprises:
acquiring the target object image with a depth camera, the depth camera being mounted above the robot and not in contact with the robot.
3. The method according to claim 1, wherein determining the visual grasping parameters of the robot bionic hand based on the visual information of the target object comprises:
determining the motion trajectory and grasping posture of the robot bionic hand based on the visual information of the target object.
4. The method according to claim 1, wherein adjusting the tactile grasping parameters of the robot bionic hand based on the soft-hard attribute data of the target object comprises:
adjusting the contact force between the robot bionic hand and the target object based on the soft-hard attribute data of the target object.
CN202111070054.1A 2021-09-13 2021-09-13 Robot bionic hand grabbing method Active CN113942009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111070054.1A CN113942009B (en) 2021-09-13 2021-09-13 Robot bionic hand grabbing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111070054.1A CN113942009B (en) 2021-09-13 2021-09-13 Robot bionic hand grabbing method

Publications (2)

Publication Number Publication Date
CN113942009A CN113942009A (en) 2022-01-18
CN113942009B (en) 2023-04-18

Family

ID=79328152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111070054.1A Active CN113942009B (en) 2021-09-13 2021-09-13 Robot bionic hand grabbing method

Country Status (1)

Country Link
CN (1) CN113942009B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115876374B (en) * 2022-12-30 2023-12-12 中山大学 Flexible tactile structure of the nose of a robot dog and identification method of soft and hard attributes of contact objects

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007041295A2 (en) * 2005-09-30 2007-04-12 Irobot Corporation Companion robot for personal interaction
CN107891448A (en) * 2017-12-25 2018-04-10 胡明建 The design method that a kind of computer vision sense of hearing tactile is mutually mapped with the time
CN108621159A (en) * 2018-04-28 2018-10-09 首都师范大学 A kind of Dynamic Modeling in Robotics method based on deep learning
CN110458281A (en) * 2019-08-02 2019-11-15 中科新松有限公司 The deeply study rotation speed prediction technique and system of ping-pong robot
CN111168686A (en) * 2020-02-25 2020-05-19 深圳市商汤科技有限公司 Object grabbing method, device, equipment and storage medium
CN113172629A (en) * 2021-05-06 2021-07-27 清华大学深圳国际研究生院 Object grabbing method based on time sequence tactile data processing

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4622384B2 (en) * 2004-04-28 2011-02-02 日本電気株式会社 ROBOT, ROBOT CONTROL DEVICE, ROBOT CONTROL METHOD, AND ROBOT CONTROL PROGRAM
JP2010149267A (en) * 2008-12-26 2010-07-08 Yaskawa Electric Corp Robot calibration method and device
US8706299B2 (en) * 2011-08-02 2014-04-22 GM Global Technology Operations LLC Method and system for controlling a dexterous robot execution sequence using state classification
WO2016014265A1 (en) * 2014-07-22 2016-01-28 SynTouch, LLC Method and applications for measurement of object tactile properties based on how they likely feel to humans
CN106960099B (en) * 2017-03-28 2019-07-26 清华大学 A deep learning-based recognition method for grasping stability of manipulators
KR102275520B1 (en) * 2018-05-24 2021-07-12 티엠알더블유 파운데이션 아이피 앤드 홀딩 에스에이알엘 Two-way real-time 3d interactive operations of real-time 3d virtual objects within a real-time 3d virtual world representing the real world
CN108789384B (en) * 2018-09-03 2024-01-09 深圳市波心幻海科技有限公司 Flexible driving manipulator and object recognition method based on three-dimensional modeling
CN110091331A (en) * 2019-05-06 2019-08-06 广东工业大学 Grasping body method, apparatus, equipment and storage medium based on manipulator
CN111055279B (en) * 2019-12-17 2022-02-15 清华大学深圳国际研究生院 Multi-mode object grabbing method and system based on combination of touch sense and vision
CN111444459A (en) * 2020-02-21 2020-07-24 哈尔滨工业大学 A method and system for determining the contact force of a teleoperated system
CN111590611B (en) * 2020-05-25 2022-12-02 北京具身智能科技有限公司 Article classification and recovery method based on multi-mode active perception
CN112668607B (en) * 2020-12-04 2024-09-13 深圳先进技术研究院 Multi-label learning method for identifying touch attribute of target object
CN112388655B (en) * 2020-12-04 2021-06-04 齐鲁工业大学 A Grabbed Object Recognition Method Based on Fusion of Tactile Vibration Signals and Visual Images
CN113232019A (en) * 2021-05-13 2021-08-10 中国联合网络通信集团有限公司 Mechanical arm control method and device, electronic equipment and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007041295A2 (en) * 2005-09-30 2007-04-12 Irobot Corporation Companion robot for personal interaction
CN107891448A (en) * 2017-12-25 2018-04-10 胡明建 The design method that a kind of computer vision sense of hearing tactile is mutually mapped with the time
CN108621159A (en) * 2018-04-28 2018-10-09 首都师范大学 A kind of Dynamic Modeling in Robotics method based on deep learning
CN110458281A (en) * 2019-08-02 2019-11-15 中科新松有限公司 The deeply study rotation speed prediction technique and system of ping-pong robot
CN111168686A (en) * 2020-02-25 2020-05-19 深圳市商汤科技有限公司 Object grabbing method, device, equipment and storage medium
CN113172629A (en) * 2021-05-06 2021-07-27 清华大学深圳国际研究生院 Object grabbing method based on time sequence tactile data processing

Also Published As

Publication number Publication date
CN113942009A (en) 2022-01-18

Similar Documents

Publication Publication Date Title
US11565407B2 (en) Learning device, learning method, learning model, detection device and grasping system
Sun et al. Retracted: Gesture recognition algorithm based on multi‐scale feature fusion in RGB‐D images
Nadon et al. Multi-modal sensing and robotic manipulation of non-rigid objects: A survey
Bicchi On the closure properties of robotic grasping
EP3007867B1 (en) Systems and methods for sensing objects
Gurin et al. MobileNetv2 Neural Network Model for Human Recognition and Identification in the Working Area of a Collaborative Robot
CN105678344A (en) Intelligent classification method for power instrument equipment
CN113942009B (en) Robot bionic hand grabbing method
Narang et al. Interpreting and predicting tactile signals via a physics-based and data-driven framework
JP2019164836A (en) Learning device, learning method, learning model, detection device, and holding system
Liu et al. Understanding multi-modal perception using behavioral cloning for peg-in-a-hole insertion tasks
CN114155940A (en) Robot autonomous ultrasonic scanning skill strategy generation method and device and storage medium
CN112025693B (en) Pixel-level target capture detection method and system of asymmetric three-finger grabber
CN109997199A (en) Tuberculosis inspection method based on deep learning
Singh et al. Human–Robot Interaction Using Learning from Demonstrations and a Wearable Glove with Multiple Sensors
Lin et al. Tiny machine learning empowers climbing inspection robots for real-time multiobject bolt-defect detection
Mateo et al. 3D visual data-driven spatiotemporal deformations for non-rigid object grasping using robot hands
Yu et al. Grasp to see—Object classification using flexion glove with support vector machine
CN113858238B (en) Robot bionic hand and grasping method and system
CN113345100A (en) Prediction method, apparatus, device, and medium for target grasp posture of object
EP4332915A1 (en) Automated selection and model training for charged particle microscope imaging
Li Touching is believing: sensing and analyzing touch information with GelSight
Uyanik et al. A deep learning approach for motion segment estimation for pipe leak detection robot
Karimirad et al. Modelling a precision loadcell using neural networks for vision-based force measurement in cell micromanipulation
CN116311521A (en) A Multi-task-Oriented Behavior Analysis Method for Rat Robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant