
CN110363811B - Control method and device for grabbing equipment, storage medium and electronic equipment - Google Patents


Info

Publication number
CN110363811B
Authority
CN
China
Prior art keywords
target
grabbing
information
historical
offset information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910545120.2A
Other languages
Chinese (zh)
Other versions
CN110363811A
Inventor
杜国光
王恺
廉士国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shanghai Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Robotics Co Ltd filed Critical Cloudminds Robotics Co Ltd
Priority to CN201910545120.2A priority Critical patent/CN110363811B/en
Publication of CN110363811A publication Critical patent/CN110363811A/en
Application granted granted Critical
Publication of CN110363811B publication Critical patent/CN110363811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
        • G06T 7/00 Image analysis
            • G06T 7/70 Determining position or orientation of objects or cameras
        • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
                • G06T 2207/10024 Color image
            • G06T 2207/20 Special algorithmic details
                • G06T 2207/20081 Training; Learning
                • G06T 2207/20084 Artificial neural networks [ANN]
            • G06T 2207/30 Subject of image; Context of image processing
                • G06T 2207/30108 Industrial image inspection
                    • G06T 2207/30164 Workpiece; Machine component

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract



The present disclosure relates to a control method and apparatus for a grasping device, a storage medium, and an electronic device, aimed at solving the problem of low grasping accuracy. The method includes: determining target grasping posture information for the gripper of the grasping device according to a first RGB image of the target object; after the gripper has assumed the posture corresponding to the target grasping posture information, acquiring a second RGB image of the target object captured by an image acquisition device mounted on the gripper; generating at least one piece of offset information, each piece comprising a movement direction and a movement offset of the gripper starting from the grasping center point corresponding to the target grasping posture information; for each piece of offset information, inputting the offset information, the second RGB image, and a mask image of the target object into a grasping success rate prediction model; determining target offset information from the offset information corresponding to the maximum predicted success rate and controlling the gripper to move accordingly; and, once the gripper has moved into place, controlling the gripper to grasp the target object.


Description

Control method and device for grabbing equipment, storage medium and electronic equipment
Technical Field
The present disclosure relates to the fields of robotics and computing, and in particular to a control method and apparatus for a grasping device, a storage medium, and an electronic device.
Background
Object grasping is a widely studied problem in robotics. A common approach is geometric analysis, which assumes the object and its grasp points are known: a 3D model is built in advance for each known object in a database, grasp positions are annotated on each model, and an evaluation criterion for grasp positions is specified. At run time, an RGB-D image of the object to be grasped is matched against the stored 3D models based on visual and geometric similarity, and the corresponding grasp point is determined. In practice, however, geometric analysis performs poorly in physical grasping scenes and often fails. Moreover, existing systems typically steer the gripper toward the object by continuously issuing the next grasp command, which requires acquiring and processing an object image for every frame during the motion; this is computationally expensive and still not sufficiently accurate.
Disclosure of Invention
The disclosure aims to provide a control method and device for a grabbing device, a storage medium and an electronic device, so as to improve grabbing accuracy.
In order to achieve the above object, the present disclosure provides a control method for a grasping apparatus, the method including:
determining target grabbing posture information of a gripper of the grabbing equipment according to a first RGB image of a target object, and controlling the gripper to be in a posture corresponding to the target grabbing posture information;
after the gripper is in the posture corresponding to the target gripping posture information, acquiring a second RGB image of the target object acquired by image acquisition equipment arranged on the gripper;
generating at least one piece of offset information, wherein the offset information comprises a moving direction and a moving offset of the gripper with a gripping center point corresponding to the target gripping posture information as a starting point, and the gripping center point is a center position gripped by the gripper;
for each offset information, inputting the offset information, the second RGB image and the mask image of the target object into a capturing success rate prediction model to obtain a prediction success rate which is output by the capturing success rate prediction model and corresponds to the offset information;
determining target offset information from the offset information corresponding to the maximum predicted success rate, and controlling the gripper to move according to the target offset information;
and after the gripper has moved into place, controlling the gripper to grasp the target object.
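Taken together, the steps above amount to a predict-then-move loop. The following sketch illustrates that flow; the function names and the toy scoring heuristic are stand-ins for the two learned models and the gripper commands, none of which are defined by the patent.

```python
def estimate_grasp_pose(first_rgb):
    """Stand-in for the grasp posture information generation model."""
    return {"center": (0.0, 0.0, 0.0), "angle": 0.0}

def predict_success(offset, second_rgb, mask):
    """Stand-in for the grasping success rate prediction model."""
    _, magnitude = offset
    return 1.0 / (1.0 + magnitude)  # toy heuristic, not the real model

def grasp(first_rgb, second_rgb, mask, offsets):
    pose = estimate_grasp_pose(first_rgb)  # determine target grasp posture
    # (in the real system: move the gripper into `pose`, then capture
    # `second_rgb` with the camera mounted on the gripper)
    scored = [(o, predict_success(o, second_rgb, mask)) for o in offsets]
    target, _ = max(scored, key=lambda item: item[1])  # best predicted offset
    return pose, target  # move by `target`, then close the gripper to grasp

offsets = [(("+x",), 0.04), (("-y",), 0.01)]
pose, target = grasp(None, None, None, offsets)
```

With the toy heuristic, the smaller offset scores higher and is selected as the target offset.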
Optionally, the determining, according to the first RGB image, target grabbing posture information of the grabber of the grabbing device includes:
inputting the first RGB image into a grasp posture information generation model to obtain the target grasping posture information generated by the model for the first RGB image, where the model is trained on a plurality of first training samples, each of which includes a first historical RGB image of a first historical target object and the first historical target grasping posture information corresponding to that image.
Optionally, the grasping success rate prediction model is trained on a plurality of second training samples, each of which includes: a second historical RGB image of a second historical target object acquired by the image acquisition device while the gripper was in the posture corresponding to second historical target grasping posture information, a mask image of the second historical target object, historical offset information, and historical grasping result information corresponding to that offset information; the historical offset information includes a movement direction and a movement offset of the gripper starting from the grasping center point corresponding to the second historical target grasping posture, and the historical grasping result information indicates whether the grasp succeeded or failed.
Optionally, the determining target offset information according to offset information corresponding to a maximum prediction success rate includes:
if there is exactly one piece of offset information corresponding to the maximum predicted success rate, determining that offset information as the target offset information;
and if multiple pieces of offset information correspond to the maximum predicted success rate, determining the one with the smallest movement offset as the target offset information.
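The selection rule above can be sketched as a small helper; the offsets here are hypothetical (direction, magnitude) pairs paired with their predicted success rates, not values from the patent.

```python
def pick_target_offset(scored_offsets):
    """scored_offsets: list of ((direction, magnitude), predicted_success).

    Returns the offset with the highest predicted success rate; ties at the
    maximum are broken by the smallest movement offset, as described above."""
    best_score = max(score for _, score in scored_offsets)
    tied = [off for off, score in scored_offsets if score == best_score]
    return min(tied, key=lambda off: off[1])  # smallest magnitude among ties

scored = [
    (("x+", 0.03), 0.90),
    (("y-", 0.01), 0.95),
    (("z+", 0.02), 0.95),
]
print(pick_target_offset(scored))  # → ('y-', 0.01)
```

Two candidates tie at 0.95, so the one requiring the smaller movement (0.01) is chosen.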
Optionally, the grasping posture information of the gripper includes at least the grasping center point and a grasping angle, where the grasping angle is the angle between the grasping plane of the gripper and the horizontal plane.
According to a second aspect of the present disclosure, there is provided a control apparatus for a grasping device, the apparatus including:
the first control module is used for determining target grabbing posture information of a gripper of the grabbing equipment according to a first RGB image of a target object and controlling the gripper to be in a posture corresponding to the target grabbing posture information;
the image acquisition module is used for acquiring a second RGB image of the target object acquired by image acquisition equipment arranged on the gripper after the gripper is in the posture corresponding to the target gripping posture information;
the information generating module is used for generating at least one piece of offset information, wherein the offset information comprises a moving direction and a moving offset of the gripper with a gripping center point corresponding to the target gripping posture information as a starting point, and the gripping center point is a center position gripped by the gripper;
the information processing module is used for inputting the offset information, the second RGB image and the mask image of the target object into a grabbing success rate prediction model aiming at each offset information so as to obtain the prediction success rate which is output by the grabbing success rate prediction model and corresponds to the offset information;
the second control module is configured to determine target offset information from the offset information corresponding to the maximum predicted success rate and to control the gripper to move according to the target offset information;
and the third control module is configured to control the gripper to grasp the target object after the gripper has moved into place.
Optionally, the first control module is configured to input the first RGB image into a grasp posture information generation model to obtain the target grasping posture information generated by the model for the first RGB image, where the model is trained on a plurality of first training samples, each of which includes a first historical RGB image of a first historical target object and the first historical target grasping posture information corresponding to that image.
Optionally, the grasping success rate prediction model is trained on a plurality of second training samples, each of which includes: a second historical RGB image of a second historical target object acquired by the image acquisition device while the gripper was in the posture corresponding to second historical target grasping posture information, a mask image of the second historical target object, historical offset information, and historical grasping result information corresponding to that offset information; the historical offset information includes a movement direction and a movement offset of the gripper starting from the grasping center point corresponding to the second historical target grasping posture, and the historical grasping result information indicates whether the grasp succeeded or failed.
Optionally, the second control module is configured to determine, if there is one offset information corresponding to the maximum prediction success rate, the offset information as the target offset information; and if the offset information corresponding to the maximum prediction success rate is multiple, determining the offset information with the minimum moving offset as the target offset information.
Optionally, the grasping posture information of the gripper includes at least the grasping center point and a grasping angle, where the grasping angle is the angle between the grasping plane of the gripper and the horizontal plane.
According to a third aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of the first aspect of the present disclosure.
According to a fourth aspect of the present disclosure, there is provided an electronic device comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method of the first aspect of the disclosure.
In the technical solution described above, target grasping posture information for the gripper of the grasping device is determined from a first RGB image of the target object, and the gripper is controlled into the corresponding posture; once in that posture, a second RGB image of the target object is acquired by the image acquisition device mounted on the gripper; at least one piece of offset information is generated; for each piece of offset information, the offset information, the second RGB image, and the mask image of the target object are input into a grasping success rate prediction model, which outputs the predicted success rate for that offset; target offset information is determined from the offset corresponding to the maximum predicted success rate, and the gripper is moved accordingly; finally, once the gripper is in place, it is controlled to grasp the target object. In other words, the scheme first estimates a grasping posture, then generates several pieces of offset information around it, i.e. several candidate grasping positions; for each candidate position it obtains a predicted success rate from the prediction model, selects the offset with the maximum predicted success rate, and moves the gripper into place to grasp the object.
Because the grasp is executed with the grasp corresponding to the maximum predicted success rate, there is no need to optimize images or grasp commands step by step during the motion, and the grasping success rate can be improved.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
fig. 1 is a flowchart of a control method for a grasping apparatus according to an embodiment of the present disclosure;
fig. 2 is an exemplary schematic diagram of grasp posture information in the control method for the grasp device provided by the present disclosure;
fig. 3 is an exemplary schematic diagram of grasp posture information in the control method for the grasp device provided by the present disclosure;
FIG. 4 is an exemplary diagram of a target range in the control method for the grasping apparatus provided by the present disclosure;
FIG. 5 is a block diagram of a control apparatus for a grasping device according to an embodiment of the present disclosure;
FIG. 6 is a block diagram illustrating an electronic device in accordance with an exemplary embodiment;
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
To address the problem that object grasping accuracy in the related art is not ideal, the present disclosure provides a control method and apparatus for a grasping device, a storage medium, and an electronic device.
Fig. 1 is a flowchart of a control method for a grasping device, which can be applied to an electronic device having a capability of controlling the grasping device, such as a server, a grasping device, and the like, according to an embodiment of the present disclosure. In the following description, for convenience of explanation, the application of the method of the present disclosure to a grasping apparatus will be described as an example, and the principle is similar to the case where the method of the present disclosure is applied to other electronic apparatuses. As shown in fig. 1, the method may include the following steps.
In step 11, according to the first RGB image of the target object, target grabbing posture information of the gripper of the grabbing device is determined, and the gripper is controlled to be in a posture corresponding to the target grabbing posture information.
First, a first RGB image of the target object must be acquired. A device with image capture capability (e.g. a camera or video camera) may be mounted on or around the grasping device to capture the first RGB image of the target object. For example, an image acquisition device may be mounted on the gripper of the grasping device; it captures a first RGB image of the target object, which the grasping device then acquires.
In a possible implementation manner, RGB images of various objects and grasping posture information corresponding to the various objects may be stored in advance in the database, after a first RGB image of a target object is acquired, the first RGB image is matched with the RGB images stored in the database, and the grasping posture information corresponding to the RGB image with the highest matching degree is determined as target grasping posture information of the grasping device.
The grasping posture information of the gripper may include at least a grasping center point and a grasping angle. The grasping center point is the center of the grasp, i.e. the center of the figure formed by the gripper's contact points with the object. The grasping angle is the angle between the gripper's grasping plane and the horizontal plane. The posture information may further include a grasping opening, i.e. the opening width of the gripper. Take a parallel two-finger gripper as an example: fig. 2 is a partial schematic view of the gripper 20, which has jaws 21. Suppose only two points m and n are in direct contact with the object when the jaws 21 grasp it; the grasping center point is then the midpoint k of the line segment mn, the grasping angle is the angle between the grasping plane h of the jaws 21 and the horizontal plane (equivalently, between the line mn and the horizontal axis), and the grasping opening is the distance between m and n. The contact when grasping need not be a point; it may also be a line or a surface. For example, fig. 3 shows a grasp in which the contact of the jaws 21 with the object forms two parallel line segments D1D2 and D3D4 (assuming the quadrilateral D1D2D4D3 is a rectangle); the grasping center point is the center D5 of the quadrilateral D1D2D4D3, the grasping angle is the angle between the plane of the quadrilateral and the horizontal plane, and the grasping opening is the length of D1D3 (or D2D4).
In another possible embodiment, step 11 may include the steps of:
and inputting the first RGB image into the grabbing posture information generation model to obtain target grabbing posture information generated by the grabbing posture information generation model aiming at the first RGB image.
The grabbing posture information generation model is obtained by training according to a plurality of first training samples, and the first training samples comprise: the first history RGB image of the first history target object and the first history target grabbing posture information corresponding to the first history RGB image. The first history target object may be various known objects, and the first history RGB image of the first history target object may be an image of the first history target object at various angles in various environments.
The training of the grasp posture information generation model is briefly described below. Before training, the data required, i.e. the first training samples, must be collected. In the data collection stage, textured 3D models of different objects are gathered; each model is projected from various angles and the corresponding RGB images are rendered, yielding images of the object from many viewpoints. The grasping posture information for each rendered RGB image is then annotated manually. This yields the first training samples: the first historical RGB image of a first historical target object is one of the rendered images, and the corresponding first historical target grasping posture information is its manual annotation. Once data collection is complete, the model can be trained, for example with a deep learning algorithm: taking the first historical RGB images as input data and the corresponding first historical target grasping posture information as output data, a convolutional neural network is trained to obtain the grasp posture information generation model.
Therefore, after the first RGB image of the target object is acquired, the first RGB image is input to the capture posture information generation model, and the target capture posture information generated by the capture posture information generation model for the first RGB image can be acquired.
In this way, the grasp posture information generation model is trained in advance; after the first RGB image is acquired, it is simply input into the model and the target grasping posture information is read from the model's output, which is simple and fast. If, when collecting the first training samples, grasping posture information is gathered for as many objects and scenes as possible, the trained model will also be more accurate.
After the target grasping posture information for the gripper is determined, the grasping device controls the gripper into the corresponding posture. If the grasping posture information includes a grasping center point and a grasping angle, the device positions the gripper at that center point and angle; if it also includes a grasping opening, the device additionally opens the gripper to that opening. For example, if the target grasping posture information specifies grasping center point A1 and grasping angle θ1, the device moves the gripper so that its grasping center point is at A1 and adjusts it so that the angle between its grasping plane and the horizontal plane is θ1. If the information specifies grasping center point A2, grasping angle θ2, and grasping opening L2, the device moves the gripper so that its grasping center point is at A2, adjusts the grasping angle to θ2, and opens the gripper to L2.
It should be noted that, when the control gripper is in the posture corresponding to the target grabbing posture information, the adjustment order of each parameter included in the target grabbing posture information is not limited, and such as sequential adjustment (for example, sequential adjustment according to the grabbing center point, the grabbing angle, and the grabbing opening), simultaneous adjustment, and the like, all belong to the protection scope of the present disclosure.
Returning to fig. 1, in step 12, after the gripper is in the posture corresponding to the target gripping posture information, a second RGB image of the target object acquired by the image acquisition device disposed on the gripper is acquired.
After the gripper is in the posture corresponding to the target grasping posture information, a second RGB image of the target object is acquired by the image acquisition device mounted on the gripper. The second RGB image can be understood as the view of the target object from the grasping viewpoint (from the gripper's angle) at the moment of grasping, and therefore provides a more accurate image of the target object.
In step 13, at least one offset information is generated.
The offset information comprises the moving direction and the moving offset of the gripper with the grabbing center point corresponding to the target grabbing posture as a starting point. It can be seen that each offset information corresponds to a shifted position.
In one possible embodiment, the at least one offset information may be randomly generated.
In another possible embodiment, the at least one moved position corresponding to the at least one piece of offset information is within a target range, where the target range is a range near the grabbing center point corresponding to the target grabbing gesture, for example, the target range may be a range centered on the grabbing center point corresponding to the target grabbing gesture.
For example, the target range may be a sphere centered at the grasping center point corresponding to the target grasping posture, with a preset distance as its radius. FIG. 4 is a schematic diagram of a possible target range. In the three-dimensional coordinate system shown in FIG. 4, point P0(x0, y0, z0) is the grasping center point corresponding to the target grasping posture information and point Pi(xi, yi, zi) is a moved position (corresponding to a piece of generated offset information). With the preset distance (the sphere radius) denoted t, the two points simultaneously satisfy:
xi = x0 + r sin α cos β
yi = y0 + r sin α sin β
zi = z0 + r cos α
where r is the Euclidean distance between P0 and Pi, α is the angle between the line P0Pi and the z-axis, β is the angle between the projection of P0Pi onto the xOy plane and the x-axis, and α ∈ [0, π], β ∈ [0, 2π], r ∈ [0, t].
At least one moved position can be obtained by randomly choosing α, β, and r, and the corresponding offset information follows directly: the movement direction is the direction from P0 to Pi, and the movement offset is the Euclidean distance r between the two points.
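The sampling scheme above can be sketched directly from the spherical-coordinate relations; the radius t, the sample count, and the fixed seed below are illustrative choices, not values from the patent.

```python
import math
import random

def sample_offsets(p0, t, count, rng=None):
    """Sample `count` candidate moved positions inside the sphere of radius t
    centered on the grasping center point p0, by randomly choosing
    alpha, beta, and r as described above."""
    rng = rng or random.Random(0)
    samples = []
    for _ in range(count):
        alpha = rng.uniform(0.0, math.pi)        # angle to the z-axis
        beta = rng.uniform(0.0, 2.0 * math.pi)   # angle in the xOy plane
        r = rng.uniform(0.0, t)                  # Euclidean offset, r <= t
        pos = (p0[0] + r * math.sin(alpha) * math.cos(beta),
               p0[1] + r * math.sin(alpha) * math.sin(beta),
               p0[2] + r * math.cos(alpha))
        samples.append((pos, r))  # moved position and its movement offset
    return samples

for pos, r in sample_offsets((0.0, 0.0, 0.0), t=0.05, count=10):
    # every sampled position lies within the preset radius of the center
    assert r <= 0.05 and abs(math.dist(pos, (0.0, 0.0, 0.0)) - r) < 1e-9
```

Note that drawing α, β, and r independently and uniformly, as the text describes, is not volume-uniform over the sphere: samples concentrate near the center and along the z-axis, which may or may not be intended.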
In step 14, for each offset information, the second RGB image, and the mask image of the target object are input to the capturing success rate prediction model.
After the offset information, the second RGB image, and the mask image of the target object are input into the grasping success rate prediction model, the model outputs the predicted success rate corresponding to that offset information. The mask image of the target object is an image marking the target object region, used to distinguish the target object from everything else; for example, in the mask image the target object region may be shown as white pixels and all other regions as black pixels.
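The mask convention just described (white pixels for the target object region, black elsewhere) can be illustrated with a tiny grayscale array; the 5×5 pattern is purely illustrative.

```python
# 0 = black (background), 255 = white (target object region)
mask = [
    [0,   0,   0,   0, 0],
    [0, 255, 255,   0, 0],
    [0, 255, 255, 255, 0],
    [0,   0, 255,   0, 0],
    [0,   0,   0,   0, 0],
]

# The mask picks out exactly the pixels that belong to the target object.
target_pixels = [(row, col)
                 for row, line in enumerate(mask)
                 for col, value in enumerate(line)
                 if value == 255]
print(len(target_pixels))  # → 6
```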
The grabbing success rate prediction model is trained from a plurality of second training samples, where each second training sample includes: a second historical RGB image of a second historical target object acquired by the image acquisition device when the gripper is in the second historical target grabbing posture information, a mask image of the second historical target object, historical offset information, and historical grabbing result information corresponding to the historical offset information. The historical offset information includes a moving direction and a moving offset of the gripper with the grabbing center point corresponding to the second historical target grabbing posture as a starting point, and the historical grabbing result information is used for representing a successful or failed grab. The second historical target objects may be various known objects.
The training mode of the grabbing success rate prediction model is briefly described below. Before the model is trained, the data required for training, namely the second training samples, needs to be collected. In the data collection phase, the grabbing device performs operations similar to steps 11 through 13 of the method provided above. For a second historical target object, second historical target grabbing posture information of the gripper of the grabbing device is determined through step 11, and the gripper is controlled to be in the posture corresponding to the second historical target grabbing posture information. Through step 12, after the gripper is in the posture corresponding to the second historical target grabbing posture information, a second historical RGB image of the second historical target object acquired by the image acquisition device arranged on the gripper is obtained. Through step 13, at least one piece of historical offset information is generated, where the historical offset information may be generated in the manner described above with reference to step 13; to make the trained model more accurate, as much historical offset information as possible may be generated for each second historical target object. The gripper is then controlled to move according to each piece of historical offset information, and when the gripper moves in place, the gripper is controlled to grab the second historical target object, so as to obtain the historical grabbing result information corresponding to that historical offset information, where the historical grabbing result information is used for representing a successful or failed grab. The success or failure of a grab may be judged, for example, by providing a force feedback device on the gripper and reading its feedback result.
In this way, a plurality of second training samples can be obtained. For the obtained second training samples, the second historical RGB image, the mask image of the second historical target object, and the historical offset information in each second training sample are used as input data, the historical grabbing result information corresponding to the input historical offset information is used as output data, and a convolutional neural network model is trained to obtain the grabbing success rate prediction model.
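The assembly of training pairs described above can be sketched as follows (illustrative only; the patent specifies only that a convolutional neural network is trained, so the field names and the `to_training_pair` helper are assumptions):

```python
from dataclasses import dataclass

@dataclass
class SecondTrainingSample:
    """One second training sample: the historical RGB image, the mask
    image of the historical target object, the historical offset
    information, and the historical grabbing result (success/failure)."""
    rgb_image: list        # second historical RGB image
    mask_image: list       # mask image of the second historical target object
    offset: tuple          # (moving direction, moving offset) from grab center
    grab_succeeded: bool   # historical grabbing result information

def to_training_pair(sample):
    """Split a sample into (input data, output label) as used to train
    the grabbing success rate prediction model: image + mask + offset
    are inputs, the grab result is the supervised label."""
    x = (sample.rgb_image, sample.mask_image, sample.offset)
    y = 1 if sample.grab_succeeded else 0
    return x, y
```

A network trained on such pairs then outputs, for a new (image, mask, offset) triple, a predicted probability of a successful grab.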
In this manner, the grabbing success rate prediction model is obtained by pre-training; after the offset information, the second RGB image, and the mask image of the target object are obtained, they are input into the model and the prediction success rate output by the model is obtained, so that a corresponding prediction success rate can be obtained simply and quickly for each piece of generated offset information. Moreover, when the second training samples are collected, the historical grabbing results of various second historical target objects under various historical offset information are collected as comprehensively as possible, so that the trained grabbing success rate prediction model is more accurate. When the historical offset information is generated in the process of collecting the second training samples, the manner of generating offset information provided by the present disclosure is adopted, so that a large amount of historical offset information can be generated automatically without manual intervention, reducing manual effort while ensuring efficiency.
After the at least one piece of offset information is generated in step 13, for each piece of offset information, the offset information, the second RGB image, and the mask image of the target object are input into the grabbing success rate prediction model, and the prediction success rate corresponding to that offset information output by the model can be obtained.
In step 15, target offset information is determined according to the offset information corresponding to the maximum prediction success rate, and the gripper is controlled to move according to the target offset information.
In one possible embodiment, step 15 may comprise the steps of:
if the offset information corresponding to the maximum prediction success rate is one, determining the offset information as target offset information;
and if the offset information corresponding to the maximum prediction success rate is multiple, determining the offset information with the minimum moving offset as target offset information.
If there is one piece of offset information corresponding to the maximum prediction success rate, that offset information can be directly determined as the target offset information.
If there are multiple pieces of offset information corresponding to the maximum prediction success rate, the offset information with the minimum moving offset is determined as the target offset information; the minimum moving offset corresponds to the shortest straight-line distance between the starting point and the end point. Therefore, when the gripper is subsequently controlled to move, its movement is kept small, which ensures moving efficiency and protects the grabbing device to a certain extent.
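The selection rule of step 15 can be sketched as follows (an illustrative snippet; the `candidates` structure pairing each offset with its moving offset magnitude and predicted success rate is an assumption):

```python
def select_target_offset(candidates):
    """candidates: list of (offset_info, moving_offset, predicted_rate).
    Pick the offset with the maximum predicted success rate; if several
    offsets tie at the maximum, pick the one with the smallest moving
    offset, i.e. the shortest straight-line distance from start to end."""
    best_rate = max(rate for _, _, rate in candidates)
    tied = [c for c in candidates if c[2] == best_rate]
    return min(tied, key=lambda c: c[1])[0]
```

For example, if two offsets both score 0.9, the one requiring the shorter move is chosen, keeping the gripper's movement small.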
After the target offset information is determined, the grabbing device may control the gripper to move according to the target offset information. The manner of controlling the gripper to move is not limited in the present disclosure. For example, the gripper may move directly in a straight line until it reaches the position indicated by the target offset information. For another example, the gripper may move in predetermined directions (e.g., up, down, left, and right) until it is in place.
In step 16, after the gripper is moved into position, the gripper is controlled to grip the target object.
And after the hand grip is moved in place, controlling the hand grip to grip the target object. For example, if the grabbing posture information of the gripper includes a grabbing center point and a grabbing angle, when the gripper is controlled to be in the posture corresponding to the target grabbing posture information in step 11, the gripper is in the posture (the grabbing center point and the grabbing angle) corresponding to the target grabbing posture information, and therefore, after the gripper is moved in place (moved in place according to the target offset information), the gripper can be controlled to grab the target object by using the grabbing angle corresponding to the previously determined target grabbing posture information. In some cases, the parameter of the grabbing opening degree is also needed for grabbing the object, and the grabbing opening degree may be generated in the grabbing process, for example, generated according to historical grabbing experience, randomly generated, and the like, or may be preset. For another example, if the grabbing posture information of the gripper includes a grabbing center point, a grabbing angle and a grabbing opening, when the gripper is controlled to be in the posture corresponding to the target grabbing posture information in step 11, the gripper is already in the posture (the grabbing center point, the grabbing angle and the grabbing opening) corresponding to the target grabbing posture information, and therefore after the gripper is moved in place (moved in place according to the target offset information), the gripper can be controlled to grab the target object by using the grabbing angle and the grabbing opening corresponding to the target grabbing posture information determined before.
In addition, the above-mentioned control of grasping the target object by the gripper may also be implemented in combination with a depth image of the target object, which may be acquired simultaneously with the first RGB image, for example. Wherein the depth image may be acquired by a depth camera sensor, which may be disposed on the hand grip.
According to the scheme, the target grabbing attitude information of the grabbing hand of the grabbing equipment is determined according to the first RGB image of the target object, and the grabbing hand is controlled to be in the attitude corresponding to the target grabbing attitude information; after the gripper is in the posture corresponding to the target gripping posture information, acquiring a second RGB image of the target object acquired by image acquisition equipment arranged on the gripper; generating at least one offset information; aiming at each offset information, inputting the offset information, the second RGB image and the mask image of the target object into a capturing success rate prediction model to obtain a prediction success rate which is output by the capturing success rate prediction model and corresponds to the offset information; determining target offset information according to the offset information corresponding to the maximum prediction success rate, and controlling the gripper to move according to the target offset information; and after the hand grip is moved in place, controlling the hand grip to grip the target object. According to the scheme provided by the disclosure, firstly, target grabbing posture information of the hand grab is determined, namely, the grabbing posture is estimated, then, a plurality of deviation information, namely a plurality of positions available for grabbing, are generated according to the target grabbing posture information, and for each position available for grabbing, a grabbing success rate prediction model is utilized to obtain a prediction success rate, so that the deviation information with the maximum prediction success rate is obtained, and the hand grab is controlled to be in place to grab an object. 
Therefore, the maximum prediction success rate is obtained by using the capturing success rate prediction model, and capturing is performed by using the capturing mode corresponding to the maximum prediction success rate, so that the images or the capturing instructions do not need to be optimized step by step, and the capturing success rate can be improved.
The offset information described above includes a moving direction and a moving offset starting from the grabbing center point corresponding to the target grabbing posture information. By default, when the offset information is generated, the gripper is still in the posture corresponding to the target grabbing posture information; that is, the center point of the gripper coincides with the grabbing center point corresponding to the target grabbing posture information, so the generated offset information takes that grabbing center point as its starting point. In some cases, however, after the gripper is controlled to be in the posture corresponding to the target grabbing posture information and the second RGB image of the target object acquired by the image acquisition device arranged on the gripper is obtained, the grabbing center point of the gripper may no longer be located at the grabbing center point corresponding to the target grabbing posture information. If the gripper has moved in this way, it can still be controlled to move in place by combining the offset of the moved position relative to the grabbing center point with the determined target offset information. Therefore, the situation in which the position of the gripper moves during this process also falls within the protection scope of the present disclosure.
Fig. 5 is a block diagram of a control apparatus for a grasping device provided according to an embodiment of the present disclosure. As shown in fig. 5, the apparatus 50 may include:
the first control module 51 is configured to determine target grabbing posture information of a gripper of the grabbing device according to a first RGB image of a target object, and control the gripper to be in a posture corresponding to the target grabbing posture information;
the image acquisition module 52 is configured to acquire a second RGB image of the target object, which is acquired by an image acquisition device arranged on the gripper, after the gripper is in the posture corresponding to the target gripping posture information;
an information generating module 53, configured to generate at least one piece of offset information, where the offset information includes a moving direction and a moving offset of the gripper with a gripping center point corresponding to the target gripping posture information as a starting point, and the gripping center point is a center position where the gripper grips;
an information processing module 54, configured to input, for each offset information, the second RGB image, and the mask image of the target object into a capturing success rate prediction model, so as to obtain a prediction success rate corresponding to the offset information output by the capturing success rate prediction model;
a second control module 55, configured to determine target offset information according to offset information corresponding to a maximum prediction success rate, and control the gripper to move according to the target offset information;
and the third control module 56 is used for controlling the hand grip to grip the target object after the hand grip is moved in place.
Optionally, the first control module 51 is configured to input the first RGB image into a capture pose information generation model to obtain the target capture pose information generated by the capture pose information generation model for the first RGB image, where the capture pose information generation model is trained according to a plurality of first training samples, and the first training samples include: the method comprises the steps that a first historical RGB image of a first historical target object and first historical target grabbing posture information corresponding to the first historical RGB image are obtained.
Optionally, the grabbing success rate prediction model is trained according to a plurality of second training samples, where the second training samples include: and when the gripper is in second historical target gripping posture information, acquiring a second historical RGB image of a second historical target object, a mask image of the second historical target object, historical offset information and historical gripping result information corresponding to the historical offset information by the image acquisition equipment, wherein the historical offset information comprises a moving direction and a moving offset of the gripper with a gripping center point corresponding to the second historical target gripping posture as a starting point, and the historical gripping result information is used for representing the success or failure of gripping.
Optionally, the second control module 55 is configured to determine, if there is one offset information corresponding to the maximum prediction success rate, the offset information as the target offset information; and if the offset information corresponding to the maximum prediction success rate is multiple, determining the offset information with the minimum moving offset as the target offset information.
Optionally, the grabbing posture information of the hand grab at least comprises the grabbing central point and a grabbing angle, wherein the grabbing angle is an included angle between a grabbing plane of the hand grab and a horizontal plane.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 6 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the electronic device 700 may be provided as a grasping device. As shown in fig. 6, the electronic device 700 may include: a processor 701 and a memory 702. The electronic device 700 may also include one or more of a multimedia component 703, an input/output (I/O) interface 704, and a communication component 705.
The processor 701 is configured to control the overall operation of the electronic device 700, so as to complete all or part of the steps in the control method for the grasping device. The memory 702 is used to store various types of data to support operation at the electronic device 700, such as instructions for any application or method operating on the electronic device 700 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 702 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia components 703 may include screen and audio components. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 702 or transmitted through the communication component 705. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 704 provides an interface between the processor 701 and other interface modules, such as a keyboard, mouse, buttons, etc. These buttons may be virtual buttons or physical buttons. The communication component 705 is used for wired or wireless communication between the electronic device 700 and other devices. Wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, etc., or a combination of one or more of them, which is not limited herein.
The corresponding communication component 705 may thus include: Wi-Fi module, Bluetooth module, NFC module, etc.
In an exemplary embodiment, the electronic Device 700 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for executing the above-described control method for the grasping Device.
In another exemplary embodiment, a computer-readable storage medium is also provided, which comprises program instructions, which when executed by a processor, implement the steps of the above-described control method for a grasping apparatus. For example, the computer readable storage medium may be the memory 702 described above comprising program instructions executable by the processor 701 of the electronic device 700 to perform the control method for a grasping device described above.
FIG. 7 is a block diagram illustrating an electronic device in accordance with an example embodiment. For example, the electronic device 1900 may be provided as a server. Referring to fig. 7, an electronic device 1900 includes a processor 1922, which may be one or more in number, and a memory 1932 to store computer programs executable by the processor 1922. The computer program stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processor 1922 may be configured to execute the computer program to perform the control method for the grasping apparatus described above.
Additionally, the electronic device 1900 may also include a power component 1926 and a communication component 1950; the power component 1926 may be configured to perform power management of the electronic device 1900, and the communication component 1950 may be configured to enable communication, e.g., wired or wireless communication, of the electronic device 1900. In addition, the electronic device 1900 may also include input/output (I/O) interfaces 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, etc.
In another exemplary embodiment, a computer-readable storage medium is also provided, which comprises program instructions, which when executed by a processor, implement the steps of the above-described control method for a grasping apparatus. For example, the computer readable storage medium may be the memory 1932 described above that includes program instructions that are executable by the processor 1922 of the electronic device 1900 to perform the control method described above for the grasping device.
In another exemplary embodiment, a computer program product is also provided, which contains a computer program executable by a programmable apparatus, the computer program having code portions for performing the above-described control method for a grasping device when executed by the programmable apparatus.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (9)

1. A control method for a grasping apparatus, characterized in that the method comprises:
determining target grabbing posture information of a gripper of the grabbing equipment according to a first RGB image of a target object, and controlling the gripper to be in a posture corresponding to the target grabbing posture information, wherein the grabbing posture information of the gripper at least comprises a grabbing central point and a grabbing angle, and the grabbing angle is an included angle between a grabbing plane of the gripper and a horizontal plane;
after the gripper is in the posture corresponding to the target gripping posture information, acquiring a second RGB image of the target object acquired by image acquisition equipment arranged on the gripper;
generating at least one piece of offset information, wherein the offset information includes a moving direction and a moving offset of the gripper with a gripping center point corresponding to the target gripping posture information as a starting point, the gripping center point is a center position of gripping by the gripper, at least one moved position corresponding to the at least one piece of offset information is within a target range, and the target range is a sphere with the gripping center point corresponding to the target gripping posture as a sphere center and a preset distance as a radius;
for each offset information, inputting the offset information, the second RGB image and the mask image of the target object into a capturing success rate prediction model to obtain a prediction success rate which is output by the capturing success rate prediction model and corresponds to the offset information;
determining target offset information according to offset information corresponding to the maximum prediction success rate, and controlling the hand grip to move according to the target offset information;
and after the hand grip is moved in place, controlling the hand grip to grip the target object.
2. The method of claim 1, wherein determining target grabbing pose information for the grabber of the grabbing device according to the first RGB image comprises:
inputting the first RGB image into a capture pose information generation model to obtain the target capture pose information generated by the capture pose information generation model for the first RGB image, wherein the capture pose information generation model is obtained by training according to a plurality of first training samples, and the first training samples include: the method comprises the steps that a first historical RGB image of a first historical target object and first historical target grabbing posture information corresponding to the first historical RGB image are obtained.
3. The method of claim 1, wherein the grabbing-success-rate prediction model is trained according to a plurality of second training samples, and wherein the second training samples comprise: and when the gripper is in second historical target gripping posture information, acquiring a second historical RGB image of a second historical target object, a mask image of the second historical target object, historical offset information and historical gripping result information corresponding to the historical offset information by the image acquisition equipment, wherein the historical offset information comprises a moving direction and a moving offset of the gripper with a gripping center point corresponding to the second historical target gripping posture as a starting point, and the historical gripping result information is used for representing the success or failure of gripping.
4. The method of claim 1, wherein determining the target offset information according to the offset information corresponding to the maximum prediction success rate comprises:
if the offset information corresponding to the maximum prediction success rate is one, determining the offset information as the target offset information;
and if the offset information corresponding to the maximum prediction success rate is multiple, determining the offset information with the minimum moving offset as the target offset information.
5. A control device for a gripping apparatus, characterized in that the device comprises:
the first control module is used for determining target grabbing posture information of a gripper of the grabbing equipment according to a first RGB image of a target object and controlling the gripper to be in a posture corresponding to the target grabbing posture information, the grabbing posture information of the gripper at least comprises a grabbing central point and a grabbing angle, and the grabbing angle is an included angle between a grabbing plane of the gripper and a horizontal plane;
the image acquisition module is used for acquiring a second RGB image of the target object acquired by image acquisition equipment arranged on the gripper after the gripper is in the posture corresponding to the target gripping posture information;
the information generating module is used for generating at least one piece of offset information, wherein the offset information comprises a moving direction and a moving offset of the gripper with a gripping center point corresponding to the target gripping posture information as a starting point, the gripping center point is a center position gripped by the gripper, at least one moved position corresponding to the at least one piece of offset information is within a target range, and the target range is a sphere with the gripping center point corresponding to the target gripping posture as a sphere center and a preset distance as a radius;
the information processing module is used for inputting the offset information, the second RGB image and the mask image of the target object into a grabbing success rate prediction model aiming at each offset information so as to obtain the prediction success rate which is output by the grabbing success rate prediction model and corresponds to the offset information;
the second control module is used for determining target offset information according to offset information corresponding to the maximum prediction success rate and controlling the hand grip to move according to the target offset information;
and the third control module is used for controlling the hand grip to grip the target object after the hand grip is moved in place.
6. The apparatus of claim 5, wherein the first control module is configured to input the first RGB image into a capture pose information generation model to obtain the target capture pose information generated by the capture pose information generation model for the first RGB image, wherein the capture pose information generation model is trained according to a plurality of first training samples, and the first training samples comprise: the method comprises the steps that a first historical RGB image of a first historical target object and first historical target grabbing posture information corresponding to the first historical RGB image are obtained.
7. The apparatus of claim 5, wherein the grabbing-success-rate prediction model is trained according to a plurality of second training samples, and wherein the second training samples comprise: and when the gripper is in second historical target gripping posture information, acquiring a second historical RGB image of a second historical target object, a mask image of the second historical target object, historical offset information and historical gripping result information corresponding to the historical offset information by the image acquisition equipment, wherein the historical offset information comprises a moving direction and a moving offset of the gripper with a gripping center point corresponding to the second historical target gripping posture as a starting point, and the historical gripping result information is used for representing the success or failure of gripping.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 4.
9. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 4.
CN201910545120.2A 2019-06-21 2019-06-21 Control method and device for grabbing equipment, storage medium and electronic equipment Active CN110363811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910545120.2A CN110363811B (en) 2019-06-21 2019-06-21 Control method and device for grabbing equipment, storage medium and electronic equipment


Publications (2)

Publication Number Publication Date
CN110363811A CN110363811A (en) 2019-10-22
CN110363811B true CN110363811B (en) 2022-02-08

Family

ID=68216524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910545120.2A Active CN110363811B (en) 2019-06-21 2019-06-21 Control method and device for grabbing equipment, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110363811B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112053398B (en) * 2020-08-11 2021-08-27 浙江大华技术股份有限公司 Object grabbing method and device, computing equipment and storage medium
CN111805540A (en) * 2020-08-20 2020-10-23 北京迁移科技有限公司 Method, device and equipment for determining workpiece grabbing pose and storage medium
CN112258567B (en) * 2020-10-10 2022-10-11 达闼机器人股份有限公司 Visual positioning method and device for object grabbing point, storage medium and electronic equipment
CN116051641A (en) * 2023-01-18 2023-05-02 北京天玛智控科技股份有限公司 A multi-task target object capture method, device and electronic equipment

Citations (3)

Publication number Priority date Publication date Assignee Title
CN107972026A (en) * 2016-10-25 2018-05-01 深圳光启合众科技有限公司 Robot, mechanical arm and its control method and device
CN109176521A (en) * 2018-09-19 2019-01-11 北京因时机器人科技有限公司 A kind of mechanical arm and its crawl control method and system
CN109598264A (en) * 2017-09-30 2019-04-09 北京猎户星空科技有限公司 Grasping body method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP6822929B2 (en) * 2017-09-19 2021-01-27 株式会社東芝 Information processing equipment, image recognition method and image recognition program



Similar Documents

Publication Publication Date Title
CN110363811B (en) Control method and device for grabbing equipment, storage medium and electronic equipment
CN113409384A (en) Pose estimation method and system of target object and robot
JP2011110621A (en) Method of producing teaching data of robot and robot teaching system
JP2011110620A (en) Method of controlling action of robot, and robot system
CN112580582B (en) Action learning method, action learning device, action learning medium and electronic equipment
CN107030692B (en) A method and system for manipulator teleoperation based on perception enhancement
CN112109069B (en) Robot teaching device and robot system
CN113103230A (en) Human-computer interaction system and method based on remote operation of treatment robot
WO2020190166A1 (en) Method and system for grasping an object by means of a robotic device
WO2019047415A1 (en) Trajectory tracking method and device, storage medium, processor
JP6792230B1 (en) Information processing equipment, methods and programs
CN106003036A (en) Object grabbing and placing system based on binocular vision guidance
WO2020179416A1 (en) Robot control device, robot control method, and robot control program
CN102830798A (en) Mark-free hand tracking method of single-arm robot based on Kinect
CN115220375B (en) Robot control method, device, storage medium and electronic device
CN112070835A (en) Mechanical arm pose prediction method and device, storage medium and electronic equipment
JP5609760B2 (en) Robot, robot operation method, and program
CN118071822A (en) Image processing method, device, demolition robot and computer-readable storage medium
Bensaadallah et al. Deep learning-based real-time hand landmark recognition with MediaPipe for R12 robot control
KR20230175122A (en) Method for controlling a robot for manipulating, in particular picking up, an object
CN115070761A (en) Robot teaching method, teaching apparatus, and computer-readable storage medium
JP2015174206A (en) Robot control device, robot system, robot, robot control method and robot control program
CN109934155B (en) Depth vision-based collaborative robot gesture recognition method and device
Wu et al. Kinect-based robotic manipulation: From human hand to end-effector
TW202317045A (en) Surgical robotic arm control system and control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210302

Address after: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Applicant after: Dalu Robot Co.,Ltd.

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: CLOUDMINDS (SHENZHEN) ROBOTICS SYSTEMS Co.,Ltd.

GR01 Patent grant
CP03 Change of name, title or address

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Patentee after: Dayu robot Co.,Ltd.

Address before: 201111 2nd floor, building 2, no.1508, Kunyang Road, Minhang District, Shanghai

Patentee before: Dalu Robot Co.,Ltd.

PP01 Preservation of patent right

Effective date of registration: 20250909

Granted publication date: 20220208
