CN108858193B - Mechanical arm grabbing method and system - Google Patents
- Publication number: CN108858193B (application CN201810736694.3A)
- Authority: CN (China)
- Prior art keywords: grabbing, mechanical arm, points, robot, constraint
- Prior art date
- Legal status: Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention provides a mechanical arm grasping method and a system that implements the method. The method comprises the following steps: S1, collecting point cloud information of the surface of the object to be grasped (the surface visible in the camera view); S2, processing the point cloud data and extracting feasible grasping points that satisfy the constraint conditions through a grasp planning algorithm; S3, using the grasping points as the input to the inverse kinematics of the mechanical arm and sending motion control commands to the mechanical arm and the two-finger gripper; S4, the mechanical arm executing the motion control command and moving to the specified position, after which the two-finger gripper opens and closes according to the motion timing relationship in the motion control command to complete the grasping task. The invention has the following beneficial effect: a robust mechanical arm grasping function can be realized with a single vision sensor even when the shape of the object is uncertain.
Description
Technical Field
The invention relates to the technical field of robots, and in particular to a mechanical arm grasping method and system for objects of uncertain shape.
Background
With the rise of artificial intelligence, robots play an increasingly important role in many industries. For a robot, grasping is an indispensable skill for operating in the real world, for example sorting objects in the logistics industry or assembling parts on an industrial production line. However, many sources of uncertainty in how a robot completes a grasping task remain to be studied. How to handle this uncertainty and improve the grasping success rate is therefore well worth investigating.
In general, the uncertainty in the grasping process mainly includes uncertainty in the shape of the object to be grasped, uncertainty in its pose, uncertainty in the contact points of the gripper, and uncertainty in the mass of the object. In practical applications, the main uncertainty comes from the shape of the object to be grasped. Its causes mainly include: insufficient illumination during grasping, which makes the target object difficult to identify accurately; insufficient precision of the observation camera, or the object lying outside the effective detection range; the observation camera seeing only part of the surface of the object to be grasped; and objects that are transparent, semi-transparent, or reflective, such as a transparent mineral water bottle, a latticed pen holder with an incomplete surface, or a deformable plush toy, all of which are difficult to identify.
Two methods are commonly used to handle objects of uncertain shape when a mechanical arm grasps them. The first adds one or more sensors besides the camera (such as tactile, force, or laser sensors) to feed back more information about the object, compensating for the shape error of a single camera, and finally controls a multi-degree-of-freedom gripper to complete the grasping task. The second applies machine learning to mechanical arm grasping: a large amount of data obtained from a sufficient number of grasping experiments is used as a training set of feasible grasp configurations for the arm and gripper, yielding a grasping model learned from empirical data. When the point cloud acquired from the camera is incomplete, the partial point cloud data are used as a test input to the grasping model, the corresponding grasping parameters are retrieved, and the mechanical arm is driven to complete the grasping task.
However, the disadvantages of both methods are obvious. The first acquires more object information by adding sensors and is finally paired with a multi-degree-of-freedom gripper, which greatly increases the cost and makes it unsuitable for industrial production and daily life. Also for cost reasons, most grasping operations in industry are handled by dedicated equipment, such as conveyor belts for moving and sorting, and grasping is usually accomplished with a two-finger clamp or a suction cup. The second method trains a grasping model on a large amount of experimental data; collecting such data requires a long time and enough grasp executions by the mechanical arm, which greatly shortens its service life.
Disclosure of Invention
The invention aims to provide a mechanical arm grasping method and system for objects of uncertain shape.
To this end, the invention provides a mechanical arm grasping method comprising the following steps: S1, collecting point cloud information of the surface of the object to be grasped (the surface visible in the camera view); S2, processing the point cloud data and extracting feasible grasping points that satisfy the constraint conditions through a grasp planning algorithm; S3, using the grasping points as the input to the inverse kinematics of the mechanical arm and sending motion control commands to the mechanical arm and the two-finger gripper; S4, the mechanical arm executing the motion control command and moving to the specified position, after which the two-finger gripper opens and closes according to the motion timing relationship in the motion control command to complete the grasping task. Step S2 comprises: S2a, calculating the average coordinate of all data points as the centroid coordinate of the object; S2b, calculating the relative coordinates of all data points with respect to the centroid; S2c, substituting all coordinates into the set constraint conditions to obtain the set of data points that satisfy them; S2d, performing Gaussian filtering on the coordinates of all data points that satisfy the constraints to obtain a correlation coefficient between every two data points; S2e, sorting the filtered results in descending order of uncertainty and selecting the pair of grasping points with the minimum uncertainty.
Preferably, in the embodiment of the present invention, the communication functions between the RGB-D observation camera, the mechanical arm, and the two-finger gripper are configured initially.
Further preferably, in the embodiment of the present invention, in S2d the correlation between two data points is represented by a covariance matrix, which is calculated using a kernel function.
Further preferably, in the embodiment of the present invention, the kernel is computed as a linear combination of kernel functions:

cov(xi, xj) = ω·kG(xi, xj) + (1 − ω)·kT(xi, xj)

where ω is any positive number between 0 and 1 and xi, xj are the distances, and the thin-plate kernel function is

kT(xi, xj) = 2‖xi − xj‖^3 − 3‖xi − xj‖^2.

The value of cov(xi, xj) serves as the measure of shape uncertainty in the present invention, i.e. the quantification of the uncertainty.
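As an illustration only, a minimal numerical sketch of this kernel combination is given below; the radial-basis form assumed for the Gaussian kernel kG, its length scale, and the weight ω = 0.5 are assumptions, since the text only specifies the thin-plate kernel explicitly.

```python
import numpy as np

def gaussian_kernel(xi, xj, length_scale=0.05):
    """Assumed RBF form of the Gaussian kernel kG; the length scale is a placeholder."""
    r = np.linalg.norm(np.asarray(xi) - np.asarray(xj))
    return np.exp(-r**2 / (2.0 * length_scale**2))

def thin_plate_kernel(xi, xj):
    """Thin-plate kernel as written in the text: kT = 2*r^3 - 3*r^2."""
    r = np.linalg.norm(np.asarray(xi) - np.asarray(xj))
    return 2.0 * r**3 - 3.0 * r**2

def combined_cov(xi, xj, w=0.5):
    """cov(xi, xj) = w*kG + (1 - w)*kT with 0 < w < 1 (w = 0.5 is an arbitrary choice)."""
    return w * gaussian_kernel(xi, xj) + (1.0 - w) * thin_plate_kernel(xi, xj)

# Example: covariance between two nearby surface points (coordinates are made up)
print(combined_cov([0.10, 0.02, 0.31], [0.11, 0.02, 0.30]))
```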
Further preferably, in the embodiment of the present invention, step S2 further comprises: after a pair of feasible grasping points is obtained, converting their coordinates, according to the calibrated pose relationship between the Kinect camera, the mechanical arm, and the two-finger gripper, into a pose command to which the mechanical arm should move and a control command for opening and closing the two-finger gripper, and sending them to the mechanical arm and the gripper respectively.
Further preferably, in the embodiment of the present invention, the constraint conditions in step S2 include: hand constraint, grasp stability constraint, and grasped object constraint.
Further preferably, in the embodiment of the present invention, the hand constraint includes the limitations imposed by the mechanical structures of different grippers; the grasp stability constraint includes that the friction force generated by the grasp can support the gravity of the grasped object and that the object is not touched prematurely.
Further preferably, in the embodiment of the present invention, the shape uncertainties of the feasible grasping points that satisfy all the constraints are ranked from low to high, and only the pair of grasping points with the smallest shape uncertainty is taken.
Further preferably, in the embodiment of the present invention, during the Gaussian filtering in step S2d a kernel function is applied to map the three-dimensional data points (x, y, z) to the four-dimensional space (d, nx, ny, nz), where d represents the distance between two data points, so as to quantify the shape uncertainty.
The invention further provides a mechanical arm grasping system comprising an RGB-D observation camera, a central controller, a mechanical arm, and a two-finger gripper, wherein a program stored in the central controller controls the RGB-D observation camera, the mechanical arm, and the two-finger gripper to execute the method described above.
Compared with the prior art, the invention has the following beneficial effect: a robust mechanical arm grasping function can be realized with a single vision sensor (an RGB-D observation camera) even when the shape of the object is uncertain.
Drawings
FIG. 1 is a schematic view of a grasping system according to an embodiment of the present invention;
FIG. 2 is a basic flow diagram of an embodiment of the present invention;
FIG. 3 is a diagram of the variables associated with grasping according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a coordinate system according to an embodiment of the present invention.
Reference numerals: 1. fixed RGB-D observation camera; 2. mechanical arm; 3. parallel two-finger gripper; 4. object to be grasped; 5. supporting surface on which the object to be grasped rests; 6. central controller; 7. debugging interface of the central controller; 8. data transmission line between the observation camera and the controller; 9. data transmission line between the mechanical arm and the controller.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and accompanying drawings. Wherein like reference numerals refer to like parts unless otherwise specified. It should be emphasized that the following description is merely exemplary in nature and is not intended to limit the scope of the invention or its application.
Referring to fig. 1, the grasping system of the present embodiment includes an RGB-D observation camera 1, a central controller (a desktop computer) 6 running the Ubuntu operating system (a Linux operating system), a mechanical arm 2, and a two-finger gripper 3. Items 8 and 9 in fig. 1 represent the data transmission lines between the observation camera, the mechanical arm, and the controller. Item 7 is the debugging interface of the central controller, provided for convenient interaction. Item 4 is the object to be grasped, and item 5 is the supporting surface on which it rests.
The mechanical arm grasping method consists of an offline planning part and an online execution part. Offline planning mainly comprises grasp planning for the mechanical arm and configuration of the gripper. Grasp planning comprises modeling of the object surface and extraction of feasible grasping points. The gripper configuration is mainly the pose of the gripper when grasping.
The RGB-D observation camera 1 collects the point cloud information of the surface of the object to be grasped (the surface visible in the camera view) and stores the point cloud data (the three-dimensional coordinates and normal vectors of all data points) in the OBJ file format, so that the data can be conveniently processed later in the central controller 6.
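The OBJ format stores vertices as `v x y z` lines and normals as `vn nx ny nz` lines. Purely as an illustration (the patent does not give export code, and the file name and array layout here are assumptions), a minimal sketch of writing a point cloud with normals to an OBJ file could look like this:

```python
import numpy as np

def save_point_cloud_as_obj(points, normals, path="cloud.obj"):
    """Write N x 3 point coordinates and N x 3 normals as OBJ 'v' / 'vn' records."""
    assert points.shape == normals.shape
    with open(path, "w") as f:
        for p in points:
            f.write(f"v {p[0]:.6f} {p[1]:.6f} {p[2]:.6f}\n")
        for n in normals:
            f.write(f"vn {n[0]:.6f} {n[1]:.6f} {n[2]:.6f}\n")

# Example with random data standing in for the camera output
pts = np.random.rand(100, 3)
nrm = np.tile([0.0, 0.0, 1.0], (100, 1))
save_point_cloud_as_obj(pts, nrm)
```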
The central controller 6 reads the OBJ file in which the point cloud data are stored, further processes the data in ROS (Robot Operating System), extracts feasible grasping points that satisfy the constraint conditions through the grasp planning algorithm, and, taking the grasping points as the input to the inverse kinematics of the mechanical arm, sends motion control commands from ROS to the mechanical arm 2 and the two-finger gripper 3.
The mechanical arm executes the motion command sent by the computer of the central controller 6 after the inverse kinematics solution and moves to the specified position, and then the two-finger gripper opens and closes according to the motion timing relationship in the motion control command to complete the grasping task.
The central controller 6 runs point cloud processing software (ROS Indigo). The point cloud information of the visible part of the object surface is collected by the RGB-D observation camera and used as raw data, and the data are processed on the central computer with ROS-based software, including the extraction of the three-dimensional coordinates and normal vector of each data point. The software runs on a Linux operating system (Ubuntu in this embodiment).
The working process is as follows:
the basic flow is shown in fig. 2.
Step 2: the object to be grasped is placed within the field of view of the RGB-D observation camera, and the central controller (the desktop computer) sends a command so that the RGB-D observation camera collects the surface information (mainly the three-dimensional coordinates of the object surface points and their corresponding normal vectors; at this point only the information of the object surface visible in the camera view can be collected).
Step 3: the data collected by the RGB-D observation camera are transmitted to the central controller (the desktop computer), which processes the data in ROS and finally obtains the coordinates of the feasible grasping points for the two-finger gripper.
The main work in step 3 includes:
a. the average coordinate of all data points is calculated as the centroid coordinate of the object.
b. The relative coordinates of all data points with respect to the centroid point are calculated.
c. Substitute all coordinates into the set constraint conditions to obtain the set of data points that satisfy them.
d. Perform Gaussian filtering on the coordinates of all data points that satisfy the constraints (Gaussian filtering process: all data points are substituted into the corresponding kernel functions for calculation) to obtain a correlation coefficient between every two data points.
The correlation coefficient is described below:
Here, we consider that two adjacent coordinate points in the point cloud of the object are correlated and follow a Gaussian distribution. The correlation between two data points can be represented by a covariance matrix and is calculated with a kernel function. Since a linear combination of two kernel functions is still a kernel function, the invention innovatively adopts a linear combination of kernel functions.
cov(xi, xj) = ω·kG(xi, xj) + (1 − ω)·kT(xi, xj)

where ω is any positive number between 0 and 1 and xi, xj are the distances.

Thin-plate kernel function: kT(xi, xj) = 2‖xi − xj‖^3 − 3‖xi − xj‖^2.

The value of cov(xi, xj) serves as the measure of shape uncertainty in the present invention, i.e. the quantification of the uncertainty.
Then the filtered results are sorted in descending order of uncertainty and the pair of grasping points with the minimum uncertainty is selected (since the system uses a two-finger gripper, only two grasping points can serve as contact points between the gripper and the object to be grasped).
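As a condensed sketch of sub-steps a to d and this final selection, the code below assumes the point cloud has already been loaded into a NumPy array; the constraint check is reduced to a simple opening-distance test, and the kernel weight, length scale, and opening value are illustrative assumptions rather than figures from the patent.

```python
import numpy as np
from itertools import combinations

def select_grasp_pair(points, max_opening=0.085, w=0.5, length_scale=0.05):
    """Return the index pair (i, j) with the lowest shape uncertainty."""
    # a. centroid = average coordinate of all data points
    centroid = points.mean(axis=0)
    # b. relative coordinates with respect to the centroid
    rel = points - centroid

    # c. keep only point pairs that satisfy the constraints (placeholder check:
    #    pair distance within the gripper opening; the full hand, stability and
    #    object constraints described in the text are not reproduced here)
    candidates = [(i, j) for i, j in combinations(range(len(points)), 2)
                  if np.linalg.norm(points[i] - points[j]) <= max_opening]

    # d. covariance via the combined kernel, used as the uncertainty measure
    def cov(i, j):
        r = np.linalg.norm(rel[i] - rel[j])
        k_g = np.exp(-r**2 / (2.0 * length_scale**2))   # assumed RBF form of kG
        k_t = 2.0 * r**3 - 3.0 * r**2                   # thin-plate kernel from the text
        return w * k_g + (1.0 - w) * k_t

    # e. take the pair with the smallest uncertainty value
    return min(candidates, key=lambda ij: cov(*ij))

# Example on synthetic data standing in for the filtered point cloud
pts = np.random.rand(50, 3) * 0.05
print(select_grasp_pair(pts))
```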
Step 4: after a pair of feasible grasping points is obtained, their coordinates are converted, according to the pose relationship between the Kinect camera, the UR5 mechanical arm, and the two-finger gripper calibrated in ROS, into a pose command to which the mechanical arm should move and a control command for opening and closing the gripper, which are sent to the mechanical arm and the gripper respectively.
Step 5: after receiving the motion control command from the central controller (the desktop computer), the mechanical arm responds first, moves to the specified spatial position, and adjusts the corresponding end-effector pose so that the two-finger gripper can grasp conveniently. After the mechanical arm has completed its control command, the gripper starts to execute its own command, opening its two fingers and then clamping the target object.
The related concepts mentioned in the above steps 1-5 are explained in detail below:
(1) Constraint conditions: in the process of grasping objects of uncertain shape, there are mainly three constraints, namely the hand constraint, the grasp stability constraint, and the grasped-object constraint.
Fig. 3 shows the variables associated with grasping: C1 and C2 are a pair of feasible grasping points that satisfy the constraints, n1 and n2 are the normal vectors at C1 and C2 respectively, g1 and g2 denote the points on the gripper that coincide with the feasible grasping points, so that g1g2 represents the grasping direction, and W represents the opening width of the gripper.
a. The hand constraint arises mainly because different grippers cannot complete certain grasping actions due to the limitations of their mechanical structure. In this embodiment, we require that the distance between a pair of feasible grasping points does not exceed the maximum opening distance of the ROBOTIQ two-finger gripper. In addition, because the gripper grasps in parallel, the grasping direction is required to be parallel to the normal vectors of the selected feasible grasping points; this prevents an oblique grasp from failing to hold the object.
b. Grasp stability constraint. The precondition for a stable grasp is that the friction force generated by the grasp can support the gravity of the object, so the friction angle of the gripper at a grasping point is required to be larger than the angle between the normal vectors of the two feasible grasping points. In addition, if one of the feasible grasping points lies in a concave region, the gripper easily touches the object prematurely when approaching it, toppling the object and causing the grasp to fail.
c. Grasped-object constraint: obviously, a feasible grasping point must lie on the surface of the object to be grasped.
In addition, more than one pair of feasible grasping points may satisfy the above constraints, so the shape uncertainties of the feasible grasping points that satisfy all of them are ranked from low to high, and only the pair with the smallest shape uncertainty is taken.
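For illustration only, the hand and stability constraints for a candidate pair could be checked along the following lines; the maximum opening value, friction coefficient, and angle tolerance used here are placeholder numbers, not figures from the patent.

```python
import numpy as np

def angle_between(u, v):
    """Angle in radians between two vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

def satisfies_constraints(c1, c2, n1, n2,
                          max_opening=0.085,        # assumed gripper opening limit in metres
                          friction_coeff=0.4,       # assumed friction coefficient
                          parallel_tol=np.deg2rad(10.0)):
    """Hand constraint and grasp stability constraint for a candidate pair (c1, c2)."""
    # Hand constraint 1: pair distance within the maximum gripper opening W
    if np.linalg.norm(c1 - c2) > max_opening:
        return False
    # Hand constraint 2: grasping direction g1g2 roughly parallel to both surface normals
    g = c2 - c1
    for n in (n1, n2):
        if min(angle_between(g, n), angle_between(g, -n)) > parallel_tol:
            return False
    # Stability constraint: friction angle larger than the angle between the two normals
    friction_angle = np.arctan(friction_coeff)
    normal_angle = min(angle_between(n1, n2), angle_between(n1, -n2))
    return friction_angle > normal_angle

# Example: two opposing points on a 6 cm wide face with antipodal normals
print(satisfies_constraints(np.array([0.0, 0.0, 0.0]), np.array([0.06, 0.0, 0.0]),
                            np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])))
```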
(2) The Gaussian filtering uses a linear combination of a Gaussian kernel function (which is robust) and a thin-plate kernel function (which adapts very well to smooth continuous surfaces) to ensure that the invention can handle as many grasped objects as possible and to suppress noise (when the object surface information is acquired, noise is unavoidable because of the limited accuracy of the RGB-D observation camera and the objective presence of environmental noise; the biggest problem with noise is that the errors propagate and accumulate).
(3) A kernel function defines a mapping from a low-dimensional space to a high-dimensional space. In the present invention, to reconstruct the object surface grasping model from the obtained point cloud data, we apply a kernel function to map the three-dimensional data points (x, y, z) to the four-dimensional space (d, nx, ny, nz) (see the description below for details), where d represents the distance between two data points, so as to quantify the shape uncertainty.
The distance d is explained further as follows: we average the three-dimensional coordinate values of all the observed point cloud data and regard the average as the centroid position of the target object. We define the distance at the centroid position as -1, which is used as a measure of the spread of the entire observable object point cloud. Then cov(di, dj) measures the correlation between two data points, and the whole set of such values reflects the shape uncertainty of the entire object. (Note: cov(di, dj) here is the cov(xi, xj) above.)
Mapping to the four-dimensional space: note that a specific data point in the point cloud has spatial coordinates (x, y, z) and, as an object surface point, also carries normal vector information (nx, ny, nz). Keeping both preserves the integrity and one-to-one correspondence of the data during processing and provides accurate values when the normal vectors are later used as a constraint criterion.
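A small sketch of this three-dimensional to four-dimensional lift is shown below, under the simplified reading that d is each point's distance from the centroid of the observed cloud; the distance convention (including the special value at the centroid) is an assumption.

```python
import numpy as np

def lift_to_4d(points, normals):
    """Map each (x, y, z) point with normal (nx, ny, nz) to (d, nx, ny, nz)."""
    centroid = points.mean(axis=0)                  # average of all observed points
    d = np.linalg.norm(points - centroid, axis=1)   # distance of each point to the centroid
    return np.column_stack([d, normals])            # N x 4 array (d, nx, ny, nz)

pts = np.random.rand(20, 3)
nrm = np.tile([0.0, 0.0, 1.0], (20, 1))
print(lift_to_4d(pts, nrm).shape)   # (20, 4)
```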
(4) The calibration process between the RGB-D observation camera, the mechanical arm, and the two-finger gripper is as follows: calibration determines the relative relationship (relative position and orientation) between the camera coordinate system and the robot. The camera is mounted in an eye-to-hand configuration, i.e. the camera is fixed relative to the base of the mechanical arm, which reduces the systematic error.
In fig. 4, frame B is the fixed (base) coordinate system of the mechanical arm, frame E is the coordinate system of the two-finger gripper, frame C is the coordinate system of the RGB-D observation camera, and frame D is the coordinate system of the calibration plate, which is fixed to the end of the arm. Calibration consists of driving the arm body to several different spatial positions (chosen so that the calibration plate is visible), so that the RGB-D observation camera can see the calibration plate throughout the process.
In this process the D and E frames are used. The spatial position and orientation between the camera coordinate system and the end-effector coordinate system are fixed, that is, the pose matrix T2 is invariant, so the pose matrix T4 of the RGB-D observation camera in the fixed coordinate system of the arm is T4 = T1 × T2 × T3. T1 to T4 refer to the link coordinate systems of the arm and describe its pose transformations in space: T1 is the spatial pose transformation matrix of arm joint 1 relative to frame B, the fixed coordinate system of the arm, i.e. the base; T2 is the spatial pose transformation matrix relative to T1; frame E describes the spatial pose of the two-finger end effector, and E is the spatial pose transformation matrix relative to T4.
For the two poses (i.e. the calibration plate is moved twice and the spatial pose of the observation camera is then computed; because the camera is fixed, the product of the whole chain of link transformation matrices is the same), T1 × T2 × T3 = T1' × T2 × T3', from which the specific values in frame B can be obtained, and the pose relationship T4 of the RGB-D observation camera in the fixed coordinate system of the mechanical arm can then be obtained.
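As an illustration of this chain (and not of the patent's actual calibration code), the sketch below composes 4 x 4 homogeneous transforms and checks that two different arm poses reproduce the same camera pose T4 when T2 is held fixed; all numeric poses are made up.

```python
import numpy as np

def transform(rz_deg, t):
    """4x4 homogeneous transform: rotation about z by rz_deg degrees, then translation t."""
    a = np.deg2rad(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = t
    return T

T4 = transform(30.0, [0.8, 0.2, 1.0])    # camera pose in the arm base frame (made up)
T2 = transform(5.0,  [0.0, 0.0, 0.1])    # fixed camera / end-effector relation (made up)

# Two different arm poses; T3 and T3p complete each chain so that the camera stays fixed
T1  = transform(40.0,  [0.3, 0.1, 0.5])
T1p = transform(-25.0, [0.4, -0.2, 0.6])
T3  = np.linalg.inv(T2) @ np.linalg.inv(T1)  @ T4
T3p = np.linalg.inv(T2) @ np.linalg.inv(T1p) @ T4

# Because the camera is fixed, both chains give the same product T1*T2*T3 = T1'*T2*T3' = T4
print(np.allclose(T1 @ T2 @ T3, T1p @ T2 @ T3p))   # True
```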
Based on the Gaussian process, the method applies the linearly combined kernel function to reconstruct the surface of an object of uncertain shape, extracts feasible grasping points under the corresponding constraint conditions, and uses them as the input to the mechanical arm control system to complete the grasping task. This embodiment achieves: 1. robust grasping; 2. good handling of the shape uncertainty encountered during grasping. The method requires a small amount of data and uses a single camera, so its cost is low and its practicability is good.
Mechanical arm grasping involves many fields, including mechanics, control, computer science, artificial intelligence, and so on. The invention can therefore be applied in office automation, automatic vending, logistics storage and transportation, small workshop machining, robot education, industrial production, and other fields, and can complete tasks such as logistics sorting and transportation, part assembly, car surface polishing, 3D printing, laser engraving, and circuit board welding.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the specific implementation of the invention is not to be considered limited to these descriptions. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications can be made without departing from the spirit of the invention, and all of them are considered to fall within the scope of the invention.
Claims (8)
1. A mechanical arm grasping method, characterized by comprising the following steps:
S1, collecting point cloud information of the surface of the object to be grasped;
S2, processing the point cloud data and extracting feasible grasping points that satisfy constraint conditions through a grasp planning algorithm;
S3, using the grasping points as the input to the inverse kinematics of the mechanical arm and sending motion control commands to the mechanical arm and the two-finger gripper;
S4, the mechanical arm executing the motion control command and moving to the specified position, after which the two-finger gripper opens and closes according to the motion timing relationship in the motion control command to complete the grasping task;
wherein step S2 comprises:
S2a, calculating the average coordinate of all data points as the centroid coordinate of the object;
S2b, calculating the relative coordinates of all data points with respect to the centroid;
S2c, substituting all coordinates into the set constraint conditions to obtain the set of data points that satisfy the constraint conditions;
S2d, performing Gaussian filtering on the coordinates of all data points that satisfy the constraints to obtain a correlation coefficient between every two data points;
S2e, sorting the filtered results in descending order of uncertainty and selecting the pair of grasping points with the minimum uncertainty;
wherein in S2d the correlation between two data points is represented by a covariance matrix, which is calculated using a kernel function;
the kernel is computed as a linear combination of kernel functions:
cov(xi, xj) = ω·kG(xi, xj) + (1 − ω)·kT(xi, xj)
where ω is any positive number between 0 and 1 and xi, xj are the distances,
thin-plate kernel function: kT(xi, xj) = 2‖xi − xj‖^3 − 3‖xi − xj‖^2,
and the value of cov(xi, xj) serves as the measure of shape uncertainty, i.e. the quantification of the uncertainty.
2. The mechanical arm grasping method according to claim 1, wherein the communication function between the RGB-D observation camera and the mechanical arm, and the communication function between the RGB-D observation camera and the two-finger gripper, are configured initially.
3. The mechanical arm grasping method according to claim 2, wherein step S2 further comprises: after a pair of feasible grasping points is obtained, converting their coordinates, according to the calibrated pose relationship between the Kinect camera, the mechanical arm, and the two-finger gripper, into a pose command to which the mechanical arm should move and a control command for opening and closing the two-finger gripper, and sending them to the mechanical arm and the gripper respectively.
4. The mechanical arm grasping method according to claim 1, wherein the constraint conditions in step S2 comprise: a hand constraint, a grasp stability constraint, and a grasped-object constraint.
5. The mechanical arm grasping method according to claim 4, wherein the hand constraint comprises the maximum opening distance and grasping direction determined by the mechanical structure limitations of different grippers; the grasp stability constraint comprises that the friction force generated by the grasp can support the gravity of the grasped object and that the object is not touched prematurely.
6. The mechanical arm grasping method according to claim 4, wherein the shape uncertainties of the feasible grasping points that satisfy all the constraints are ranked from low to high, and only the pair of grasping points with the smallest shape uncertainty is taken.
7. The mechanical arm grasping method according to claim 6, wherein in step S2d a kernel function is applied during the Gaussian filtering to map the three-dimensional data points (x, y, z) to the four-dimensional space (d, nx, ny, nz), where d represents the distance between two data points, thereby quantifying the shape uncertainty.
8. A mechanical arm grasping system comprising an RGB-D observation camera, a central controller, a mechanical arm, and a two-finger gripper, wherein the central controller stores a program for controlling the RGB-D observation camera, the mechanical arm, and the two-finger gripper to perform the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810736694.3A CN108858193B (en) | 2018-07-06 | 2018-07-06 | Mechanical arm grabbing method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810736694.3A CN108858193B (en) | 2018-07-06 | 2018-07-06 | Mechanical arm grabbing method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108858193A CN108858193A (en) | 2018-11-23 |
CN108858193B true CN108858193B (en) | 2020-07-03 |
Family
ID=64299559
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810736694.3A Active CN108858193B (en) | 2018-07-06 | 2018-07-06 | Mechanical arm grabbing method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108858193B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110271000B (en) * | 2019-06-18 | 2020-09-22 | 清华大学深圳研究生院 | Object grabbing method based on elliptical surface contact |
CN110103231B (en) * | 2019-06-18 | 2020-06-02 | 王保山 | Precise grabbing method and system for mechanical arm |
CN111784218B (en) * | 2019-08-15 | 2024-09-24 | 北京京东乾石科技有限公司 | Method and device for processing information |
CN110580725A (en) * | 2019-09-12 | 2019-12-17 | 浙江大学滨海产业技术研究院 | A kind of box sorting method and system based on RGB-D camera |
CN110842984A (en) * | 2019-11-22 | 2020-02-28 | 江苏铁锚玻璃股份有限公司 | Power mechanical arm with radiation resistance and high-precision positioning operation |
CN111112885A (en) * | 2019-11-26 | 2020-05-08 | 福尼斯智能装备(珠海)有限公司 | Welding system with vision system for feeding and discharging workpieces and self-adaptive positioning of welding seams |
CN111216124B (en) * | 2019-12-02 | 2020-11-06 | 广东技术师范大学 | Robot vision guiding method and device based on integration of global vision and local vision |
CN112589795B (en) * | 2020-12-04 | 2022-03-15 | 中山大学 | Vacuum chuck mechanical arm grabbing method based on uncertainty multi-frame fusion |
CN113305847B (en) * | 2021-06-10 | 2022-11-01 | 上海大学 | Building 3D printing mobile mechanical arm station planning method and system |
CN113500017B (en) * | 2021-07-16 | 2023-08-25 | 上海交通大学烟台信息技术研究院 | An intelligent system and method for material sorting in unstructured scenarios |
CN113771045B (en) * | 2021-10-15 | 2022-04-01 | 广东工业大学 | Vision-guided right-angle robot mobile phone middle frame height adaptive positioning and grasping method |
CN117549338B (en) * | 2024-01-09 | 2024-03-29 | 北京李尔现代坦迪斯汽车系统有限公司 | Grabbing robot for automobile cushion production workshop |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3876234B2 (en) * | 2003-06-17 | 2007-01-31 | ファナック株式会社 | Connector gripping device, connector inspection system and connector connection system equipped with the same |
CN102527643B (en) * | 2010-12-31 | 2014-04-09 | 东莞理工学院 | Sorting manipulator structure and product sorting system |
WO2012129251A2 (en) * | 2011-03-23 | 2012-09-27 | Sri International | Dexterous telemanipulator system |
US9452531B2 (en) * | 2014-02-04 | 2016-09-27 | Microsoft Technology Licensing, Llc | Controlling a robot in the presence of a moving object |
CN104048607A (en) * | 2014-06-27 | 2014-09-17 | 上海朗煜电子科技有限公司 | Visual identification and grabbing method of mechanical arms |
US9687983B1 (en) * | 2016-05-11 | 2017-06-27 | X Development Llc | Generating a grasp pose for grasping of an object by a grasping end effector of a robot |
2018
- 2018-07-06 CN CN201810736694.3A patent/CN108858193B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN108858193A (en) | 2018-11-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108858193B (en) | Mechanical arm grabbing method and system | |
CN108453743B (en) | Mechanical arm grabbing method | |
Zhu et al. | Dual-arm robotic manipulation of flexible cables | |
Barbieri et al. | Design, prototyping and testing of a modular small-sized underwater robotic arm controlled through a Master-Slave approach | |
Suárez-Ruiz et al. | A framework for fine robotic assembly | |
Calli et al. | Grasping of unknown objects via curvature maximization using active vision | |
Huebner et al. | Grasping known objects with humanoid robots: A box-based approach | |
JP2015213973A (en) | Picking device and picking method | |
CN105184019A (en) | Robot grabbing method and system | |
US20220331964A1 (en) | Device and method for controlling a robot to insert an object into an insertion | |
US12131483B2 (en) | Device and method for training a neural network for controlling a robot for an inserting task | |
CN106003036A (en) | Object grabbing and placing system based on binocular vision guidance | |
Shahverdi et al. | A simple and fast geometric kinematic solution for imitation of human arms by a NAO humanoid robot | |
Tsarouchi et al. | Vision system for robotic handling of randomly placed objects | |
Harada et al. | Project on development of a robot system for random picking-grasp/manipulation planner for a dual-arm manipulator | |
Bierbaum et al. | Grasp affordances from multi-fingered tactile exploration using dynamic potential fields | |
Sampath et al. | Review on human-like robot manipulation using dexterous hands. | |
Vithanage et al. | Autonomous rolling-stock coupler inspection using industrial robots | |
Lee et al. | A robot teaching framework for a redundant dual arm manipulator with teleoperation from exoskeleton motion data | |
Schiebener et al. | Discovery, segmentation and reactive grasping of unknown objects | |
Wang et al. | Design of a voice control 6DoF grasping robotic arm based on ultrasonic sensor, computer vision and Alexa voice assistance | |
Chang et al. | Model-based manipulation of linear flexible objects with visual curvature feedback | |
Almeida et al. | Bimanual folding assembly: Switched control and contact point estimation | |
CN113894774A (en) | A robot grasping control method, device, storage medium and robot | |
Kawasaki et al. | Virtual robot teaching for humanoid hand robot using muti-fingered haptic interface |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||