
CN102848388A - Multi-sensor based positioning and grasping method for service robot - Google Patents


Info

Publication number
CN102848388A
CN102848388A (application CN2012100967434A / CN201210096743A)
Authority
CN
China
Prior art keywords
robot
coordinate system
target object
positioning
arm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012100967434A
Other languages
Chinese (zh)
Inventor
刘路
李昕
吕小听
张德兴
Current Assignee
SHANGHAI UNIVERSITY
Original Assignee
SHANGHAI UNIVERSITY
Priority date
Filing date
Publication date
Application filed by SHANGHAI UNIVERSITY filed Critical SHANGHAI UNIVERSITY
Priority to CN2012100967434A priority Critical patent/CN102848388A/en
Publication of CN102848388A publication Critical patent/CN102848388A/en
Pending legal-status Critical Current

Landscapes

  • Manipulator (AREA)

Abstract

The invention discloses a multi-sensor based positioning and grasping method for a service robot. The method comprises the following steps: the tags of an RFID transceiver module are placed on the ground according to a certain rule to construct a gridded coordinate system; the robot combines radio-frequency identification with binocular vision to walk freely, localize itself and search for a target in the constructed gridded environment; the spatial coordinates of the object acquired by the binocular vision system are transformed into the arm coordinate system, the arm is modeled mathematically with an improved D-H model, and each joint angle of the humanoid arm is then solved with a new inverse-kinematics method, so that following and grasping of the object are realized. The method thus achieves multi-sensor based self-localization, target following and grasping for a service robot.

Description

Multi-sensor based positioning and grasping method for a service robot
Technical field
The invention belongs to the field of robotics, and specifically relates to a multi-sensor based positioning and grasping method for a service robot.
Background technology
Over the past five or six decades, researchers have devoted continuous effort to robot application technology. In the early 1960s, with the growth of industry, robots helped people perform dangerous operations and tasks; however, those robots worked mainly in structured environments and could only operate according to fixed patterns. With the development of technology and the demands of daily human life, robots now face a series of challenges such as unstructured and complex environments. Therefore, for a robot to serve people in a simulated home environment, it should possess a certain interactive capability: it should intelligently recognize objects according to voice commands, perform autonomous localization and navigation, avoid obstacles according to its perception of the environment, and then grasp the specified item and hand it to the designated person.
Summary of the invention
The object of the invention is to address the deficiencies of current technology by proposing a multi-sensor based positioning and grasping method for a service robot, enabling the robot to provide more intelligent service to humans in a structured environment. In this method the robot interacts intelligently with the external environment through vision, radio-frequency, ultrasonic and photoelectric sensors and, in a gridded environment constructed from RFID tags, accomplishes flexible intelligent behaviors such as free walking, target search, and target-object following and grasping with the gripper.
To achieve the above object, the design of the invention is as follows:
The experimental platform of the multi-sensor based service robot positioning and grasping method of the invention comprises a binocular image acquisition module, an RFID transceiver module, a bottom-wheel motion module and a humanoid manipulator control module. The tags of the radio-frequency (RFID) transceiver module are placed on the ground according to a certain rule to construct the gridded environment. The robot combines RFID with binocular vision to walk freely, localize itself and search for the target in the constructed gridded environment. The spatial coordinates of the object obtained by the binocular vision system are transformed into the arm coordinate system; the arm is modeled with an improved D-H model and the inverse solution of the humanoid arm is obtained, realizing localization and grasping of the object. Feedforward and feedback strategies based on machine vision continually adjust the position of the gripper, realizing following and accurate grasping of the target object.
The multi-sensor based service robot positioning and grasping method of the invention comprises the following functional modules:
(1) Autonomous navigation and localization module: RFID and the binocular camera are combined to realize autonomous navigation, localization and grasping of the target object.
(2) Humanoid-manipulator open-loop handling module: first, the binocular system obtains the three-dimensional coordinates of the target object from its color, shape and texture features; after a coordinate transformation, the coordinates of the target object relative to the right manipulator are obtained; the 6+1 degree-of-freedom manipulator, reduced to 4+1 degrees of freedom, is modeled mathematically with the improved D-H model, and a new inverse-kinematics algorithm proposed by the authors yields the angle of each arm joint, so that the target object is grasped along a specific trajectory.
(3) Humanoid-manipulator closed-loop handling module: machine vision computes the error between the gripper and the object, and a closed-loop strategy continually adjusts the gripper position to reduce that error, realizing following and accurate grasping of the target object.
The binocular image acquisition module means that the system is based on binocular stereo vision. The RFID transceiver uses passive RFID tags that contain the coordinate information of the environment. The bottom-wheel motion module adopts two-wheel differential drive, with two color-mark sensors mounted on the bottom. The humanoid manipulator has 6+1 degrees of freedom; it is reduced from 6+1 to 4+1 degrees of freedom, modeled mathematically with the improved D-H model, and the inverse kinematics of the simplified equations is solved with a newly proposed inverse algorithm, so that the manipulator can follow the target object in real time and grasp it flexibly.
The radio-frequency (RFID) transceiver comprises RFID tags and a radio-frequency induction transceiver coil. The coil is mounted on the bottom of the robot, and the tags are placed on the ground at fixed intervals to construct the gridded environment; the robot roaming in the gridded environment is shown in Fig. 5.
The gridded environment constructed from RFID tags is used to determine the robot's initial position, to judge and plan its heading, and to adjust its position in front of the target object; the flow of heading judgment and planning is shown in Fig. 6.
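The heading judgment described above can be sketched as follows. This is an illustrative reconstruction, not the patent's code: the function names and the use of two successive tag reads to infer the current direction of travel are assumptions based on the embodiment description.

```python
import math

def plan_heading(tag1, tag2, goal):
    """Given the grid coordinates of two successively read RFID tags and the
    goal tag, return the inferred heading and the rotation needed toward the
    goal. Illustrative sketch; names and flow are assumptions."""
    # Current direction of travel, inferred from the two tag reads.
    heading = math.atan2(tag2[1] - tag1[1], tag2[0] - tag1[0])
    # Direction from the current tag toward the goal tag.
    to_goal = math.atan2(goal[1] - tag2[1], goal[0] - tag2[0])
    # Required rotation, normalized to (-pi, pi].
    turn = (to_goal - heading + math.pi) % (2 * math.pi) - math.pi
    return heading, turn
```

For example, a robot that moved from tag (0, 0) to tag (1, 0) and must reach (1, 1) is heading along the x axis and needs a 90-degree left turn.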
Suppose the steering-angle computation of the robot is as illustrated in Fig. 7, and the path the robot must walk is A-B-C. The angle θ through which it must rotate at point B can be solved by the law of cosines (the original equation image is not preserved; reconstructed from the stated method, with the exterior-angle form as an assumption): cos ∠ABC = (|AB|² + |BC|² − |AC|²) / (2·|AB|·|BC|), and the turning angle is θ = π − ∠ABC. (1)
When the robot searches for the target object, it obtains the final destination through voice interaction with the user, from which the robot's route can be planned. When the robot is close to the target object (about 1 meter), it adjusts its own attitude and controls the distance to the target, achieving accurate positioning; the algorithm flow chart is shown in Fig. 8.
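The law-of-cosines turn at waypoint B can be computed as below. This is a sketch of formula (1), whose image is not preserved; the exterior-angle form (π minus the interior angle at B) is an assumption about how the turning angle is defined.

```python
import math

def turn_angle(a, b, c):
    """Turning angle at waypoint B for the path A-B-C, via the law of cosines.
    Reconstruction of formula (1); the exterior-angle form is an assumption."""
    ab = math.dist(a, b)
    bc = math.dist(b, c)
    ac = math.dist(a, c)
    # Interior angle at B between segments BA and BC.
    interior = math.acos((ab ** 2 + bc ** 2 - ac ** 2) / (2 * ab * bc))
    # The robot must rotate by the supplement to continue along B-C.
    return math.pi - interior
```

For a right-angle corner A = (0, 0), B = (1, 0), C = (1, 1), the robot turns 90 degrees at B.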
The humanoid manipulator has 6+1 degrees of freedom and is reduced from 6+1 to 4+1 degrees of freedom; the equivalent simplified arm models with 6+1 and 4+1 degrees of freedom are shown in Figs. 9 and 10.
In the simplified model, solving with the original D-H model is computationally cumbersome and restricts the free orientation of the gripper (the original inline symbol image is not preserved). The invention therefore adopts an improved D-H model: a Y column is added to the original D-H parameter table, allowing translation along the Y axis. The specific practice is as follows:
(1) Robot arm modeling
Let a denote the length of the common perpendicular between the axes of two adjacent joints (joint offset); α denote the angle between two adjacent z axes (joint twist); d denote the distance along the z axis between two adjacent common perpendiculars; and θ denote the rotation angle about the z axis. By substituting the parameters from the parameter table into the A matrix, the transformation between every two adjacent joints can be written, where the A matrix is the transformation matrix from one joint to the previous joint, as shown in formulas (2)-(5) (the original matrix images are not preserved).
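Since the images of formulas (2)-(5) are lost, the following sketch shows one plausible form of the per-joint transform: the standard D-H homogeneous matrix with an extra translation b along the joint's y axis, matching the text's description of the added Y column. The exact matrices in the patent may differ.

```python
import math

def dh_transform(theta, d, a, alpha, b=0.0):
    """Homogeneous transform between adjacent joints. theta, d, a, alpha follow
    the standard D-H convention; b is the extra Y-column translation that the
    improved model adds. A sketch, not the patent's exact formulas (2)-(5)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    # The y translation b, expressed in the rotated frame, contributes
    # (-b*sin(theta), b*cos(theta), 0) to the position column.
    return [
        [ct, -st * ca,  st * sa, a * ct - b * st],
        [st,  ct * ca, -ct * sa, a * st + b * ct],
        [0.0,      sa,       ca, d],
        [0.0,     0.0,      0.0, 1.0],
    ]
```

With all classic parameters zero and b = 1, the transform is a pure unit translation along y, which is the new degree of freedom the Y column provides.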
(2) Forward solution
Given each joint rotation angle, the forward kinematics solution of the robot arm can be obtained from the pose matrices between joints, as shown in formula (6) (the original matrix image is not preserved); the end-effector pose T is the product of the successive joint transformation matrices. (6)
The fourth column of T is given by formula (7) (the original image is not preserved), where s1, c1, s2, c2, etc. denote sinθ1, cosθ1, sinθ2, cosθ2, and so on. (7)
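The forward solution described above can be sketched by chaining the per-joint matrices; the fourth column of the product is the gripper position, as formula (7) describes. The link transforms themselves are assumed inputs here, since the patent's parameter table is not preserved.

```python
def mat_mul(A, B):
    """4x4 homogeneous matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def forward_kinematics(transforms):
    """Chain the per-joint A matrices into the end-effector pose T, as in
    formula (6). Returns the full 4x4 pose; its fourth column holds the
    gripper position."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]  # identity
    for A in transforms:
        T = mat_mul(T, A)
    return T
```

Chaining two pure translations (1, 2, 3) gives the expected composed position (2, 4, 6) in the fourth column.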
(3) Inverse solution
Let the manipulator end-effector pose matrix be as in formula (8) (the original matrix image is not preserved). (8)
In formula (8), P[1], P[2] and P[3] are the direction cosines in the arm coordinate system of the pitch, roll and yaw of the end gripper, and P[4] is the current position of the gripper. The inverse solution of the manipulator solves for the unknown joint angles (the original symbol images are not preserved).
The inverse solution procedure is:
1. From the flow chart of Fig. 8 it is known that the target object is almost at the center of the robot's binocular vision. To facilitate grasping, the plane of the gripper can be set perpendicular to the object, so that the gripper opening faces the target as widely as possible and grasping is easiest, as shown in Fig. 10. It then follows that a fixed relation holds between θ2, the inward rotation angle of joint 2, and θ3, the inward rotation angle of joint 3 (the original equation image is not preserved); imposing this relation keeps the end gripper perpendicular to the object plane.
2. Suppose the final position of the gripper is given by the expression of the original symbol image (not preserved). From the analysis in step 1, the spatial position of joint 5 of the robot is given by formula (9) (image not preserved). (9)
3. Let the relevant partial transformation product equal T[4] (the original expression image is not preserved). Dividing the second row of T[4] by the first row yields formula (10) (image not preserved). (10)
4. From the geometric relations (shown in Fig. 11), formula (11) is obtained (image not preserved). (11)
5. The third row of T[4] gives formula (12) (image not preserved). (12)
The value of θ3 is known, so only θ2 remains unknown in formula (12); it can therefore be assumed that formulas (13) to (16) hold (images not preserved), from which θ2 is solved. (13)-(16)
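The images of formulas (10)-(16) are lost, so the exact closed-form solution cannot be reproduced. The sketch below illustrates the kind of geometric steps such a solution uses (atan2 for a base angle from a row ratio, law of cosines for an elbow angle) on a planar two-link arm; it is a stand-in for the patent's 4+1 degree-of-freedom derivation, and the link-length parameters are assumptions.

```python
import math

def planar_two_link_ik(x, y, l1, l2):
    """Geometric inverse solution for a planar two-link arm reaching (x, y).
    Illustrates the atan2 / law-of-cosines steps typical of closed-form IK;
    NOT the patent's exact formulas (10)-(16), whose images are lost."""
    r2 = x * x + y * y
    # Law of cosines gives the elbow angle.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    elbow = math.acos(c2)
    # Base angle: direction to the target minus the wrist-offset correction.
    base = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                         l1 + l2 * math.cos(elbow))
    return base, elbow
```

With unit links and target (1, 1), the elbow bends 90 degrees and the base angle is zero, which forward kinematics confirms: (cos 0 + cos 90°, sin 0 + sin 90°) = (1, 1).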
(4) Hand-eye coordinate transformation
To convert the depth information obtained by vision into the position the gripper must reach, the transformation between the visual coordinate system and the manipulator coordinate system must be established. The robot's hand-eye model is shown in Fig. 12. The implementation steps of the hand-eye transformation are as follows:
1. Premultiply by the rotation matrix of formula (17) (the original matrix image is not preserved), where x, y and z are the target coordinates obtained by the binocular system; this premultiplication yields the position of the target object in binocular vision as if the robot's head were raised level. (17)
2. Make the origin of the visual coordinate system after the step-1 transformation coincide with the origin of the right-manipulator coordinate system; the transformation formula is (18) (image not preserved). (18)
3. Make the x, y and z axis directions of the visual coordinate system fully coincide with those of the manipulator coordinate system; the transformation formula is (19) (image not preserved). (19)
Here the left-hand side of formula (19) (original symbol image not preserved) is the position the gripper must reach, θ is the angle by which the robot's head tilts forward, and the remaining translation offsets (original symbol images not preserved) are obtained by measurement.
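The hand-eye conversion can be sketched as below. Because the matrices of formulas (17)-(19) are lost, the rotation axis (the camera x axis) and the measured camera-to-arm offset are assumptions; only the overall structure (rotate by the head tilt, then translate the origin onto the arm frame) comes from the text.

```python
import math

def vision_to_arm(p_cam, head_tilt, arm_origin_offset):
    """Convert a binocular target point into the right-arm frame: rotate by the
    head's forward-tilt angle (step 1), then shift the origin onto the arm
    frame (steps 2-3). Axis choice and offset are assumptions."""
    x, y, z = p_cam
    # Undo the head tilt: rotation about the camera x axis (assumed axis).
    yt = y * math.cos(head_tilt) - z * math.sin(head_tilt)
    zt = y * math.sin(head_tilt) + z * math.cos(head_tilt)
    # Shift by the measured camera-to-arm translation offsets.
    ox, oy, oz = arm_origin_offset
    return (x + ox, yt + oy, zt + oz)
```

With zero tilt and zero offset the point passes through unchanged; a 90-degree tilt maps the camera y axis onto the arm z axis.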
The real-time target-following and grasping part of the humanoid manipulator means that the robot uses the feedforward and feedback strategies of machine vision to continually adjust the position of the gripper, realizing following and accurate grasping of the target object. The specific practice is as follows:
(1) Feedforward control of machine vision
According to visual information and arm kinematics, the angle of each joint corresponding to the desired gripper position is computed, and the robot hand is placed near the target object; the concrete flow is shown in Fig. 13.
(2) Feedback of machine vision
Because of mechanical clearance, transformation errors arise between the vision system and the arm coordinates, often causing the first grasp to fail. The invention compensates for this error by extracting the distance between two different colors in the binocular view: a red tag is attached to the gripper and a green tag to the target object, and a kinematic error function is defined on the distance between them (the original formula image is not preserved). The concrete flow is shown in Fig. 14.
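The closed-loop correction driven by the red/green marker distance can be sketched with the PD regulation the technical scheme names. The gain values and the simple one-dimensional error model are illustrative assumptions.

```python
class PDCorrector:
    """PD regulation of the gripper toward the target, driven by the distance
    between the red tag on the gripper and the green tag on the object.
    Gains kp, kd are assumed values."""
    def __init__(self, kp=0.6, kd=0.1):
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def step(self, error):
        """Return the position correction for the current error sample."""
        delta = error - self.prev_error   # discrete derivative term
        self.prev_error = error
        return self.kp * error + self.kd * delta

def converge(start_error, tol=0.01, max_iter=100):
    """Closed loop: apply corrections until the marker distance is small."""
    pd = PDCorrector()
    error, steps = start_error, 0
    while abs(error) > tol and steps < max_iter:
        error -= pd.step(error)   # moving the gripper shrinks the error
        steps += 1
    return error, steps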
The localization-and-grasping part of the humanoid manipulator means that after the robot has navigated to the designated position it controls the manipulator to grasp the target object. The key steps are as follows:
(1) The binocular system obtains the three-dimensional coordinates of the target object from its color, shape and texture features.
(2) After a coordinate transformation, the coordinates of the target object relative to the right arm are obtained.
(3) The inverse solution of the right arm is computed, yielding the angle of each arm joint, and the target object is grasped along a specific trajectory.
Based on the above points, the implementation steps of the invention are as follows:
(1) The final destination of the robot is learned through voice; tags containing environment coordinates are placed on the ground according to a certain rule by means of the RF transceiver, constructing the gridded environment and realizing the initial localization of the robot;
(2) Binocular vision extracts the depth information of the target, realizing accurate localization of the robot within the constructed gridded environment;
(3) The arm is modeled with the improved D-H model and the forward solution of the humanoid arm is obtained, in preparation for the flexible grasping of the target object in the next step;
(4) The spatial coordinates of the object obtained by the binocular vision system are transformed into the arm coordinate system, and the newly proposed inverse algorithm computes the angle of each joint;
(5) The feedback control strategy of machine vision continually adjusts the position of the gripper, realizing following of the target object;
(6) The tag address first received by the RF module serves as the new destination, and the robot advances to that destination.
According to the above inventive concept, the invention adopts the following technical scheme:
A multi-sensor based positioning and grasping method for a service robot, characterized in that: the robot is initially localized with the radio-frequency module; binocular vision is used for accurate localization of the robot, the bottom-wheel motion module adopts two-wheel differential drive, and two color-mark sensors are mounted on the bottom; the robot arm is modeled mathematically with the improved D-H model; the robot grasps the target for the first time according to the given path planning and the new inverse algorithm; if the first grasp fails, binocular vision obtains the depth information of the red tag on the gripper, compares it with the depth information of the object, and a PD algorithm driven by the resulting error finally realizes a successful grasp.
Compared with the prior art, the invention has the following evident and substantive distinguishing features and marked improvements: machine vision and RFID are fused and applied in a service robot, giving full play to the advantages of both kinds of sensors; free walking in the gridded environment, target search, self-localization, target-object following by the gripper, flexible grasping and other intelligent behaviors are realized, greatly improving grasping accuracy.
Description of drawings
Fig. 1 is the flow chart of the multi-sensor based service robot positioning and grasping method;
Fig. 2 is the system architecture diagram of the invention;
Fig. 3 is an external view of the robot;
Fig. 4 shows the experimental results of the embodiment of the invention;
Fig. 5 shows the constructed gridded environment;
Fig. 6 is the initial-positioning flow chart;
Fig. 7 is the steering-angle diagram of the robot;
Fig. 8 is the accurate-positioning flow chart of the robot;
Fig. 9 is the 6+1 degree-of-freedom arm model;
Fig. 10 is the 4+1 degree-of-freedom arm model;
Fig. 11 shows the geometric relations used in solving the unknown joint angles (original symbol images not preserved);
Fig. 12 is the transition diagram of the hand-eye coordinates;
Fig. 13 is the principle diagram of robot vision feedforward;
Fig. 14 is the principle diagram of robot vision feedback.
The specific embodiment
Preferred embodiments of the invention are described in detail below in conjunction with the accompanying drawings:
Example 1:
Referring to Fig. 1, this multi-sensor based service robot positioning and grasping method is characterized by the following concrete operation steps:
(1) The robot is initially localized with the radio-frequency module. The RF transceiver comprises an antenna, a receiver and passive RFID tags, each tag containing information such as the coordinates of the environment;
(2) Binocular vision is used for accurate localization of the robot; the bottom-wheel motion module adopts two-wheel differential drive, with two color-mark sensors mounted on the bottom;
(3) The robot arm is modeled mathematically with the improved D-H model;
(4) The robot grasps the target for the first time according to the given path planning and the new inverse algorithm;
(5) If the first grasp fails, binocular vision obtains the depth information of the red tag on the gripper, compares it with the depth information of the object, and the PD algorithm driven by the resulting error finally realizes a successful grasp.
Example 2:
This embodiment is basically identical to Example 1, with the following special features. The RF transceiver in step (1) comprises RFID tags and a radio-frequency induction transceiver coil; the coil is mounted on the bottom of the robot, the tags are placed on the ground at fixed intervals to construct the gridded coordinate system, and, fused with the binocular information, the route of travel is judged and planned by continually reading the coordinate information in the tag data blocks, realizing the initial localization. The accurate-localization method of step (2) is: the binocular vision system extracts the color information of the object and performs HSV (Hue-Saturation-Value) threshold segmentation to obtain the three-dimensional coordinates of the target object; from the relation between the robot's current tag position and the acquired target coordinates, the robot's route is planned so that it arrives in front of the object to be grasped. The essence of the improved D-H model of step (3) is that the origin of the preceding coordinate frame may undergo several changes (translation or rotation) to coincide with the origin of the following frame, as long as this principle is respected; adding a Y column to the D-H parameter table allows translation along the Y axis, which reduces the computational errors introduced by trigonometric transformations; although one more matrix multiplication is added, homogeneous transformation matrices are comparatively easy to multiply. Step (4) is the inverse-kinematics algorithm of the manipulator. Step (5), manipulator localization and grasping of the target object, comprises a first grasp and, on failure, a second grasp: the feedforward of machine vision first realizes the initial grasp, and the feedback strategy then continually adjusts the position of the gripper to approach the target object gradually until the grasp succeeds.
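The HSV threshold segmentation used for accurate localization can be sketched as below. The threshold values and the pixel-list representation are illustrative assumptions; a real implementation would run on full stereo images, after which the segmented centroid is triangulated in both views.

```python
import colorsys

def hsv_mask(pixels, h_range, s_min=0.4, v_min=0.2):
    """HSV threshold segmentation of RGB pixels: keep pixels whose hue falls
    in h_range (fractions of a turn) with enough saturation and value.
    Threshold values are illustrative assumptions."""
    lo, hi = h_range
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(lo <= h <= hi and s >= s_min and v >= v_min)
    return mask

def centroid(coords, mask):
    """Average the (x, y) image coordinates of the masked pixels; the centroid
    in both views would then be triangulated to get the 3-D target point."""
    pts = [c for c, m in zip(coords, mask) if m]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

A pure green pixel (hue near 1/3 of a turn) passes a green mask, while red and dark pixels are rejected.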
Example 3:
As shown in Fig. 2, the service robot positioning and grasping system of this embodiment consists of a binocular image acquisition module, a speech recognition module, an RFID transceiver module, a bottom-wheel motion module and two humanoid-manipulator control modules.
As shown in Fig. 3, the experimental-platform robot of this example has a binocular vision camera, 3 front ultrasonic sensors, 2 side ultrasonic sensors, 7 chassis obstacle-avoidance sensors, 2 loudspeakers, 2 manipulators and 1 touch screen; the user can control the robot through the buttons of the human-machine interface. The user can connect an external microphone and talk directly with the robot; the dialogue content can be designed by the user. In addition, robot motion, information and entertainment functions can be selected through a remote controller.
As shown in Fig. 4, this embodiment simulates a home environment in which the robot serves people intelligently. The embodiment mainly comprises the following steps:
Step 1: The user speaks into the microphone; the speech is recognized by the speech recognition module and the result is passed to the robot control platform, which executes the corresponding command. In this embodiment we give the robot the order "Bring the green tea back to me". The robot asks "Where is the green tea?", and we answer "The green tea is at (x, y)", where (x, y) is the spatial coordinate observed by eye. From this exchange the robot learns that the item to fetch is green tea, that its address is (x, y), and that it must be brought back to the starting point, as shown in Fig. 4(a). Following the voice command, the robot walks in an arbitrary direction using the bottom-wheel motion module. When it encounters the first tag it learns its current absolute position, but not its direction of travel, so it continues forward. On encountering a second tag it reads that tag's absolute position and, together with the final destination obtained from the voice analysis of Step 1, plans the rotation angle and travel distance; the robot then walks forward until it reaches the destination tag, as shown in Fig. 4(b).
Step 2: After the robot arrives at the tag position, the binocular image acquisition module obtains the three-dimensional coordinates of the green tea, accurately localizing the robot for the grasping operation of the arm, as shown in Fig. 4(c).
Step 3: The robot arm is modeled mathematically with the improved D-H model and the forward solution is established.
Step 4: After the robot arrives at the destination, the binocular image acquisition module first obtains the three-dimensional coordinates of the green tea; a coordinate transformation gives the coordinates of the target relative to the right manipulator; the inverse solution of the right manipulator then yields each joint angle, and the target object is grasped for the first time along a specific trajectory, as shown in Fig. 4(d).
Step 5: If the grasp of Step 4 fails, binocular vision uses the error between the depth of the color mark on the gripper and the depth of the object color, and a PI regulation algorithm drives the gripper steadily closer to the object; when the error is within a set range, the gripper is commanded to grasp, as shown in Fig. 4(e).
Step 6: The tag first encountered en route serves as the new destination, and the robot advances to it using the same navigation algorithm, as shown in Fig. 4(f).
This embodiment is implemented on the premise of the technical scheme of the invention; detailed implementation and concrete operating procedures are given, but the protection scope of the invention is not limited to the above embodiments.

Claims (6)

1.一种基于多传感器的服务机器人定位与抓取的方法,其特征在于具体操作步骤如下:  1. A method for positioning and grabbing a service robot based on multiple sensors, characterized in that the specific steps are as follows:       (1)利用射频模块的机器人初次定位;射频收发装置包括天线、接收器和无源型射频标签,其中每张标签中都含有环境的坐标值等信息; (1) The initial positioning of the robot using the radio frequency module; the radio frequency transceiver device includes an antenna, a receiver and a passive radio frequency tag, each of which contains information such as the coordinate value of the environment;       (2)利用双目视觉经行机器人的精确定位,底轮运动模块采用双轮差动方式,并在底部安装有两个色标传感器; (2) Using the precise positioning of the binocular vision robot, the bottom wheel motion module adopts a two-wheel differential method, and two color sensors are installed at the bottom;       (3)利用改进的D-H的模型对机器人手臂经行数学建模; (3) Use the improved D-H model to mathematically model the robot arm;       (4)机器人按照给定的路径规划和新的逆解算法,进行初次抓取目标; (4) The robot grabs the target for the first time according to the given path planning and the new inverse solution algorithm;       (5)当初次抓取失败后,可通过双目视觉获得手爪上的红色标签的深度信息,与目标物的深度信息进行对比,通过存在的误差,利用PD算法,最终实现手爪的成功抓取。 (5) After the first grasping failure, the depth information of the red label on the claw can be obtained through binocular vision, compared with the depth information of the target object, and the success of the claw can be finally realized by using the PD algorithm through the existing error crawl. 2.根据权利要求1所述的基于多传感器的服务机器人定位与抓取方法,其特征在于:所述的步骤(1)中射频收发装置,包括射频标签和射频感应收发线圈,其中射频感应收发线圈安装在机器人的底部,标签按照一定的距离放置在地面,构建网格化的坐标系,与双目信息融合,通过不断地读取标签数据块中得坐标信息对行进的路线进行判断和规划,实现初次定位。 2. 
The multi-sensor based service robot positioning and grasping method according to claim 1, characterized in that: the radio frequency transceiver device in the step (1) includes a radio frequency tag and a radio frequency induction transceiver coil, wherein the radio frequency induction transceiver The coil is installed on the bottom of the robot, and the tags are placed on the ground at a certain distance to build a grid coordinate system, which is fused with the binocular information, and the traveling route is judged and planned by continuously reading the coordinate information in the tag data block , to achieve initial positioning. 3.根据权利要求1所述的基于多传感器的服务机器人定位与抓取方法,其特征在于:所述步骤(2)的机器人的精确定位实现方法是:利用双目视觉系统提取目标物的颜色信息进行HSV(Hue-Saturation-Value)阈值分割,得到目标物体的三维坐标;通过机器人当前标签位置和获取目标的三维坐标之间关系的计算,规划出机器人的行进路线,到达要抓取的物体前。 3. The multi-sensor based service robot positioning and grasping method according to claim 1, characterized in that: the precise positioning method of the robot in the step (2) is: using the binocular vision system to extract the color of the target object The information is segmented by HSV (Hue-Saturation-Value) threshold to obtain the three-dimensional coordinates of the target object; through the calculation of the relationship between the robot's current label position and the three-dimensional coordinates of the acquired target, the robot's travel route is planned to reach the object to be grasped forward. 4.根据权利要求1所述的基于多传感器的服务机器人定位与抓取方法,其特征在于:所述步骤(3)的改进的D-H模型的实质在于前一个坐标系原点可经过若干变化—可以平移或旋转,与后一个坐标系原点重合,只要符合这一原理即可;若在D-H参数表基础上增加Y列,可以沿着Y轴方向平移,这样可以减少因为三角函数变换带来的计算错误;尽管多增加一个矩阵相乘,但齐次变化矩阵在相乘时较为简便。 4. 
4. The multi-sensor based service robot positioning and grasping method according to claim 1, characterized in that: the essence of the improved D-H model in step (3) is that the origin of the previous coordinate system may undergo any number of transformations (translations or rotations) before coinciding with the origin of the next coordinate system, as long as this principle is satisfied; by adding a Y column to the D-H parameter table, a translation along the Y axis is also allowed, which reduces calculation errors caused by trigonometric transformations; although this introduces one more matrix multiplication, homogeneous transformation matrices are simple to multiply. 5. The multi-sensor based service robot positioning and grasping method according to claim 1, characterized in that: the inverse kinematics algorithm of the robot arm in step (4) is solved in the following steps: (1) the binocular vision system first obtains the three-dimensional coordinate information of the target object from its color, shape, and texture features; (2) a transformation between the vision coordinate system and the arm coordinate system is then established, converting the depth information obtained by vision into the position the gripper must reach; the origin of the transformed vision coordinate system coincides with the origin of the right-arm coordinate system, and the x, y, and z axes of the vision coordinate system point in exactly the same directions as those of the arm coordinate system; (3) the inverse solution algorithm for the 4+1 degree-of-freedom arm is used to calculate the angle of each joint of the robot arm; (4) finally, path planning is performed for the motion control of each joint of the robot arm, so that the arm does not collide with the table or other obstacles during the initial grasp. 6. The multi-sensor based service robot positioning and grasping method according to claim 1 or 2, characterized in that: in step (5), the robotic arm locates and grasps the target object in two phases, an initial grasp and a retry after failure: the feedforward from machine vision first drives the initial grasp of the gripper, and a feedback strategy then continuously adjusts the position of the gripper so that it gradually approaches the target object until the grasp succeeds.
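The link transform implied by claim 4's improved D-H model can be sketched as follows, assuming the extra Y entry is applied as one additional pure translation after the standard D-H transform (the claim does not specify the exact placement, so this ordering is an assumption); numpy is used for the homogeneous matrices:

```python
import numpy as np

def dh_link(theta, d, a, alpha, y=0.0):
    """Standard D-H link transform followed by an extra translation of
    `y` along the link's local Y axis, i.e. the added Y column of
    claim 4. This costs one extra homogeneous matrix multiplication
    per link, but each factor stays simple to write down and multiply."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    standard = np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])
    shift_y = np.eye(4)
    shift_y[1, 3] = y            # pure translation along the local Y axis
    return standard @ shift_y

# One link with a = 1 and a 0.5 offset along Y: the end of the link sits
# at (1.0, 0.5, 0.0) in the base frame.
T = dh_link(theta=0.0, d=0.0, a=1.0, alpha=0.0, y=0.5)
print(T[:3, 3])
```

Chaining `dh_link` over all joints gives the arm's forward kinematics; inverting that chain for the 4+1 degree-of-freedom arm is the inverse-solution step described in claim 5.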
CN2012100967434A 2012-04-05 2012-04-05 Multi-sensor based positioning and grasping method for service robot Pending CN102848388A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012100967434A CN102848388A (en) 2012-04-05 2012-04-05 Multi-sensor based positioning and grasping method for service robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2012100967434A CN102848388A (en) 2012-04-05 2012-04-05 Multi-sensor based positioning and grasping method for service robot

Publications (1)

Publication Number Publication Date
CN102848388A true CN102848388A (en) 2013-01-02

Family

ID=47395566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012100967434A Pending CN102848388A (en) 2012-04-05 2012-04-05 Multi-sensor based positioning and grasping method for service robot

Country Status (1)

Country Link
CN (1) CN102848388A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20050005151A * 2003-07-04 2005-01-13 Yujin Robot Co., Ltd. Method of home security service using robot and robot thereof
KR20080090150A * 2007-04-04 2008-10-08 Samsung Electronics Co., Ltd. Service system using service robot and service robot and control method of service system using service robot
JP2009045692A (en) * 2007-08-20 2009-03-05 Saitama Univ Communication robot and its operating method
CN101559600A (en) * 2009-05-07 2009-10-21 上海交通大学 Service robot grasp guidance system and method thereof
US20090265133A1 (en) * 2005-08-01 2009-10-22 Moonhong Baek Localization system and method for mobile object using wireless communication
CN101661098A (en) * 2009-09-10 2010-03-03 上海交通大学 Multi-robot automatic locating system for robot restaurant
CN102323817A (en) * 2011-06-07 2012-01-18 上海大学 A service robot control platform system and its method for realizing multi-mode intelligent interaction and intelligent behavior

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LI RUIFENG et al.: "Development of a Dual-Arm Service Robot Based on Binocular Vision", Machinery Design &amp; Manufacture *
JIA DONGYONG et al.: "Grasping Operation of a Humanoid Robot Based on Visual Feedforward and Visual Feedback", Transactions of Beijing Institute of Technology *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529856A (en) * 2013-08-27 2014-01-22 浙江工业大学 5-joint robot end tool position and posture control method
CN103529856B * 2013-08-27 2016-04-13 浙江工业大学 5-joint robot end tool position and posture control method
CN103481288A (en) * 2013-08-27 2014-01-01 浙江工业大学 5-joint robot end-of-arm tool pose controlling method
CN104827470A (en) * 2015-05-25 2015-08-12 山东理工大学 Mobile manipulator control system based on GPS and binocular vision positioning
CN104842362A (en) * 2015-06-18 2015-08-19 厦门理工学院 Method for grabbing material bag by robot and robot grabbing device
CN105014666A (en) * 2015-07-13 2015-11-04 广州霞光技研有限公司 Multi-DOF manipulator independent grabbing inverse solution engineering algorithm
CN106708028A (en) * 2015-08-04 2017-05-24 范红兵 Intelligent prediction and automatic planning system for action trajectory of industrial robot
CN105372622A (en) * 2015-11-09 2016-03-02 深圳市中科鸥鹏智能科技有限公司 Intelligent positioning floor
CN111832702B (en) * 2016-03-03 2025-01-28 谷歌有限责任公司 Deep machine learning method and device for robotic grasping
CN111832702A (en) * 2016-03-03 2020-10-27 谷歌有限责任公司 Deep machine learning method and apparatus for robotic grasping
CN105751220A (en) * 2016-05-13 2016-07-13 齐鲁工业大学 Walking human-shaped robot and fusion method for multiple sensors thereof
CN107618031A (en) * 2016-07-13 2018-01-23 本田技研工业株式会社 The engagement confirmation method performed by robot
CN106372552A (en) * 2016-08-29 2017-02-01 北京理工大学 Human body target identification and positioning method
CN106372552B * 2016-08-29 2019-03-26 北京理工大学 Human body target identification and positioning method
CN106625687A (en) * 2016-10-27 2017-05-10 安徽马钢自动化信息技术有限公司 Kinematics modeling method for articulated robot
CN106945037A (en) * 2017-03-22 2017-07-14 北京建筑大学 A kind of target grasping means and system applied to small scale robot
CN108657534A (en) * 2017-03-28 2018-10-16 晓视自动化科技(上海)有限公司 Automatic packaging equipment based on machine vision
CN107015193A (en) * 2017-04-18 2017-08-04 中国矿业大学(北京) A kind of binocular CCD vision mine movable object localization methods and system
CN107862716A (en) * 2017-11-29 2018-03-30 合肥泰禾光电科技股份有限公司 Mechanical arm localization method and positioning mechanical arm
CN109916352A (en) * 2017-12-13 2019-06-21 北京柏惠维康科技有限公司 A kind of method and apparatus obtaining robot TCP coordinate
CN109916352B (en) * 2017-12-13 2020-09-25 北京柏惠维康科技有限公司 Method and device for acquiring TCP (Transmission control protocol) coordinates of robot
CN108115688A (en) * 2017-12-29 2018-06-05 深圳市越疆科技有限公司 Crawl control method, system and the mechanical arm of a kind of mechanical arm
CN113766997A (en) * 2019-03-21 2021-12-07 斯夸尔迈德公司 Method for guiding a robot arm, guiding system
CN111815683A (en) * 2019-04-12 2020-10-23 北京京东尚科信息技术有限公司 Target positioning method and device, electronic equipment and computer readable medium
CN111815683B (en) * 2019-04-12 2024-05-17 北京京东乾石科技有限公司 Target positioning method and device, electronic equipment and computer readable medium
CN110711701A (en) * 2019-09-16 2020-01-21 华中科技大学 A grab-type flexible sorting method based on RFID spatial positioning technology
CN110666820A (en) * 2019-10-12 2020-01-10 合肥泰禾光电科技股份有限公司 High-performance industrial robot controller
CN111612823A (en) * 2020-05-21 2020-09-01 云南电网有限责任公司昭通供电局 Robot autonomous tracking method based on vision
CN111746313B (en) * 2020-06-02 2022-09-20 上海理工大学 Unmanned charging method and system based on mechanical arm guidance
CN111746313A (en) * 2020-06-02 2020-10-09 上海理工大学 Unmanned charging method and system based on mechanical arm guidance
CN112589809A (en) * 2020-12-03 2021-04-02 武汉理工大学 Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method
CN113352289A (en) * 2021-06-04 2021-09-07 山东建筑大学 Mechanical arm track planning control system of overhead ground wire hanging and dismounting operation vehicle
CN114734466A (en) * 2022-06-14 2022-07-12 中国科学技术大学 A mobile robot chemical experiment operating system and method

Similar Documents

Publication Publication Date Title
CN102848388A (en) Multi-sensor based positioning and grasping method for service robot
CN108838991B (en) An autonomous humanoid dual-arm robot and its tracking operating system for moving targets
CN109108942B (en) Mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS
JP7581320B2 (en) Systems and methods for augmenting visual output from robotic devices
CN106346485B (en) The Non-contact control method of bionic mechanical hand based on the study of human hand movement posture
CN106774309B (en) A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously
CN102902271A (en) Binocular vision-based robot target identifying and gripping system and method
Hebert et al. Combined shape, appearance and silhouette for simultaneous manipulator and object tracking
Stückler et al. Mobile manipulation, tool use, and intuitive interaction for cognitive service robot cosero
Lee The study of mechanical arm and intelligent robot
CN106863307A (en) A kind of view-based access control model and the robot of speech-sound intelligent control
CN114954723B (en) Humanoid robot
Kragic et al. A framework for visual servoing
Silva et al. Navigation and obstacle avoidance: A case study using Pepper robot
Wang et al. A visual servoing system for interactive human-robot object transfer
Tokuda et al. Neural Network based Visual Servoing for Eye-to-Hand Manipulator
TWI788253B (en) Adaptive mobile manipulation apparatus and method
Wang et al. Object Grabbing of Robotic Arm Based on OpenMV Module Positioning
CN112757274B (en) A Dynamic Fusion Behavioral Safety Algorithm and System for Human-Machine Collaborative Operation
Liang et al. Visual reconstruction and localization-based robust robotic 6-DoF grasping in the wild
Mukai et al. Application of Object Grasping Using Dual-Arm Autonomous Mobile Robot—Path Planning by Spline Curve and Object Recognition by YOLO—
Regal et al. Using single demonstrations to define autonomous manipulation contact tasks in unstructured environments via object affordances
Song et al. Object pose estimation for grasping based on robust center point detection
Bodenstedt et al. Learned Partial Automation for Shared Control in Tele-Robotic Manipulation.
Huh et al. Self-supervised Wide Baseline Visual Servoing via 3D Equivariance

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130102