CN102848388A - Multi-sensor based positioning and grasping method for service robot - Google Patents
- Publication number
- CN102848388A CN102848388A CN2012100967434A CN201210096743A CN102848388A CN 102848388 A CN102848388 A CN 102848388A CN 2012100967434 A CN2012100967434 A CN 2012100967434A CN 201210096743 A CN201210096743 A CN 201210096743A CN 102848388 A CN102848388 A CN 102848388A
- Authority
- CN
- China
- Prior art keywords
- robot
- coordinate system
- target object
- positioning
- arm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Manipulator (AREA)
Abstract
The invention discloses a multi-sensor based positioning and grasping method for a service robot. The method comprises the following steps: tags of the RFID transceiver module are placed on the ground according to a certain rule to construct a gridded coordinate system; the robot, combining radio frequency identification with binocular vision, walks freely, localizes itself, and searches for the target in the constructed gridded environment; the spatial coordinates of the object acquired by the binocular vision system are transformed into the arm coordinate system, the arm is mathematically modeled with an improved D-H model, and each joint angle of the humanoid arm is then solved with a new inverse-solution method, realizing following and grasping of the object. The method thus achieves multi-sensor based self-localization, target following, and grasping for the service robot.
Description
Technical field
The invention belongs to the field of robotics, and specifically relates to a multi-sensor based positioning and grasping method for a service robot.
Background technology
Over the past five or six decades, researchers have devoted continuous effort to robot application technology. As early as the 1960s, with industrial development, robots began helping people perform dangerous operations and tasks; however, those robots worked mainly in structured environments and could only operate according to fixed patterns. With advancing technology and the demands of daily human life, robots now face a series of challenges such as unstructured and complex environments. Therefore, for a robot to operate in a home environment and serve people in daily life, it should possess a certain interactive capability: it should intelligently recognize objects according to voice instructions, perform autonomous localization and navigation, avoid obstacles according to its own perception of the environment, and then grasp the designated item and deliver it to the designated person.
Summary of the invention
The object of the invention is to address the deficiencies of the present technology by proposing a multi-sensor based positioning and grasping method for a service robot, enabling the robot to provide more intelligent service to humans in a structured environment. In this method the robot interacts intelligently with the external environment through sensors such as vision, radio frequency, ultrasonic, and photoelectric sensors, and accomplishes intelligent behaviors such as free walking, target search, target following by the gripper, and flexible grasping in the gridded environment constructed from RFID tags.
To achieve the above object, the design concept of the present invention is as follows:
In the multi-sensor based service robot positioning and grasping method of the present invention, the experimental platform comprises a binocular image acquisition module, an RFID transceiver module, a driving-wheel motion module, and a humanoid mechanical arm control module. The tags of the radio frequency identification (RFID) transceiver module are placed on the ground according to a certain rule to construct the gridded environment. Combining RFID with binocular vision, the robot walks freely, localizes itself, and searches for the target within the constructed gridded environment. The spatial coordinates of the object obtained by the binocular vision system are transformed into the arm coordinate system; the arm is modeled with an improved D-H model and the inverse solution of the humanoid arm is obtained, realizing positioning and grasping of the object. The feedforward and feedback strategies of machine vision are used to continuously adjust the position of the gripper, realizing following and accurate grasping of the target object.
The multi-sensor based service robot positioning and grasping method of the present invention comprises the following functions:
(1) Autonomous navigation and localization module: RFID and the binocular camera are combined to realize grasping of the target object together with autonomous navigation and localization.
(2) Humanoid mechanical arm open-loop operation module: the binocular system first obtains the three-dimensional coordinate information of the target object from its color, shape, and texture features; after coordinate transformation, the coordinate value of the target object relative to the right mechanical arm is obtained; the arm, degenerated to 4+1 degrees of freedom, is mathematically modeled with the improved D-H model, and a new inverse-solution algorithm proposed by the authors yields the angle of each arm joint, so that the target object is grasped along a specific trajectory.
(3) Humanoid mechanical arm closed-loop operation module: machine vision calculates the error between the gripper and the object, and a closed-loop strategy continuously adjusts the position of the gripper to reduce this error, realizing following and accurate grasping of the target object.
The binocular image acquisition module means that the system is based on binocular stereo vision. The RFID transceiver devices use passive RFID tags containing the coordinate information of the environment. The driving-wheel motion module adopts two-wheel differential drive, with two color-mark sensors installed at the bottom. The humanoid mechanical arm has 6+1 degrees of freedom; it is degenerated from 6+1 to 4+1 degrees of freedom and mathematically modeled by the improved D-H model, and the inverse kinematics solution of the simplified equations is obtained by a newly proposed inverse-solution algorithm, so that the robotic arm can follow the target object in real time and grasp it flexibly.
The radio frequency identification (RFID) transceiver comprises RFID tags and an RF induction transceiver coil, where the coil is installed at the bottom of the robot; the tags are placed on the ground at fixed distances to construct the gridded environment. The roaming of the robot in the gridded environment is shown in Fig. 5.
The gridded environment constructed from the RFID tags is used to determine the robot's initial position, to judge and plan its course, and to adjust its position in front of the target object. The flow of course judgment and planning is shown in Fig. 6.
Assume the steering-angle calculation of the robot is as shown in Fig. 7 and the path the robot must walk is A-B-C. The angle θ through which the robot must rotate at point B can be solved by the law of cosines:

cos∠ABC = (|AB|² + |BC|² − |AC|²) / (2·|AB|·|BC|), θ = π − ∠ABC (1)
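By way of illustration, a minimal sketch of this steering-angle computation, assuming the tag positions A, B, and C are 2-D grid coordinates; the helper name and the coordinate values are illustrative, not part of the patent:

```python
import math

def turn_angle_at_B(A, B, C):
    """Angle the robot must rotate at B on path A-B-C, via the law of cosines."""
    ab = math.dist(A, B)          # |AB|
    bc = math.dist(B, C)          # |BC|
    ac = math.dist(A, C)          # |AC|
    # Interior angle ABC from the law of cosines, clamped for numerical safety.
    cos_abc = (ab**2 + bc**2 - ac**2) / (2 * ab * bc)
    interior = math.acos(max(-1.0, min(1.0, cos_abc)))
    # The turn is the exterior angle: pi minus the interior angle.
    return math.pi - interior

# Example: tags on a 1 m grid; a 90-degree turn is expected at B.
print(math.degrees(turn_angle_at_B((0, 0), (1, 0), (1, 1))))  # -> 90.0
```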
When the robot is far from the object, its final destination can be obtained through voice interaction with the robot, from which the robot's course can be planned. When the distance to the target object becomes small (about 1 meter), the robot adjusts its own attitude and controls the distance to the target, achieving accurate localization. The algorithm flow chart is shown in Fig. 8.
The humanoid mechanical arm has 6+1 degrees of freedom and is degenerated from 6+1 to 4+1 degrees of freedom; the equivalent simplified diagrams of the 6+1 and 4+1 degree-of-freedom arm models are shown in Figs. 9 and 10.
In the simplified model, solving according to the original D-H model is computationally cumbersome and restricts free orientation. The present invention adopts an improved D-H model, adding a Y column to the original D-H parameter table so that translation along the Y direction is possible. The specific practice is as follows:
(1) Robot arm modeling
Let a denote the length of the common normal between the axes of two adjacent joints (the link length); α denote the twist angle between two adjacent z axes (the link twist); d denote the distance along the z axis between two adjacent common normals (the link offset); and θ denote the rotation angle about the z axis (the joint angle). By substituting parameters selected from the parameter table into the A matrix, the transformation between every two adjacent joints can be written out. The A matrix is the transformation matrix from one joint to the preceding joint, as shown in formulas (2)-(5).
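As an illustration of this modeling step, a minimal sketch of such a per-joint transform, assuming the improvement amounts to augmenting the standard D-H transform Rot(z,θ)·Trans(z,d)·Trans(x,a)·Rot(x,α) with an extra translation b along the y axis; the parameter name b and the factor ordering are assumptions, since formulas (2)-(5) are not reproduced in this text:

```python
import numpy as np

def dh_transform(theta, d, a, alpha, b=0.0):
    """Homogeneous transform between adjacent joints: standard D-H
    parameters (theta, d, a, alpha) plus an extra y-translation b,
    mirroring the Y column added to the improved parameter table."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ ct, -st * ca,  st * sa, a * ct - b * st],
        [ st,  ct * ca, -ct * sa, a * st + b * ct],
        [0.0,       sa,       ca,               d],
        [0.0,      0.0,      0.0,             1.0],
    ])
```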
(2) Forward kinematics solution
Given each joint rotation angle, the forward kinematics solution of the robot arm can be obtained from the pose matrices between joints, as shown in formula (6).
The fourth column of T, which gives the end-effector position, is shown in formula (7). In formula (7), s1, c1, s2, c2, and so on denote sinθ1, cosθ1, sinθ2, cosθ2, and so on.
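A sketch of this forward solution as the chained product of the per-joint transforms; the dh_transform helper from the previous sketch is restated so the block runs on its own, and the D-H parameter rows are placeholders, not the patent's measured link dimensions:

```python
import numpy as np

def dh_transform(theta, d, a, alpha, b=0.0):
    # Same improved D-H transform as sketched above (extra y-offset b).
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ ct, -st*ca,  st*sa, a*ct - b*st],
                     [ st,  ct*ca, -ct*sa, a*st + b*ct],
                     [0.0,     sa,     ca,           d],
                     [0.0,    0.0,    0.0,         1.0]])

def forward_kinematics(joint_angles, dh_rows):
    """T = A1·A2·...·An; the fourth column of T holds the end-effector position."""
    T = np.eye(4)
    for theta, (d, a, alpha, b) in zip(joint_angles, dh_rows):
        T = T @ dh_transform(theta, d, a, alpha, b)
    return T

# Placeholder D-H rows (d, a, alpha, b) for the degenerated 4-DOF arm.
rows = [(0.10, 0.00, np.pi / 2, 0.0),
        (0.00, 0.25, 0.0,       0.0),
        (0.00, 0.20, 0.0,       0.0),
        (0.00, 0.05, 0.0,       0.0)]
T = forward_kinematics([0.0, 0.3, -0.6, 0.3], rows)
print(np.round(T[:3, 3], 4))  # position, cf. the fourth column in formula (7)
```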
(3) Inverse kinematics solution
Let the end-effector attitude matrix of the mechanical arm be as shown in formula (8). In formula (8), P[1], P[2], and P[3] are respectively the direction cosines of the end-effector pitch, roll, and yaw angles in the arm coordinate system, and P[4] is the current position of the end-effector. Solving the inverse kinematics of the mechanical arm means solving for the unknown joint angles θ1, θ2, θ3, and θ4. The inverse solution procedure is:
1. From the flow chart of Fig. 8 it can be seen that the target object lies almost at the center of the robot's binocular view. To facilitate grasping by the robot gripper, the plane of the gripper can be set perpendicular to the object; in this way the opening of the gripper faces the target at its widest, which is most convenient for grasping, as shown in Fig. 10. From this a relation between θ2, the inward rotation angle of joint 2, and θ3, the inward rotation angle of joint 3, is easily obtained; for the gripper to remain perpendicular to the object plane, the corresponding constraint on these angles can be imposed.
2. Suppose the final position of the gripper is the position P[4] given in formula (8). From the analysis in step 1, the spatial position of robot joint 5 is obtained:
4. As shown in Fig. 6, the following is obtained from the geometric method:
5. From the third row of the formula for T[4], formula (13) is obtained.
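The patent only outlines its new inverse algorithm here, and formulas (9)-(13) are not reproduced in this text. As a hedged illustration of the geometric style of solution described above (fix the gripper orientation, back off to the wrist position, then apply the law of cosines), here is a minimal planar two-link sketch; the link lengths and conventions are assumptions:

```python
import math

def planar_two_link_ik(x, y, l1, l2):
    """Shoulder and elbow angles for a two-link arm whose wrist must reach
    (x, y): law of cosines on the triangle formed by the two links and the
    shoulder-to-wrist line, elbow-down branch."""
    r2 = x * x + y * y
    cos_elbow = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

# Wrist target backed off from the gripper position along the approach axis.
print([round(math.degrees(q), 2) for q in planar_two_link_ik(0.3, 0.2, 0.25, 0.20)])
```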
(4) Hand-eye coordinate transformation
To convert the depth information obtained by vision into the position that the mechanical gripper must reach, the transformation between the visual coordinate system and the mechanical arm coordinate system must be established. The robot eye model is shown in Fig. 11. The implementation steps of the hand-eye transformation are as follows:
1. Here x, y, and z are the target coordinate values obtained by the binocular system. Premultiplying by this rotation matrix yields the position of the target object in binocular vision after the robot's head is raised.
2. Translate the origin of the visual coordinate system obtained after the step-1 transformation so that it coincides with the origin of the right mechanical arm coordinate system; the transformation formula is as follows:
3. Make the x, y, and z axis directions of the visual coordinate system fully coincide with those of the mechanical arm coordinate system; the transformation formula is as follows:
Here the transformed result gives the position that the gripper must reach, θ is the angle by which the robot head tilts forward, as shown in Fig. 8, and the remaining offset values can be obtained by measurement.
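A minimal sketch of this three-step hand-eye conversion, assuming the head tilt is a rotation about the camera x axis and the measured camera-to-arm offset is a fixed vector; the axis convention, the offset values, and the function name are assumptions, not the patent's formulas:

```python
import numpy as np

def camera_to_arm(p_cam, head_tilt, cam_to_arm_offset):
    """Convert a binocular target point to the right-arm coordinate system.
    Step 1: undo the forward tilt of the head (rotation about x by theta).
    Step 2: translate the origin onto the arm origin (measured offset).
    Step 3 (axis alignment) is folded into the fixed rotation here."""
    ct, st = np.cos(head_tilt), np.sin(head_tilt)
    R_tilt = np.array([[1.0, 0.0, 0.0],
                       [0.0,  ct, -st],
                       [0.0,  st,  ct]])
    return R_tilt @ np.asarray(p_cam) + np.asarray(cam_to_arm_offset)

# Target at (x, y, z) in the camera frame, head tilted 20 degrees forward,
# arm origin 0.10 m right, 0.30 m down, 0.05 m forward of the camera (assumed).
p_arm = camera_to_arm([0.0, 0.1, 0.6], np.radians(20), [0.10, -0.30, 0.05])
print(np.round(p_arm, 3))
```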
The real-time target following and grasping part of the humanoid mechanical arm in the present invention means that the robot uses the feedforward and feedback strategies of machine vision to continuously adjust the position of the gripper, realizing following and accurate grasping of the target object by the gripper. The specific practice is as follows:
(1) Feedforward control of machine vision
Based on the visual information and arm kinematics, the angle of each joint corresponding to the desired position of the gripper is calculated, and the robot hand is placed near the target object. The concrete implementation flow is shown in Fig. 12.
(2) Feedback control of machine vision
Because of mechanical clearance, a transformation error exists between the vision system and the arm coordinate system, which often causes the first grasp to fail. The present invention compensates for this error using the distance between two different colors extracted by the binocular system: a red label is attached to the gripper and a green label is attached to the target object, and the kinematic error function is defined on the offset between the two labels. The specific implementation flow is shown in Fig. 13.
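A minimal sketch of this feedback compensation, assuming the binocular system returns 3-D positions for the red label (gripper) and the green label (object), and using the PD correction mentioned in the claims; the gains and the marker readings are assumed values:

```python
import numpy as np

def pd_correction(err, prev_err, kp=0.6, kd=0.1):
    """One PD step on the gripper-to-object error vector (gains assumed)."""
    return kp * err + kd * (err - prev_err)

# red / green are 3-D marker positions extracted by the binocular system.
red = np.array([0.30, 0.05, 0.40])    # label on the gripper (assumed reading)
green = np.array([0.33, 0.02, 0.42])  # label on the target object
prev_err = np.zeros(3)
for step in range(20):
    err = green - red                  # kinematic error: offset between labels
    if np.linalg.norm(err) < 0.005:    # within 5 mm: command the grasp
        break
    red = red + pd_correction(err, prev_err)  # move the gripper toward the object
    prev_err = err
print(step, np.round(err, 4))
```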
The humanoid mechanical arm positioning and target-grasping part of the present invention means that after the robot navigates to the designated position, the mechanical arm is controlled to grasp the target object. The key steps are as follows:
(1) The binocular system obtains the three-dimensional coordinate information of the target object from its color, shape, and texture features.
(2) After coordinate transformation, the coordinate value of the target object relative to the right arm is obtained.
(3) The inverse kinematics of the right arm is solved to obtain each joint angle value of the arm, and the target object is grasped along a specific trajectory.
Based on the above points, the implementation steps of the present invention are as follows:
(1) The final destination of the robot is learned through voice; using the RF transceiver, tags containing the environment coordinate values are placed on the ground according to a certain rule to construct the gridded environment, realizing the initial localization of the robot.
(2) Binocular vision is used to extract the depth information of the target, realizing accurate localization of the robot within the constructed gridded environment.
(3) The arm is modeled with the improved D-H model and the forward solution of the humanoid arm is obtained, in preparation for the flexible grasping of the target object in the next step.
(4) The spatial coordinates of the object obtained by the binocular vision system are transformed into the arm coordinate system, and the newly proposed inverse-solution algorithm is used to calculate the angle of each joint.
(5) The feedback control strategy of machine vision is used to continuously adjust the position of the gripper, realizing following of the target object.
(6) The tag address first received by the RF module is taken as the new destination, and the robot advances to this destination.
According to the foregoing inventive concept, the present invention adopts the following technical solution:
A multi-sensor based positioning and grasping method for a service robot, characterized in that: the robot performs initial localization using the radio-frequency module; the robot performs accurate localization using binocular vision, the driving-wheel motion module adopting two-wheel differential drive with two color-mark sensors installed at the bottom; the robot arm is mathematically modeled using the improved D-H model; the robot grasps the target for the first time according to the given path planning and the new inverse-solution algorithm; and when the first grasp fails, the depth information of the red label on the gripper is obtained by binocular vision and compared with the depth information of the object, and based on the resulting error a PD algorithm is used to finally achieve a successful grasp by the gripper.
Compared with the prior art, the present invention has the following obvious substantive features and notable improvements: machine vision and RFID are fused and applied to a service robot, giving full play to the advantages of both kinds of sensors, and realizing multiple intelligent behaviors in the gridded environment, such as free walking, target search, self-localization, target following by the gripper, and flexible grasping, which greatly improves grasping accuracy.
Description of drawings
Fig. 1 is the flow chart of the multi-sensor based service robot positioning and grasping method;
Fig. 2 is the system architecture diagram of the present invention;
Fig. 3 is an external view of the robot;
Fig. 4 shows the experimental results of the embodiment of the invention;
Fig. 5 shows the constructed gridded environment;
Fig. 6 is the initial positioning flow chart;
Fig. 7 is the steering angle diagram of the robot;
Fig. 8 is the accurate positioning flow chart of the robot;
Fig. 9 is the 6+1 degree-of-freedom robot arm model;
Fig. 10 is the 4+1 degree-of-freedom robot arm model;
Fig. 12 is the hand-eye coordinate transformation diagram;
Fig. 13 is the feedforward principle diagram of robot vision;
Fig. 14 is the feedback principle diagram of robot vision.
Specific embodiments
The preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings:
Embodiment 1:
Referring to Fig. 1, this multi-sensor based service robot positioning and grasping method is characterized by the following concrete operation steps:
(1) The robot performs initial localization using the radio-frequency module. The RF transceiver comprises an antenna, a receiver, and passive RFID tags, where each tag contains information such as the coordinate values of the environment.
(2) The robot performs accurate localization using binocular vision; the driving-wheel motion module adopts two-wheel differential drive, with two color-mark sensors installed at the bottom.
(3) The robot arm is mathematically modeled using the improved D-H model.
(4) The robot grasps the target for the first time according to the given path planning and the new inverse-solution algorithm.
(5) When the first grasp fails, the depth information of the red label on the gripper is obtained by binocular vision and compared with the depth information of the object; based on the resulting error, a PD algorithm is used to finally achieve a successful grasp by the gripper.
Embodiment 2:
This embodiment is basically identical to Embodiment 1, with the following special features. The RF transceiver in step (1) comprises RFID tags and an RF induction transceiver coil, where the coil is installed at the bottom of the robot; the tags are placed on the ground at fixed distances to construct the gridded coordinate system, and by fusing this with the binocular information and continuously reading the coordinate information in the tag data blocks, the travel route is judged and planned, realizing the initial localization. The accurate localization of the robot in step (2) is implemented as follows: the color information of the object extracted by the binocular vision system undergoes HSV (Hue-Saturation-Value) threshold segmentation to obtain the three-dimensional coordinates of the target object; from the robot's current tag position and its relation to the target's three-dimensional coordinates, the robot's route is planned so that it arrives in front of the object to be grasped (see the sketch after this paragraph). The essence of the improved D-H model in step (3) is that the previous coordinate origin can be made to coincide with the next coordinate origin through some transformations, whether translation or rotation, as long as this principle is met; adding a Y column to the D-H parameter table allows translation along the Y direction, which reduces the computational errors introduced by trigonometric conversions; although one more matrix multiplication is added, multiplying homogeneous transformation matrices is comparatively easy. Step (4) uses the inverse kinematics algorithm of the robotic arm. Step (5), locating and grasping the target object with the mechanical arm, comprises a first grasp and a re-grasp after failure: the feedforward of machine vision is first used to realize the first grasp, and the feedback strategy is then used to continuously adjust the position of the gripper so that it gradually approaches the target object until the grasp succeeds.
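A minimal sketch of the HSV threshold segmentation step, using OpenCV; the hue bounds for the green label and the helper name are assumptions, not the patent's code:

```python
import cv2
import numpy as np

def find_label_centroid(bgr_image, lo=(40, 60, 60), hi=(80, 255, 255)):
    """Threshold a BGR frame in HSV space (green label assumed) and
    return the pixel centroid of the matching region."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None  # label not visible in this frame
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# The centroid is found in both the left and right images; the disparity of
# the two centroids then yields the target's three-dimensional coordinates.
```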
Embodiment 3:
As shown in Fig. 2, the positioning and grasping system of the service robot in this embodiment consists of a binocular image acquisition module, a speech recognition module, an RFID transceiver module, a driving-wheel motion module, and two humanoid mechanical arm control modules.
As shown in Fig. 3, the experimental platform robot of this example has a binocular vision camera, 3 front ultrasonic sensors, 2 side ultrasonic sensors, 7 chassis obstacle-avoidance sensors, 2 loudspeakers, 2 mechanical arms, and 1 touch screen; the user can control the robot through the buttons of the human-machine interface. The user can also plug in an external microphone and converse directly with the robot, and the conversation content can be designed by the user. In addition, functions such as robot motion and information and entertainment selection can be performed through a remote controller.
As shown in Fig. 4, this embodiment simulates a home environment in which the robot intelligently serves humans. This embodiment mainly comprises the following steps:
In the first step, the commander inputs speech through a microphone; the speech is recognized by the speech recognition module, the recognition result is passed to the robot control platform, and the robot executes the corresponding command. In this embodiment we give the robot the following order: "Bring the green tea back to me." The robot asks: "Where is the green tea?" We answer: "The green tea is at (x, y)," where (x, y) is the spatial coordinate information obtained by our own observation. From this exchange the robot learns that the item to fetch is green tea, that its address is (x, y), and that it must be brought back to the starting point, as shown in (a) of Fig. 4. Following the voice instruction of the first step, the robot walks in an arbitrary direction using the driving-wheel motion module. When it encounters the first tag while walking, it learns its current absolute position but not yet its heading, so it continues walking forward. When it encounters a second tag, it reads the absolute position from that tag; combined with the final destination obtained from the speech analysis of the first step, the rotation angle and forward distance of the robot can be planned, and the robot walks forward until it reaches the destination tag, as shown in (b) of Fig. 4.
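A sketch of the heading-recovery logic of this step, assuming the tags report 2-D grid coordinates; the helper name and the example coordinates are illustrative:

```python
import math

def plan_after_second_tag(tag1, tag2, goal):
    """Two consecutive tags give the current heading; the goal tag then
    gives the rotation angle and forward distance to travel."""
    heading = math.atan2(tag2[1] - tag1[1], tag2[0] - tag1[0])
    to_goal = math.atan2(goal[1] - tag2[1], goal[0] - tag2[0])
    turn = math.atan2(math.sin(to_goal - heading),
                      math.cos(to_goal - heading))  # wrap to (-pi, pi]
    dist = math.dist(tag2, goal)
    return turn, dist

turn, dist = plan_after_second_tag((0, 0), (1, 0), (3, 2))
print(round(math.degrees(turn), 1), round(dist, 3))  # -> 45.0 2.828
```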
In the second step, after the robot arrives at the tag position, the binocular vision image acquisition module is used to obtain the three-dimensional coordinate information of the green tea, and accurate localization of the robot is performed so that the robot arm can carry out the grasping operation, as shown in (c) of Fig. 4.
In the third step, the improved D-H model is used to mathematically model the robot arm, and the forward kinematics solution is established.
In the fourth step, after the robot arrives at the destination, the binocular vision image acquisition module first obtains the three-dimensional coordinate information of the green tea; after coordinate transformation, the coordinate value of the target object relative to the right mechanical arm is obtained; the inverse kinematics of the right mechanical arm is then solved to obtain each joint angle of the arm, and the target object is grasped for the first time along a specific trajectory, as shown in (d) of Fig. 4.
In the fifth step, if the grasp in step four fails, binocular vision uses the error between the depth information of the color mark on the gripper and that of the object color, and a PI regulation algorithm controls the gripper to continuously approach the object; when the error falls within a certain range, the gripper is commanded to grasp, as shown in (e) of Fig. 4.
In the sixth step, the tag that the robot first encountered in the first step is taken as the new destination, and the robot advances to this destination according to the navigation algorithm described above, as shown in (f) of Fig. 4.
This embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation modes and concrete operation procedures have been given, but the protection scope of the present invention is not limited to the above embodiments.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012100967434A CN102848388A (en) | 2012-04-05 | 2012-04-05 | Multi-sensor based positioning and grasping method for service robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2012100967434A CN102848388A (en) | 2012-04-05 | 2012-04-05 | Multi-sensor based positioning and grasping method for service robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102848388A true CN102848388A (en) | 2013-01-02 |
Family
ID=47395566
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2012100967434A Pending CN102848388A (en) | 2012-04-05 | 2012-04-05 | Multi-sensor based positioning and grasping method for service robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102848388A (en) |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103481288A (en) * | 2013-08-27 | 2014-01-01 | 浙江工业大学 | 5-joint robot end-of-arm tool pose controlling method |
CN103529856A (en) * | 2013-08-27 | 2014-01-22 | 浙江工业大学 | 5-joint robot end tool position and posture control method |
CN104827470A (en) * | 2015-05-25 | 2015-08-12 | 山东理工大学 | Mobile manipulator control system based on GPS and binocular vision positioning |
CN104842362A (en) * | 2015-06-18 | 2015-08-19 | 厦门理工学院 | Method for grabbing material bag by robot and robot grabbing device |
CN105014666A (en) * | 2015-07-13 | 2015-11-04 | 广州霞光技研有限公司 | Multi-DOF manipulator independent grabbing inverse solution engineering algorithm |
CN105372622A (en) * | 2015-11-09 | 2016-03-02 | 深圳市中科鸥鹏智能科技有限公司 | Intelligent positioning floor |
CN105751220A (en) * | 2016-05-13 | 2016-07-13 | 齐鲁工业大学 | Walking human-shaped robot and fusion method for multiple sensors thereof |
CN106372552A (en) * | 2016-08-29 | 2017-02-01 | 北京理工大学 | Human body target identification and positioning method |
CN106625687A (en) * | 2016-10-27 | 2017-05-10 | 安徽马钢自动化信息技术有限公司 | Kinematics modeling method for articulated robot |
CN106708028A (en) * | 2015-08-04 | 2017-05-24 | 范红兵 | Intelligent prediction and automatic planning system for action trajectory of industrial robot |
CN106945037A (en) * | 2017-03-22 | 2017-07-14 | 北京建筑大学 | A kind of target grasping means and system applied to small scale robot |
CN107015193A (en) * | 2017-04-18 | 2017-08-04 | 中国矿业大学(北京) | A kind of binocular CCD vision mine movable object localization methods and system |
CN107618031A (en) * | 2016-07-13 | 2018-01-23 | 本田技研工业株式会社 | The engagement confirmation method performed by robot |
CN107862716A (en) * | 2017-11-29 | 2018-03-30 | 合肥泰禾光电科技股份有限公司 | Mechanical arm localization method and positioning mechanical arm |
CN108115688A (en) * | 2017-12-29 | 2018-06-05 | 深圳市越疆科技有限公司 | Crawl control method, system and the mechanical arm of a kind of mechanical arm |
CN108657534A (en) * | 2017-03-28 | 2018-10-16 | 晓视自动化科技(上海)有限公司 | Automatic packaging equipment based on machine vision |
CN109916352A (en) * | 2017-12-13 | 2019-06-21 | 北京柏惠维康科技有限公司 | A kind of method and apparatus obtaining robot TCP coordinate |
CN110666820A (en) * | 2019-10-12 | 2020-01-10 | 合肥泰禾光电科技股份有限公司 | High-performance industrial robot controller |
CN110711701A (en) * | 2019-09-16 | 2020-01-21 | 华中科技大学 | A grab-type flexible sorting method based on RFID spatial positioning technology |
CN111612823A (en) * | 2020-05-21 | 2020-09-01 | 云南电网有限责任公司昭通供电局 | Robot autonomous tracking method based on vision |
CN111746313A (en) * | 2020-06-02 | 2020-10-09 | 上海理工大学 | Unmanned charging method and system based on mechanical arm guidance |
CN111815683A (en) * | 2019-04-12 | 2020-10-23 | 北京京东尚科信息技术有限公司 | Target positioning method and device, electronic equipment and computer readable medium |
CN111832702A (en) * | 2016-03-03 | 2020-10-27 | 谷歌有限责任公司 | Deep machine learning method and apparatus for robotic grasping |
CN112589809A (en) * | 2020-12-03 | 2021-04-02 | 武汉理工大学 | Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method |
CN113352289A (en) * | 2021-06-04 | 2021-09-07 | 山东建筑大学 | Mechanical arm track planning control system of overhead ground wire hanging and dismounting operation vehicle |
CN113766997A (en) * | 2019-03-21 | 2021-12-07 | 斯夸尔迈德公司 | Method for guiding a robot arm, guiding system |
CN114734466A (en) * | 2022-06-14 | 2022-07-12 | 中国科学技术大学 | A mobile robot chemical experiment operating system and method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20050005151A (en) * | 2003-07-04 | 2005-01-13 | 주식회사유진로보틱스 | Method of home security service using robot and robot thereof |
KR20080090150A (en) * | 2007-04-04 | 2008-10-08 | 삼성전자주식회사 | Service system using service robot and service robot and control method of service system using service robot |
JP2009045692A (en) * | 2007-08-20 | 2009-03-05 | Saitama Univ | Communication robot and its operating method |
CN101559600A (en) * | 2009-05-07 | 2009-10-21 | 上海交通大学 | Service robot grasp guidance system and method thereof |
US20090265133A1 (en) * | 2005-08-01 | 2009-10-22 | Moonhong Baek | Localization system and method for mobile object using wireless communication |
CN101661098A (en) * | 2009-09-10 | 2010-03-03 | 上海交通大学 | Multi-robot automatic locating system for robot restaurant |
CN102323817A (en) * | 2011-06-07 | 2012-01-18 | 上海大学 | A service robot control platform system and its method for realizing multi-mode intelligent interaction and intelligent behavior |
2012
- 2012-04-05 CN CN2012100967434A patent/CN102848388A/en active Pending
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20050005151A (en) * | 2003-07-04 | 2005-01-13 | 주식회사유진로보틱스 | Method of home security service using robot and robot thereof |
US20090265133A1 (en) * | 2005-08-01 | 2009-10-22 | Moonhong Baek | Localization system and method for mobile object using wireless communication |
KR20080090150A (en) * | 2007-04-04 | 2008-10-08 | 삼성전자주식회사 | Service system using service robot and service robot and control method of service system using service robot |
JP2009045692A (en) * | 2007-08-20 | 2009-03-05 | Saitama Univ | Communication robot and its operating method |
CN101559600A (en) * | 2009-05-07 | 2009-10-21 | 上海交通大学 | Service robot grasp guidance system and method thereof |
CN101661098A (en) * | 2009-09-10 | 2010-03-03 | 上海交通大学 | Multi-robot automatic locating system for robot restaurant |
CN102323817A (en) * | 2011-06-07 | 2012-01-18 | 上海大学 | A service robot control platform system and its method for realizing multi-mode intelligent interaction and intelligent behavior |
Non-Patent Citations (2)
Title |
---|
LI Ruifeng et al.: "Development of a Dual-Arm Working Service Robot Based on Binocular Vision", Machinery Design & Manufacture *
JIA Dongyong et al.: "Grasping Operation of a Humanoid Robot Based on Visual Feedforward and Visual Feedback", Transactions of Beijing Institute of Technology *
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103529856A (en) * | 2013-08-27 | 2014-01-22 | 浙江工业大学 | 5-joint robot end tool position and posture control method |
CN103529856B (en) * | 2013-08-27 | 2016-04-13 | 浙江工业大学 | 5 rotary joint robot end instrument posture control methods |
CN103481288A (en) * | 2013-08-27 | 2014-01-01 | 浙江工业大学 | 5-joint robot end-of-arm tool pose controlling method |
CN104827470A (en) * | 2015-05-25 | 2015-08-12 | 山东理工大学 | Mobile manipulator control system based on GPS and binocular vision positioning |
CN104842362A (en) * | 2015-06-18 | 2015-08-19 | 厦门理工学院 | Method for grabbing material bag by robot and robot grabbing device |
CN105014666A (en) * | 2015-07-13 | 2015-11-04 | 广州霞光技研有限公司 | Multi-DOF manipulator independent grabbing inverse solution engineering algorithm |
CN106708028A (en) * | 2015-08-04 | 2017-05-24 | 范红兵 | Intelligent prediction and automatic planning system for action trajectory of industrial robot |
CN105372622A (en) * | 2015-11-09 | 2016-03-02 | 深圳市中科鸥鹏智能科技有限公司 | Intelligent positioning floor |
CN111832702B (en) * | 2016-03-03 | 2025-01-28 | 谷歌有限责任公司 | Deep machine learning method and device for robotic grasping |
CN111832702A (en) * | 2016-03-03 | 2020-10-27 | 谷歌有限责任公司 | Deep machine learning method and apparatus for robotic grasping |
CN105751220A (en) * | 2016-05-13 | 2016-07-13 | 齐鲁工业大学 | Walking human-shaped robot and fusion method for multiple sensors thereof |
CN107618031A (en) * | 2016-07-13 | 2018-01-23 | 本田技研工业株式会社 | The engagement confirmation method performed by robot |
CN106372552A (en) * | 2016-08-29 | 2017-02-01 | 北京理工大学 | Human body target identification and positioning method |
CN106372552B (en) * | 2016-08-29 | 2019-03-26 | 北京理工大学 | Human body target recognition positioning method |
CN106625687A (en) * | 2016-10-27 | 2017-05-10 | 安徽马钢自动化信息技术有限公司 | Kinematics modeling method for articulated robot |
CN106945037A (en) * | 2017-03-22 | 2017-07-14 | 北京建筑大学 | A kind of target grasping means and system applied to small scale robot |
CN108657534A (en) * | 2017-03-28 | 2018-10-16 | 晓视自动化科技(上海)有限公司 | Automatic packaging equipment based on machine vision |
CN107015193A (en) * | 2017-04-18 | 2017-08-04 | 中国矿业大学(北京) | A kind of binocular CCD vision mine movable object localization methods and system |
CN107862716A (en) * | 2017-11-29 | 2018-03-30 | 合肥泰禾光电科技股份有限公司 | Mechanical arm localization method and positioning mechanical arm |
CN109916352A (en) * | 2017-12-13 | 2019-06-21 | 北京柏惠维康科技有限公司 | A kind of method and apparatus obtaining robot TCP coordinate |
CN109916352B (en) * | 2017-12-13 | 2020-09-25 | 北京柏惠维康科技有限公司 | Method and device for acquiring TCP (Transmission control protocol) coordinates of robot |
CN108115688A (en) * | 2017-12-29 | 2018-06-05 | 深圳市越疆科技有限公司 | Crawl control method, system and the mechanical arm of a kind of mechanical arm |
CN113766997A (en) * | 2019-03-21 | 2021-12-07 | 斯夸尔迈德公司 | Method for guiding a robot arm, guiding system |
CN111815683A (en) * | 2019-04-12 | 2020-10-23 | 北京京东尚科信息技术有限公司 | Target positioning method and device, electronic equipment and computer readable medium |
CN111815683B (en) * | 2019-04-12 | 2024-05-17 | 北京京东乾石科技有限公司 | Target positioning method and device, electronic equipment and computer readable medium |
CN110711701A (en) * | 2019-09-16 | 2020-01-21 | 华中科技大学 | A grab-type flexible sorting method based on RFID spatial positioning technology |
CN110666820A (en) * | 2019-10-12 | 2020-01-10 | 合肥泰禾光电科技股份有限公司 | High-performance industrial robot controller |
CN111612823A (en) * | 2020-05-21 | 2020-09-01 | 云南电网有限责任公司昭通供电局 | Robot autonomous tracking method based on vision |
CN111746313B (en) * | 2020-06-02 | 2022-09-20 | 上海理工大学 | Unmanned charging method and system based on mechanical arm guidance |
CN111746313A (en) * | 2020-06-02 | 2020-10-09 | 上海理工大学 | Unmanned charging method and system based on mechanical arm guidance |
CN112589809A (en) * | 2020-12-03 | 2021-04-02 | 武汉理工大学 | Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method |
CN113352289A (en) * | 2021-06-04 | 2021-09-07 | 山东建筑大学 | Mechanical arm track planning control system of overhead ground wire hanging and dismounting operation vehicle |
CN114734466A (en) * | 2022-06-14 | 2022-07-12 | 中国科学技术大学 | A mobile robot chemical experiment operating system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102848388A (en) | Multi-sensor based positioning and grasping method for service robot | |
CN108838991B (en) | An autonomous humanoid dual-arm robot and its tracking operating system for moving targets | |
CN109108942B (en) | Mechanical arm motion control method and system based on visual real-time teaching and adaptive DMPS | |
JP7581320B2 (en) | Systems and methods for augmenting visual output from robotic devices | |
CN106346485B (en) | The Non-contact control method of bionic mechanical hand based on the study of human hand movement posture | |
CN106774309B (en) | A kind of mobile robot visual servo and adaptive depth discrimination method simultaneously | |
CN102902271A (en) | Binocular vision-based robot target identifying and gripping system and method | |
Hebert et al. | Combined shape, appearance and silhouette for simultaneous manipulator and object tracking | |
Stückler et al. | Mobile manipulation, tool use, and intuitive interaction for cognitive service robot cosero | |
Lee | The study of mechanical arm and intelligent robot | |
CN106863307A (en) | A kind of view-based access control model and the robot of speech-sound intelligent control | |
CN114954723B (en) | Humanoid robot | |
Kragic et al. | A framework for visual servoing | |
Silva et al. | Navigation and obstacle avoidance: A case study using Pepper robot | |
Wang et al. | A visual servoing system for interactive human-robot object transfer | |
Tokuda et al. | Neural Network based Visual Servoing for Eye-to-Hand Manipulator | |
TWI788253B (en) | Adaptive mobile manipulation apparatus and method | |
Wang et al. | Object Grabbing of Robotic Arm Based on OpenMV Module Positioning | |
CN112757274B (en) | A Dynamic Fusion Behavioral Safety Algorithm and System for Human-Machine Collaborative Operation | |
Liang et al. | Visual reconstruction and localization-based robust robotic 6-DoF grasping in the wild | |
Mukai et al. | Application of Object Grasping Using Dual-Arm Autonomous Mobile Robot—Path Planning by Spline Curve and Object Recognition by YOLO— | |
Regal et al. | Using single demonstrations to define autonomous manipulation contact tasks in unstructured environments via object affordances | |
Song et al. | Object pose estimation for grasping based on robust center point detection | |
Bodenstedt et al. | Learned Partial Automation for Shared Control in Tele-Robotic Manipulation. | |
Huh et al. | Self-supervised Wide Baseline Visual Servoing via 3D Equivariance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20130102 |