
CN112589809A - Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method - Google Patents


Info

Publication number: CN112589809A
Application number: CN202011405666.7A
Authority: CN (China)
Prior art keywords: teacup, tea, control system, potential field, main control
Prior art date: 2020-12-03
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 李向舜, 张祺彬, 王金艳
Current Assignee: Wuhan University of Technology (WUT)
Original Assignee: Wuhan University of Technology (WUT)
Priority date: 2020-12-03
Filing date: 2020-12-03
Publication date: 2021-04-02
Application filed by Wuhan University of Technology (WUT)


Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/008 Manipulators for service tasks
    • B25J13/00 Controls for manipulators
    • B25J13/003 Controls for manipulators by means of an audio-responsive input
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 Sensing devices
    • B25J19/021 Optical sensing devices
    • B25J19/022 Optical sensing devices using lasers
    • B25J19/023 Optical sensing devices including video camera means
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1674 Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676 Avoiding collision or forbidden zones
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method. The robot comprises a main control system; a human-computer interaction system used for acquiring a tea pouring instruction; an identification and positioning system comprising a binocular camera and a laser ranging module; and a mechanical arm execution system comprising a steering engine and a mechanical arm. When the human-computer interaction system receives a tea pouring instruction, the main control system controls the binocular camera to search for and identify the teacup and obtains the position and distance information of the teacup relative to the robot. According to this information, the main control system inversely solves the rotation angle of the steering engine by combining the artificial potential field obstacle avoidance method with the D-H coordinate system of the robot, controls the steering engine to rotate by means of a PID algorithm so that the mechanical arm grabs the teacup, and then carries out the tea-making operation. According to the invention, the three-dimensional space coordinates of the teacup are obtained through the binocular camera and laser ranging, and the artificial potential field method is combined with the mechanical arm movement to complete the tea pouring action, which improves working efficiency and is convenient for the user.

Description

Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method
Technical Field
The invention belongs to the technical field of robot control, and particularly relates to a tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method.
Background
At present, people's quality of life is gradually improving and the demand for services in daily life keeps growing. As the cost of manual labor rises, a service robot that is relatively inexpensive, durable and has a low error rate is needed to meet these increasing demands.
Machine vision is a rapidly developing direction in the field of artificial intelligence. With the continuous progress of machine vision technology, it drives progress in industries such as industrial automation, intelligent security and artificial intelligence, and brings more development potential and opportunities for applications in many fields; binocular vision is an important branch of machine vision. Applying vision to a mechanical arm combines robot control technology with machine vision technology, can better meet people's needs, and can be integrated into daily life.
At present, few tea serving and pouring robots are available on the market, and most service robots on the market are wheeled robots that can only travel and serve on level ground. They have the significant disadvantages of being oversized and expensive to manufacture; although their functions are complete, they are not suitable for household use.
Disclosure of Invention
The invention aims to provide a tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method, and to address the miniaturization, household use, versatility and automation of tea pouring service robots.
The invention provides a tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method, which comprises a human-computer interaction system, a main control system, an identification and positioning system and a mechanical arm execution system; the human-computer interaction system, the identification and positioning system and the mechanical arm execution system are all connected with the main control system;
the human-computer interaction system is used for acquiring a tea pouring instruction; the identification and positioning system comprises a binocular camera and a laser ranging module; the mechanical arm execution system comprises a steering engine and a mechanical arm;
when the human-computer interaction system receives a tea pouring instruction, the instruction is sent to the main control system; the main control system controls the binocular camera to search for and identify the teacup, and the position information of the teacup relative to the robot is fed back to the main control system; the main control system then controls the laser ranging module to face the teacup and obtains the distance information of the teacup relative to the robot; according to the position and distance information of the teacup, the main control system inversely solves the rotation angle of the steering engine by combining the artificial potential field obstacle avoidance method with the D-H coordinate system of the robot, controls the steering engine to rotate by means of a PID algorithm so that the mechanical arm grabs the teacup, and carries out the tea-making operation after the teacup is grabbed.
Further, when the binocular camera identifies the teacup, the color of the teacup is identified first, and the image is binarized according to the RGB value of the characteristic color block of the teacup so as to highlight the teacup.
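By way of illustration only (this sketch is not part of the disclosure), such a color-block binarization could be written in Python with OpenCV as follows; the RGB bounds are placeholders that would have to be tuned to the actual teacup.

```python
import cv2
import numpy as np

# Hypothetical RGB range for the teacup's characteristic color block (tune per cup).
LOWER_RGB = np.array([150, 60, 40])    # lower bound (R, G, B)
UPPER_RGB = np.array([255, 140, 120])  # upper bound (R, G, B)

def binarize_teacup(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask in which pixels matching the teacup color are white."""
    frame_rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    mask = cv2.inRange(frame_rgb, LOWER_RGB, UPPER_RGB)  # 255 inside range, 0 outside
    # Optional clean-up to suppress isolated noise pixels.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask
```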
Further, when the binocular camera identifies the teacup, the two cameras respectively identify the image coordinates of the teacup in their own pictures, a similar-triangle operation is performed on the parallax of the teacup between the two images, and the position information of the teacup is calculated.
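For reference, the similar-triangle relation behind binocular ranging reduces to Z = f·B/d, with f the focal length in pixels, B the baseline between the two cameras and d the horizontal disparity of the teacup. A minimal sketch, with placeholder calibration values rather than the robot's real parameters, could be:

```python
def teacup_position(u_left: float, u_right: float, v: float,
                    focal_px: float = 700.0,   # placeholder focal length in pixels
                    baseline_m: float = 0.06,  # placeholder camera baseline in metres
                    cx: float = 320.0, cy: float = 240.0) -> tuple:
    """Estimate (X, Y, Z) of the teacup from the pixel columns of its centre
    in the left/right images, using the similar-triangle (pinhole) model."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("invalid disparity; the target must be in front of both cameras")
    Z = focal_px * baseline_m / disparity  # depth from similar triangles
    X = (u_left - cx) * Z / focal_px       # lateral offset
    Y = (v - cy) * Z / focal_px            # vertical offset
    return X, Y, Z
```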
Further, the main control system controls the rotation of the laser ranging module by means of a PID algorithm so that the laser ranging module directly faces the teacup.
Further, when the positive directions of the binocular camera and the laser ranging module coincide, the main control system controls the binocular camera to face the teacup, so that the laser ranging module also faces the teacup.
Furthermore, the human-computer interaction system comprises a voice module which receives and decodes the user's voice command and sends the decoded signal to the main control system.
Furthermore, the voice module is provided with a first-level password, and the tea pouring instruction is valid within a preset time after the first-level password has been responded to.
Furthermore, the human-computer interaction system prompts the user by voice or light after the robot completes the tea pouring action.
Further, the robot also comprises a power supply system for supplying power to the robot.
The invention has the beneficial effects that: the tea pouring robot based on machine binocular vision and the artificial potential field obstacle avoidance method combines machine binocular vision with the artificial potential field obstacle avoidance method for the mechanical arm. The user's instruction is obtained through the human-computer interaction system, the three-dimensional space coordinates of the teacup are obtained through the binocular camera and laser ranging, and the artificial potential field method is combined with the mechanical arm motion; together with the inverse-solution algorithm of the arm motion, the value of each axis of the mechanical arm can be obtained, so that the teacup is grabbed accurately and the tea pouring action is completed, the working efficiency is improved, and the robot is convenient for the user to use.
Furthermore, color feature recognition and threshold binarization are adopted in the image processing: the image is binarized according to the RGB value of the teacup so as to highlight the teacup, which helps to reduce the influence of the external environment, gives high accuracy, and facilitates the subsequent processing of the teacup.
Furthermore, the robot is operated by voice control, which is convenient for the user. The voice module is provided with a first-level password, and the robot executes the tea pouring action only when the tea pouring command is input within a specific time after the password has been responded to, which effectively avoids interference. The robot prompts the user by voice or light after completing the tea pouring action, which is convenient for the user.
Drawings
Fig. 1 is a structural block diagram of the tea pouring robot based on machine binocular vision and the artificial potential field obstacle avoidance method.
Fig. 2 is a control flow chart of the tea pouring robot based on machine binocular vision and the artificial potential field obstacle avoidance method.
Detailed Description
The invention will be further described with reference to the accompanying drawings in which:
the tea pouring robot based on the binocular vision of the machine and the artificial potential field obstacle avoidance method comprises a man-machine interaction system, a main control system, an identification and positioning system and a mechanical arm execution system, as shown in figure 1; the human-computer interaction system, the identification and positioning system and the mechanical arm execution system are all connected with the master control system; further, the robot also comprises a power supply system for supplying power to the robot. The human-computer interaction system is used for acquiring a tea pouring instruction; the identification and positioning system comprises a binocular camera and a laser ranging module; the mechanical arm execution system comprises a steering engine and a mechanical arm;
When the human-computer interaction system receives a tea pouring instruction, the instruction is sent to the main control system. The main control system controls the binocular camera to search for and identify the teacup, and the position information of the teacup relative to the robot is fed back to the main control system. The main control system then controls the laser ranging module to face the teacup and obtains the distance information of the teacup relative to the robot. According to the position and distance information of the teacup, the main control system inversely solves the rotation angle of the steering engine by combining the artificial potential field obstacle avoidance method with the D-H coordinate system of the robot, controls the steering engine to rotate by means of a PID algorithm so that the mechanical arm grabs the teacup, and carries out the tea-making operation after the teacup is grabbed.
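As background for the D-H coordinate system mentioned above, the pose of the gripper is obtained by chaining one homogeneous transform per link, and the joint angles are then solved inversely from the target pose. The sketch below shows only the forward model, with placeholder link parameters rather than the actual D-H table of the six-axis arm.

```python
import numpy as np

def dh_transform(theta: float, d: float, a: float, alpha: float) -> np.ndarray:
    """Standard Denavit-Hartenberg homogeneous transform for a single link."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Placeholder D-H table for a six-axis arm: (theta_offset, d, a, alpha) per joint.
DH_TABLE = [(0.0, 0.10, 0.0, np.pi / 2)] * 6

def forward_kinematics(joint_angles) -> np.ndarray:
    """Chain the six link transforms to get the gripper pose in the base frame."""
    T = np.eye(4)
    for q, (offset, d, a, alpha) in zip(joint_angles, DH_TABLE):
        T = T @ dh_transform(q + offset, d, a, alpha)
    return T
```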
Furthermore, when the binocular camera identifies the teacup, the color of the teacup is identified first, and the image is binarized according to the characteristic color block of the object, i.e. the RGB value of the teacup, so that the teacup is highlighted and external interference is reduced. The two cameras then respectively identify the image coordinates of the teacup in their own pictures, a similar-triangle operation is performed on the parallax of the teacup between the two images, and the position information of the teacup is calculated.
Further, the main control system controls the rotation of the laser ranging module by means of a PID algorithm so that the laser ranging module directly faces the teacup. When the positive directions of the binocular camera and the laser ranging module coincide, the main control system controls the binocular camera to face the teacup, so that the laser ranging module also faces the teacup and the distance information can be measured.
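A generic discrete PID step of the kind referred to here might look like the following sketch; the gains are illustrative and are not the values used on the robot.

```python
class PID:
    """Minimal discrete PID controller, used here to drive the pixel error of the
    teacup centre towards zero so that the laser ranging module faces the cup."""

    def __init__(self, kp: float = 0.4, ki: float = 0.02, kd: float = 0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error: float, dt: float) -> float:
        """Return the control output for the current error sample."""
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage sketch: error = teacup_centre_x - image_centre_x; the output is added to the
# pan servo angle each control cycle until the error is close to zero.
```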
Furthermore, the human-computer interaction system comprises a voice module which receives and decodes the user's voice command and sends the decoded signal to the main control system. The voice module is provided with a first-level password, and the tea pouring instruction is valid only within a preset time after the first-level password has been responded to, which effectively avoids interference. The human-computer interaction system prompts the user by voice or light after the robot completes the tea pouring action.
As shown in Fig. 1, the tea pouring robot based on machine binocular vision and the artificial potential field obstacle avoidance method comprises a main control system, a power supply system, a human-computer interaction system, an identification and positioning system and a mechanical arm execution system. The main control system uses an STM32F407ZGT6 microcontroller. The power supply system comprises a BOOST circuit and a regulated power supply circuit. The human-computer interaction system mainly comprises an LD3320 speech recognition module. The identification and positioning system mainly comprises a binocular camera and a laser ranging module, the binocular camera consisting of two OPENMV cameras. The mechanical arm execution system mainly comprises the actuating mechanism that controls the movement of the six-axis mechanical arm.
Fig. 2 shows the work flow of the robot in one control cycle. The robot first checks its current working state; if initialization has not succeeded, the system returns and initializes again. After successful initialization, the robot enters a waiting state. The user then sends an instruction to the robot by voice; the LD3320 module decodes the instruction after receiving it and transmits the decoded information to the main control chip through a serial port for mode selection. When the object-grabbing mode is started, the binocular camera searches for the target object in its field of view; if the target is not found, the chassis of the mechanical arm is rotated until the object appears in the field of view. Once the object is found, the two cameras respectively identify the image coordinates of the teacup in their own pictures, a similar-triangle operation is performed on the parallax of the teacup between the two images, and the position of the teacup relative to the robot is calculated. The position coordinates x and y are sent to the main control chip, which controls the mechanical arm chassis to rotate with a PID algorithm so that the camera faces the object, and the distance z of the teacup is then obtained with the laser ranging module. After the three-dimensional space coordinates of the target have been obtained, the motion of the mechanical arm is solved inversely: the rotation angle of each steering engine is derived by combining the artificial potential field obstacle avoidance method with the D-H coordinate system of the robot, and the steering engines are controlled with a PID algorithm so that the mechanical arm grabs the teacup. After the teacup is grabbed, an LED is lit to mark the completion of the task. A simplified version of this cycle is sketched below.
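This sketch is illustrative only: every object and method name (robot, voice.read_command, camera.locate_target and so on) is hypothetical and stands in for the drivers described above.

```python
def control_cycle(robot):
    """One control cycle as in Fig. 2; `robot` is a hypothetical wrapper around
    the voice module, binocular camera, laser module and arm drivers."""
    if not robot.initialized:
        robot.initialize()                      # retry initialisation if it failed
        return
    command = robot.voice.read_command()        # decoded LD3320 instruction, or None
    if command != "pour_tea":
        return                                  # stay in the waiting state
    while not robot.camera.target_in_view():    # rotate the arm chassis until the
        robot.arm.rotate_chassis(step_deg=5)    # teacup enters the field of view
    x, y = robot.camera.locate_target()         # parallax-based image position
    robot.arm.aim_chassis(x, y)                 # PID turn so the camera faces the cup
    z = robot.laser.distance()                  # depth from the laser ranging module
    angles = robot.solve_joint_angles(x, y, z)  # inverse solution with obstacle avoidance
    robot.arm.move_to(angles)
    robot.arm.grab()
    robot.led.on()                              # mark task completion
```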
The LD3320 module of the human-computer interaction system decodes the information input by the user's voice and sends it to the main control chip through a serial port at a specific frequency, realizing real-time feedback of the information; the control system performs binocular camera recognition and image processing according to the information obtained from this feedback, and the feedback is also used for mechanical arm control. The LD3320 module uses a first-level password: an instruction is valid only within 15 s after the first-level password has been responded to, which avoids interference. A sketch of this gating logic follows.
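This is a minimal sketch of the 15 s window, assuming hypothetical decoded phrase names; it is not the firmware used on the robot.

```python
import time

PASSWORD_WINDOW_S = 15.0  # commands are accepted only within 15 s of the password

class VoiceGate:
    """Accept a tea pouring command only inside the window opened by the
    first-level password, mirroring the LD3320 usage described above."""

    def __init__(self):
        self.window_open_until = 0.0

    def on_phrase(self, phrase: str) -> bool:
        """Return True only if a tea pouring command arrives inside the window."""
        now = time.monotonic()
        if phrase == "password":    # hypothetical decoded first-level password phrase
            self.window_open_until = now + PASSWORD_WINDOW_S
            return False
        if phrase == "pour_tea":    # hypothetical decoded tea pouring command
            return now <= self.window_open_until
        return False
```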
When the binocular camera identifies an object, it identifies the colors in the image, in particular the color of the teacup, and then binarizes the image according to the RGB value of the object's characteristic color block to highlight the target. The x and y coordinate values of the target are then obtained, which makes it convenient to rotate the mechanical arm chassis so that it directly faces the teacup. The x and y coordinate values of the object, combined with the z value obtained by the laser ranging module, are sent to the main control chip through a serial port at a specific frequency, and the rotation angle of each axis of the mechanical arm is obtained after computation for further processing.
In the process of grabbing an object with the mechanical arm, a D-H coordinate system is established for the arm, the positions of the target object and of obstacles are identified by the camera, the motion path of the mechanical arm is planned by combining the artificial potential field method with the inverse-kinematics formula of the arm, the angle through which each axis needs to rotate is obtained, and the steering engines are controlled to rotate to the corresponding positions so that the mechanical gripper grabs the object. After the object is grabbed, the mechanical arm is controlled to return and carry out the tea-making work, and a lamp is lit to indicate that the work is finished. A minimal sketch of the potential field step is given below.
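In this sketch the gains, influence radius and step length are illustrative, not the robot's tuned values. The negative gradient of the combined attractive and repulsive potentials gives the direction in which the end-effector waypoint is advanced before the joint angles are solved inversely.

```python
import numpy as np

K_ATT = 1.0    # attractive gain towards the teacup
K_REP = 0.05   # repulsive gain away from obstacles
RHO_0 = 0.15   # obstacle influence radius in metres
STEP = 0.01    # waypoint step length in metres

def apf_step(p: np.ndarray, goal: np.ndarray, obstacles) -> np.ndarray:
    """Advance the end-effector waypoint p one step along the negative gradient
    of the combined attractive/repulsive artificial potential."""
    force = K_ATT * (goal - p)                          # attractive component
    for obs in obstacles:
        diff = p - obs
        rho = np.linalg.norm(diff)
        if 1e-6 < rho < RHO_0:                          # only nearby obstacles repel
            force += K_REP * (1.0 / rho - 1.0 / RHO_0) / rho**2 * (diff / rho)
    norm = np.linalg.norm(force)
    return p if norm < 1e-9 else p + STEP * force / norm
```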
In summary, the voice-controlled tea pouring robot system based on machine binocular vision, the artificial potential field obstacle avoidance method and mechanical arm motion analysis takes an STM32F407ZGT6 as its core, a binocular OPENMV camera and a laser ranging module as its sensors, a six-axis mechanical arm as its motion module, and an LD3320 voice module for communicating with the user. Drawing on the state of the art in modern robotics, and by combining binocular vision technology with the artificial potential field obstacle avoidance method, the robot offers high cost performance, high efficiency and high reliability, and can listen to commands and execute them quickly. The robot applies an improved binocular machine vision recognition algorithm and the artificial potential field method to a tea pouring robot, combines voice-module control with the six-axis mechanical arm, and runs fully automatically. In the future it can provide daily personal services for individual users and can also be applied in beverage shops, restaurants and the like, freeing labor, improving working efficiency and promoting the development of the service industry in China.
It will be understood by those skilled in the art that the foregoing is merely a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included within the scope of the present invention.

Claims (9)

1. A tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method, characterized by comprising a human-computer interaction system, a main control system, an identification and positioning system and a mechanical arm execution system, the human-computer interaction system, the identification and positioning system and the mechanical arm execution system all being connected with the main control system; the human-computer interaction system is used for acquiring a tea pouring instruction; the identification and positioning system comprises a binocular camera and a laser ranging module; the mechanical arm execution system comprises a steering engine and a mechanical arm; when the human-computer interaction system receives a tea pouring instruction, the instruction signal is sent to the main control system; the main control system controls the binocular camera to search for and identify the teacup, and after identification the position information of the teacup relative to the robot is fed back to the main control system; the main control system controls the laser ranging module to face the teacup and obtains the distance information of the teacup relative to the robot; according to the position and distance information of the teacup, the main control system inversely solves the rotation angle of the steering engine by combining the artificial potential field obstacle avoidance method with the D-H coordinate system of the robot, controls the steering engine to rotate by means of a PID algorithm so as to control the mechanical arm to grab the teacup, and carries out the tea-making operation after the teacup is grabbed.
2. The tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method according to claim 1, characterized in that, when the binocular camera identifies the teacup, the color of the teacup is identified first and the image is binarized according to the RGB value of the characteristic color block of the teacup so as to highlight the teacup.
3. The tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method according to claim 1, characterized in that, when the binocular camera identifies the teacup, the two cameras respectively identify the image coordinates of the teacup in their own pictures, a similar-triangle operation is performed on the parallax of the teacup between the two images, and the position information of the teacup is calculated.
4. The tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method according to claim 1, characterized in that the main control system controls the rotation of the laser ranging module by means of a PID algorithm so that the laser ranging module directly faces the teacup.
5. The tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method according to claim 1, characterized in that, when the positive directions of the binocular camera and the laser ranging module coincide, the main control system makes the laser ranging module face the teacup by controlling the binocular camera to face the teacup.
6. The tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method according to claim 1, characterized in that the human-computer interaction system comprises a voice module which receives and decodes the user's voice command and sends the decoded signal to the main control system.
7. The tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method according to claim 6, characterized in that the voice module is provided with a first-level password, and the tea pouring instruction is valid within a preset time after the first-level password has been responded to.
8. The tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method according to claim 1, characterized in that the human-computer interaction system prompts the user by voice or light after the robot completes the tea pouring action.
9. The tea pouring robot based on machine binocular vision and an artificial potential field obstacle avoidance method according to claim 1, characterized in that the robot further comprises a power supply system for supplying power to the robot.
CN202011405666.7A 2020-12-03 2020-12-03 Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method Pending CN112589809A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011405666.7A CN112589809A (en) 2020-12-03 2020-12-03 Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011405666.7A CN112589809A (en) 2020-12-03 2020-12-03 Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method

Publications (1)

Publication Number Publication Date
CN112589809A true CN112589809A (en) 2021-04-02

Family

ID=75188152

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011405666.7A Pending CN112589809A (en) 2020-12-03 2020-12-03 Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method

Country Status (1)

Country Link
CN (1) CN112589809A (en)



Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102165880A (en) * 2011-01-19 2011-08-31 南京农业大学 Automatic-navigation crawler-type mobile fruit picking robot and fruit picking method
CN102323817A (en) * 2011-06-07 2012-01-18 上海大学 A service robot control platform system and its method for realizing multi-mode intelligent interaction and intelligent behavior
CN102848388A (en) * 2012-04-05 2013-01-02 上海大学 Multi-sensor based positioning and grasping method for service robot
CN102902271A (en) * 2012-10-23 2013-01-30 上海大学 Binocular vision-based robot target identifying and gripping system and method
CN103503639A (en) * 2013-09-30 2014-01-15 常州大学 Double-manipulator fruit and vegetable harvesting robot system and fruit and vegetable harvesting method thereof
US20160184990A1 (en) * 2014-12-26 2016-06-30 National Chiao Tung University Robot and control method thereof
CN108171796A (en) * 2017-12-25 2018-06-15 燕山大学 A kind of inspection machine human visual system and control method based on three-dimensional point cloud
CN111258311A (en) * 2020-01-17 2020-06-09 青岛北斗天地科技有限公司 Obstacle avoidance method of underground mobile robot based on intelligent vision

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113146651A (en) * 2021-04-15 2021-07-23 华中科技大学 Tea making robot and control method thereof
CN113146651B (en) * 2021-04-15 2023-03-10 华中科技大学 Tea making robot and control method thereof
CN113867412A (en) * 2021-11-19 2021-12-31 中国工程物理研究院电子工程研究所 Multi-unmanned aerial vehicle track planning method based on virtual navigation
CN114536323A (en) * 2021-12-31 2022-05-27 中国人民解放军国防科技大学 Classification robot based on image processing
CN116834006A (en) * 2023-07-12 2023-10-03 山东大学 Hierarchical communication-based method and system for optimizing identification of dumping area of maximized robot

Similar Documents

Publication Publication Date Title
CN112589809A (en) Tea pouring robot based on binocular vision of machine and artificial potential field obstacle avoidance method
CN102514002B (en) Monocular vision material loading and unloading robot system of numerical control lathe and method thereof
WO2019232806A1 (en) Navigation method, navigation system, mobile control system, and mobile robot
CN113333998A (en) Automatic welding system and method based on cooperative robot
CN111055281B (en) A ROS-based autonomous mobile grasping system and method
CN111906788B (en) Bathroom intelligent polishing system based on machine vision and polishing method thereof
CN107433573B (en) Intelligent binocular automatic grasping robotic arm
CN100360204C (en) Control system of intelligent performing robot based on multi-processor cooperation
CN102902271A (en) Binocular vision-based robot target identifying and gripping system and method
WO2018209863A1 (en) Intelligent moving method and device, robot and storage medium
CN111015649B (en) Driving and controlling integrated control system
WO2017071372A1 (en) Robot having charging automatic-return function, system and corresponding method
CN111459274B (en) 5G + AR-based remote operation method for unstructured environment
CN106113067B (en) A kind of Dual-Arm Mobile Robot system based on binocular vision
CN113311825A (en) Visual and self-defined ROS intelligent robot man-machine interaction system and control method thereof
CN106997201B (en) Multi-robot cooperation path planning method
CN100361792C (en) A mobile manipulator control system
CN106003036A (en) Object grabbing and placing system based on binocular vision guidance
CN109199240A (en) A kind of sweeping robot control method and system based on gesture control
CN106514667A (en) Human-computer cooperation system based on Kinect skeletal tracking and uncalibrated visual servo
CN111702755A (en) An intelligent control system for robotic arms based on multi-eye stereo vision
CN108453739A (en) Stereoscopic vision positioning mechanical arm grasping system and method based on automatic shape fitting
CN110640744A (en) Industrial robot with fuzzy control of motor
Han et al. Grasping control method of manipulator based on binocular vision combining target detection and trajectory planning
Kragic et al. Model based techniques for robotic servoing and grasping

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 2021-04-02)