
CN105690371A - Space service robot-oriented hand-eye system - Google Patents


Info

Publication number
CN105690371A
CN105690371A (application CN201410697687.9A)
Authority
CN
China
Prior art keywords
robot
eye system
hand
space
space service
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410697687.9A
Other languages
Chinese (zh)
Inventor
梁帆
代凤飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University of Technology
Original Assignee
Tianjin University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University of Technology
Priority to CN201410697687.9A
Publication of CN105690371A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)

Abstract

The hand-eye system for space service robots comprises a master-control Pioneer robot platform (1) and a robot self-localization camera (2), together with a robotic arm (3), a mechanical gripper (4) and a miniature camera (5) attached to the arm (3), a simulated space-capsule operation panel (6) with control buttons (7) on the panel (6), and a power supply module (8). The invention provides a vision-based mobile-robot hand-eye system capable of assisting astronauts in completing certain tasks.

Description

A hand-eye system for space service robots

Technical Field

To meet the need, in a simulated weightless space environment, for a robot that assists astronauts with certain tasks, and to solve the problems of how a service robot can autonomously identify a target, autonomously locate it, and autonomously press a button, the present invention proposes and implements a vision-based visual feedback system for a mobile robot on the Pioneer mobile robot platform. Building on image-processing algorithms such as the Hough transform and the Cam-shift tracking algorithm, the system extracts target image features and uses a visual measurement model based on the pinhole imaging principle to derive the mapping between the image and the target and to obtain the target point's spatial coordinates in real time, thereby controlling the manipulator to complete the assigned task.
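The pinhole-imaging measurement model described above can be sketched in a few lines: given a target of known physical size (such as a panel button), depth follows from the ratio of real to projected radius, and an image point back-projects to camera-frame coordinates at that depth. This is a minimal illustrative sketch, not the patent's implementation; the focal length and principal-point values in the usage example are hypothetical.

```python
def estimate_depth(focal_length_px, real_radius_mm, pixel_radius_px):
    """Pinhole-model depth estimate: a circle of known physical radius R
    that projects to r pixels lies at depth Z = f * R / r."""
    if pixel_radius_px <= 0:
        raise ValueError("pixel radius must be positive")
    return focal_length_px * real_radius_mm / pixel_radius_px

def pixel_to_camera_xy(u, v, cx, cy, focal_length_px, depth_mm):
    """Back-project an image point (u, v) to camera-frame X/Y at depth Z,
    given the principal point (cx, cy) and focal length in pixels."""
    x = (u - cx) * depth_mm / focal_length_px
    y = (v - cy) * depth_mm / focal_length_px
    return x, y
```

For example, with an assumed focal length of 500 px, a 10 mm button radius appearing as 50 px gives a depth of 100 mm, and a detection at pixel (370, 240) with principal point (320, 240) back-projects to 10 mm to the right of the optical axis.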

Background Art

With the continuous development of science and technology, humans will enter space more and more often. Under the weightless conditions of space, however, astronauts' work is subject to many restrictions. Astronauts could therefore issue task and action commands through EEG and EMG biosignals and voice signals, letting a space service robot leave the capsule in their place and complete the assigned task. To validate such multi-modal human-machine interface technology, it is necessary to develop a service robot with a certain degree of intelligence. In a simulated space environment, the service robot autonomously operates the spacecraft control panel after receiving instructions from the astronaut. How to enable the service robot to autonomously identify a target, locate it, and press a button is therefore the problem to be solved.

In the field of space service robots there are almost no patented inventions at home or abroad. The Spirit and Opportunity Mars rovers, launched from Cape Canaveral Air Force Station in 2003 (Spirit on June 10), are space robots; Spirit landed in Gusev Crater in the southern hemisphere of Mars on January 4, 2004. Spirit is 1.6 m long, 2.3 m wide, and 1.5 m high, and weighs 174 kg. Its "brain" is a computer capable of executing about 20 million instructions per second. Its main task is to detect whether water and life exist on Mars and to analyze the planet's material composition, so as to infer whether Mars could be made habitable. Both Spirit and Opportunity performed reconnaissance and exploration after landing; neither serves astronauts inside a space capsule. A space-oriented service robot of the kind described here is therefore well worth developing.

Summary of the Invention

The object of the present invention is to design a monocular-vision hand-eye system for a mobile robot built on the Pioneer mobile robot platform. Digital image-processing methods such as the Hough transform and the Cam-shift tracking algorithm extract the target's image features; combined with prior knowledge and a mathematical model based on the pinhole imaging principle, the target's spatial coordinates are obtained in real time, accomplishing the tasks of recognizing the target and guiding the manipulator to press the button. The technical content of the invention is described in detail below:

Referring to Figure 1, the robot autonomously identifies the target, locates it, and presses the button according to the given task. Figure 1 is a block diagram of the vision system's position control principle. Through image acquisition and feature extraction, the system obtains the coordinates of feature points on the two-dimensional image plane. A geometric model of the target and the camera is then used to estimate the positional relationship between them. The error between this estimate and the desired position is passed to the robot controller, which uses it to compute the manipulator's control signal and thus drive the manipulator to complete the task.
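The control loop just described — estimated position compared with the desired position, with the error fed to the controller — can be sketched as a simple proportional position controller. This is an illustrative sketch only; the gain, tolerance, and iteration limit are assumptions, not values from the patent.

```python
def servo_step(current_xyz, target_xyz, gain=0.5):
    """One iteration of position-based visual servoing: the error between
    the estimated and desired positions yields a proportional command."""
    error = [t - c for c, t in zip(current_xyz, target_xyz)]
    command = [gain * e for e in error]
    new_pos = [c + d for c, d in zip(current_xyz, command)]
    return new_pos, error

def servo_to(target, start, gain=0.5, tol=1e-3, max_iter=100):
    """Iterate the proportional step until the error norm falls below tol."""
    pos = list(start)
    for _ in range(max_iter):
        pos, err = servo_step(pos, target, gain)
        if sum(e * e for e in err) ** 0.5 < tol:
            break
    return pos
```

With a gain below 1 the position error shrinks geometrically each cycle, which is the behavior the block diagram's feedback path is meant to produce.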

Given known target information, depth is recovered from the camera image, a monocular-vision ranging model is established, and the target's physical shape and color features are extracted. Because recognition uses both color features and contours, its accuracy is improved. Before feature extraction, the target image is filtered — smoothed, dilated, and eroded — according to the feature attributes. Dilation convolves the image with a kernel, typically a small solid square or disk with a reference point at its center; erosion is the inverse operation, taking the minimum pixel value over the kernel region. First the captured RGB image is converted to grayscale with the color-space conversion function cvCvtColor. To suppress high-frequency noise that would interfere with circle detection, the image is then smoothed with cvSmooth using a 9×9 Gaussian kernel. Finally, circles are detected with the Hough circle transform function cvHoughCircles. The accumulator resolution for locating circle centers is set to 6, the minimum distance between two distinct circles to 120, the upper edge threshold to 36, the accumulator threshold to 85, the minimum circle radius to 3, and the maximum circle radius to 250.
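The dilation and erosion operations described above (kernel-wise maximum and minimum) can be illustrated in pure Python on a small grid. The patent itself uses OpenCV's C API (cvCvtColor, cvSmooth, cvHoughCircles); the stand-alone functions below only demonstrate the morphology step, with a 3×3 square kernel assumed for illustration.

```python
def dilate(img, ksize=3):
    """Grayscale dilation: each output pixel is the maximum over the
    ksize x ksize neighborhood (solid square kernel, centered reference point)."""
    h, w, r = len(img), len(img[0]), ksize // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(
                img[j][i]
                for j in range(max(0, y - r), min(h, y + r + 1))
                for i in range(max(0, x - r), min(w, x + r + 1))
            )
    return out

def erode(img, ksize=3):
    """Grayscale erosion: the minimum over the same neighborhood,
    the inverse operation of dilation."""
    h, w, r = len(img), len(img[0]), ksize // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(
                img[j][i]
                for j in range(max(0, y - r), min(h, y + r + 1))
                for i in range(max(0, x - r), min(w, x + r + 1))
            )
    return out
```

On a binary image, dilating a single white pixel grows it into a 3×3 block, and eroding that block shrinks it back to the single pixel — the open/close behavior used to clean up noise before circle detection.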

The advantages and effects of the present invention over the prior art are best analyzed structurally and supported by appropriate data.

The technical problem to be solved by this patent is to provide a vision-based mobile-robot hand-eye system capable of assisting astronauts in completing certain tasks.

The system has a master-control Pioneer robot platform and further comprises a robotic arm, a miniature camera attached to the arm, a simulated space-capsule operation panel with control buttons, and a power supply module. The robot platform is a Pioneer P3-DX running Windows XP, which serves as the control core; the corresponding mechanical and electrical structures are connected to it to form the complete hand-eye system. The robotic arm is Robai's seven-degree-of-freedom manipulator, and its Cyton C++ API is used to build the manipulator control platform. The miniature camera is a Canon high-definition camera less than 1 cm in diameter, connected to the robot over USB to capture and transmit images of the operation panel. The space-capsule operation panel and its control buttons were developed and laid out in-house; the six buttons are in six different colors for recognition and control within the system, and the panel as a whole simulates the environment inside a space capsule.

Brief Description of the Drawings:

Figure 1 is a block diagram of the position control principle of the system's vision system;

Figure 2 is an overall view of the system;

Figure 3 shows the system's space-capsule operation panel;

Embodiment:

The hand-eye system for space service robots of the present invention enables the robot to autonomously identify a target, locate it, and press a button according to the given task.

Referring to Figure 1, through image acquisition and feature extraction the system obtains the coordinates of feature points on the two-dimensional image plane. A geometric model of the target and the camera is then used to estimate the positional relationship between them. The error between this estimate and the desired position is passed to the robot controller, which uses it to compute the manipulator's control signal and thus drive the manipulator to complete the task.

Referring to the robot shown in Figure 2: to meet the task requirements, the main hardware mounts a seven-degree-of-freedom manipulator on the Pioneer mobile robot platform, and a small camera installed at the end of the manipulator completes the hand-eye system.

Referring to Figure 3, the space-capsule operation panel carries six buttons and one knob. After the robot is ordered to press a button of a given color, it identifies the detected target through combined shape and color feature extraction and marks the recognition result.
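The combined shape-and-color identification step can be sketched as a nearest-reference-color classifier: once a circular button candidate is detected, its mean color is matched against the six button colors. The reference RGB values below are hypothetical — the patent does not specify the actual colors of the six buttons.

```python
# Hypothetical reference colors for the six panel buttons (RGB);
# the actual colors used on the panel are not given in the patent.
BUTTON_COLORS = {
    "red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "cyan": (0, 255, 255), "magenta": (255, 0, 255),
}

def classify_button(mean_rgb):
    """Assign a detected circle's mean color to the nearest reference
    button color by squared Euclidean distance in RGB space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(BUTTON_COLORS, key=lambda name: dist2(mean_rgb, BUTTON_COLORS[name]))
```

Pairing this color check with the Hough-circle shape check is what gives the double-feature recognition described for the panel.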

Claims (3)

1. A hand-eye system for space service robots, having a master-control Pioneer robot platform (1) and a robot self-localization camera (2), and further comprising a robotic arm (3), a mechanical gripper (4) and a miniature camera (5) attached to the robotic arm (3), a simulated space-capsule operation panel (6) with control buttons (7) on the panel (6), and a power supply module (8); the invention provides a vision-based mobile-robot hand-eye system capable of assisting astronauts in completing certain tasks.

2. The hand-eye system for space service robots according to claim 1, characterized in that the robotic arm (3) has 7 degrees of freedom, is mounted directly on top of the master-control robot platform (1), and can reach the area in front of the mechanical gripper (4).

3. The hand-eye system for space service robots according to claim 1, characterized in that the miniature camera (5) is installed directly above the end of the robotic arm (3), so that the robot can capture and recognize the image in front of the mechanical gripper (4).
CN201410697687.9A 2014-11-28 2014-11-28 Space service robot-oriented hand-eye system Pending CN105690371A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410697687.9A CN105690371A (en) 2014-11-28 2014-11-28 Space service robot-oriented hand-eye system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410697687.9A CN105690371A (en) 2014-11-28 2014-11-28 Space service robot-oriented hand-eye system

Publications (1)

Publication Number Publication Date
CN105690371A 2016-06-22

Family

ID=56294141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410697687.9A Pending CN105690371A (en) 2014-11-28 2014-11-28 Space service robot-oriented hand-eye system

Country Status (1)

Country Link
CN (1) CN105690371A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107914272A (en) * 2017-11-20 2018-04-17 北京科技大学 A kind of method of seven freedom robot assemblies crawl target object
CN107914272B (en) * 2017-11-20 2020-06-05 北京科技大学 Method for grabbing target object by seven-degree-of-freedom mechanical arm assembly

Similar Documents

Publication Publication Date Title
CN107767423B (en) A binocular vision-based target positioning and grasping method for manipulators
US11691273B2 (en) Generating a model for an object encountered by a robot
CN109255813B (en) Man-machine cooperation oriented hand-held object pose real-time detection method
WO2022078467A1 (en) Automatic robot recharging method and apparatus, and robot and storage medium
US10335963B2 (en) Information processing apparatus, information processing method, and program
CN107914272B (en) Method for grabbing target object by seven-degree-of-freedom mechanical arm assembly
CN107030692B (en) A method and system for manipulator teleoperation based on perception enhancement
Sanchez-Matilla et al. Benchmark for human-to-robot handovers of unseen containers with unknown filling
CN112634318B (en) A teleoperating system and method for an underwater maintenance robot
Pfanne et al. Fusing joint measurements and visual features for in-hand object pose estimation
CN110900581A (en) Four-degree-of-freedom mechanical arm vision servo control method and device based on RealSense camera
US9008442B2 (en) Information processing apparatus, information processing method, and computer program
Melchiorre et al. Collision avoidance using point cloud data fusion from multiple depth sensors: a practical approach
CN109079794B (en) A robot control and teaching method based on human posture following
Chaudhury et al. Using collocated vision and tactile sensors for visual servoing and localization
Hoffmann et al. Adaptive robotic tool use under variable grasps
WO2022217667A1 (en) Human physiological sample collection method and apparatus, electronic device, and storage medium
CN108828996A Vision-based remote control system and method for a robotic arm
CN114463244A (en) A visual robot grasping system and its control method
CN103226693B Apparatus and method for identifying and spatially positioning salvage objects based on panoramic stereo vision
CN110555404A (en) Flying wing unmanned aerial vehicle ground station interaction device and method based on human body posture recognition
Faria et al. A methodology for autonomous robotic manipulation of valves using visual sensing
CN105690371A (en) Space service robot-oriented hand-eye system
CN108960109B (en) Space gesture positioning device and method based on two monocular cameras
Schnaubelt et al. Autonomous assistance for versatile grasping with rescue robots

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160622
