CN100389013C - Reconstruction method of humanoid robot operation scene based on multi-information fusion - Google Patents
- Publication number
- CN100389013C CN100389013C CNB2005100599150A CN200510059915A CN100389013C CN 100389013 C CN100389013 C CN 100389013C CN B2005100599150 A CNB2005100599150 A CN B2005100599150A CN 200510059915 A CN200510059915 A CN 200510059915A CN 100389013 C CN100389013 C CN 100389013C
- Authority
- CN
- China
- Prior art keywords
- robot
- scene
- data
- model
- operator
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Manipulator (AREA)
Abstract
The humanoid robot operation scene constructed by the present invention is a real-time video image that combines operator control commands with feedback information. The scene displays, in real time, a model of the humanoid robot at the work site together with a model of its working environment. The scene receives position information from the robot's work site and sensor information from the running robot, and uses this information to drive the motion of the humanoid robot model and its environment model, producing a real-time video image. At the same time, the scene performs predictive simulation based on the operator's commands, generating the ideal-case operating data of the humanoid robot and the position data of each model. When real-time feedback is missing or unavailable, the data generated by predictive simulation is used to drive the models. The operator can change the viewpoint arbitrarily, achieving observation from any angle.
Description
Technical field:
The invention belongs to the field of robotics and is mainly used for three-dimensional reconstruction of a humanoid robot operation scene. It is suitable for robot teleoperation control: it can display three-dimensional images of the humanoid robot and of the objects in its operation scene in real time, providing visual telepresence for humanoid robot teleoperation.
Background art:
A humanoid robot is a robot with human appearance characteristics that can imitate basic human actions. Teleoperation is an important technology in robot applications. Through a teleoperation platform, an operator can monitor and control a remote robot to complete various tasks, so that humanoid robots can replace humans in environments that are inaccessible or that endanger human health or life.
Image display of the working environment is a key technology in teleoperation control. At present, the working environment is displayed mainly in two ways: on-site two-dimensional video images, and three-dimensional virtual scenes based on operator commands.
In the on-site two-dimensional video method, multiple video cameras are installed at the robot's work site to film the robot and its surroundings. The video images are transmitted to the operator over a network and displayed on a computer. Such images show the real scene of the robot's operation and truthfully reflect its state, but the method has the following disadvantages: 1) it provides no three-dimensional information, so actual three-dimensional positioning operations are difficult to perform; 2) the viewing angle is limited, because the cameras installed at the work site are fixed in position and cannot provide a comprehensive view; 3) the delay is large, because video image files are bulky and, with limited network bandwidth, their transmission incurs considerable latency.
The basic principle of the command-based three-dimensional virtual scene is as follows: for each operating instruction the operator sends to the robot, the robot is assumed to execute it correctly and produce the corresponding motion. To describe this motion, three-dimensional modeling software on the operator's computer builds a robot motion model and a three-dimensional virtual image from the operator's instructions, producing a predictive simulation. Such an image is a prediction of the robot's command execution and gives the operator a vivid picture of the task. In addition, because the scene is built with 3D modeling software, the operator can easily change the viewing angle. The disadvantage is that such a system shows the operator only a prediction and simulation of the robot and its work site; it cannot truthfully reflect the robot's actual operating state.
Summary of the invention:
The invention combines operator control commands, the humanoid robot's own sensor readings, and environment detection information to construct a three-dimensional scene of the working environment.
The technical scheme adopted by the present invention is as follows:
The humanoid robot operation scene constructed by the present invention is a real-time video image combining operation commands and feedback information. The scene displays the humanoid robot model and the environment model of the robot's work site in real time.
The scene receives position information from the humanoid robot's work site and sensor information from the running robot, and uses this information to drive the motion of the humanoid robot model and its environment model, producing real-time animated images. At the same time, the scene receives the operator's commands and performs prediction, generating the ideal-case robot operating data and the position data of each model. When real-time feedback is missing or unavailable, the predicted data is used to drive the models.
The operator can change the viewpoint arbitrarily, achieving observation from any angle.
The main technical scheme is as follows:
Three-dimensional modeling software is used to build 3D models of the various objects at the humanoid robot's work site; the models have the same external features as the actual objects. A data processing module matched to the models is built at the same time. The data processing module can receive several kinds of information and, after matching, drive the motion of the models in the operation scene.
There are two main kinds of models in the humanoid robot operation scene: the environment model and the humanoid robot model.
(1) Three-dimensional modeling software is used to build an environment model of the robot's work site, whose structure is known. The environment model has the same external features and positional relationships as the actual objects, and can be observed from any angle.
(2) Three-dimensional modeling software is used to build the humanoid robot model. The model has the same geometric features and degree-of-freedom configuration as the robot, and satisfies the kinematic constraints of the humanoid robot's multi-link mechanism. The humanoid robot model receives position data to locate itself in the scene, and receives joint angle data for each degree of freedom to drive the angle changes between the links, reproducing the motion of the humanoid robot.
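Driving the links from joint angles, as described above, amounts to a forward-kinematics computation. The following is a minimal sketch (not the patent's implementation) for a planar serial chain, where each sensed joint angle rotates the next link relative to the previous one; the link lengths and angles are hypothetical:

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Compute the 2-D endpoint of each link of a planar serial chain,
    given joint angles (radians) measured relative to the previous link."""
    points = [(0.0, 0.0)]  # base of the chain
    x, y, heading = 0.0, 0.0, 0.0
    for length, angle in zip(link_lengths, joint_angles):
        heading += angle  # accumulate relative joint rotations
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        points.append((x, y))
    return points

# Two unit links, both joints at 90 degrees:
# the chain goes straight up, then doubles back along -x.
pts = forward_kinematics([1.0, 1.0], [math.pi / 2, math.pi / 2])
```

A spatial robot would use full 3-D transforms per joint, but the idea is the same: sensed or predicted joint angles are mapped one-to-one onto the model's links.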
Position sensors are installed at the robot's work site to measure the positions of the robot and its operation target in real time, outputting the measured three-dimensional coordinate data. An angle sensor is installed at each joint of the robot's body, measuring in real time the relative angles between the links connected at that joint.
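The feedback these sensors produce can be grouped into a single time-stamped sample per measurement cycle. The field names below are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FeedbackSample:
    """One time-stamped reading fed back from the work site."""
    timestamp: float                              # seconds since session start
    robot_position: Tuple[float, float, float]    # 3-D coordinates of the robot
    target_position: Tuple[float, float, float]   # 3-D coordinates of the operation target
    joint_angles: List[float] = field(default_factory=list)  # radians, one per DOF

sample = FeedbackSample(
    timestamp=0.04,
    robot_position=(1.2, 0.0, 0.9),
    target_position=(2.0, 0.5, 0.8),
    joint_angles=[0.0, 0.3, -0.1],
)
```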
The data processing module mainly implements the following functions:
(1) Receiving the position data of the robot in its working environment, and positioning the robot model within the environment model of the work site.
(2) Receiving the relative angle data between the links connected at each joint of the robot's body, matching it to the robot model, and driving the motion between the model's links to express the robot's motion.
(3) Receiving the commands issued by the operator and interpreting them. From each command, the ideal motion trajectory of the robot's task is generated, predicting the robot's operating data. The trajectories that can be generated include the trajectory of the robot's joint angle data during command execution and the trajectory of its position data. In the ideal case, these data are identical to the data fed back from the robot's work site.
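A minimal way to generate such an ideal trajectory is to interpolate each joint from its current angle to the commanded angle over the expected execution time. The linear interpolation below is an assumption for illustration; the patent does not specify the interpolation scheme:

```python
def predict_joint_trajectory(start_angles, goal_angles, duration, dt):
    """Linearly interpolate every joint from start to goal, producing one
    predicted angle vector per time step of size dt."""
    steps = max(1, int(round(duration / dt)))
    trajectory = []
    for i in range(steps + 1):
        s = i / steps  # interpolation parameter in [0, 1]
        trajectory.append([a + s * (b - a)
                           for a, b in zip(start_angles, goal_angles)])
    return trajectory

# Move two joints from 0 to 1.0 rad over 1 s, sampled every 0.25 s.
traj = predict_joint_trajectory([0.0, 0.0], [1.0, 1.0], duration=1.0, dt=0.25)
```

Each element of `traj` plays the same role as a real joint-angle sample from the site, so it can drive the model through the same interface.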
While the scene is running, the data processing module normally receives data from the robot's work site in real time to drive the models. When the on-site data is temporarily missing, the predicted data generated by simulating the operator's commands is used instead, maintaining the continuity of the models' motion.
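The fallback between live and predicted data can be sketched as a freshness check: if the latest feedback sample is too old, switch to the prediction. The staleness threshold and names below are assumptions for illustration:

```python
STALE_AFTER = 0.5  # seconds without feedback before falling back to prediction

def select_drive_data(now, last_feedback_time, feedback, predicted):
    """Return the data that should drive the models this frame:
    live feedback when it is fresh, otherwise the command-prediction data."""
    if feedback is not None and (now - last_feedback_time) <= STALE_AFTER:
        return feedback, "live"
    return predicted, "predicted"

# Feedback received 0.2 s ago: use it.
data, source = select_drive_data(10.0, 9.8,
                                 feedback=[0.1, 0.2], predicted=[0.0, 0.0])
# Feedback 2 s old: fall back to the prediction.
data2, source2 = select_drive_data(12.0, 10.0,
                                   feedback=[0.1, 0.2], predicted=[0.3, 0.4])
```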
The beneficial effects of the present invention are:
1. Multiple kinds of information are fused to display animated images of the robot's work site in real time. The models in the scene are driven by data fed back from the site; when that feedback is temporarily missing, command-prediction data is used for display instead, keeping the display continuous.
2. Images can be observed from any angle. The operator can change the viewpoint arbitrarily and observe the details of the scene.
Brief description of the drawings:
Figure 1 is a working-principle diagram of the humanoid robot operation scene based on multi-information fusion.
Detailed description:
The whole humanoid robot operation scene works as follows:
Step 1: the robot starts running and teleoperation control begins. The computer program is started and displays the pre-built scene models. Initialization data is used to set the initial positions of the robot model and its operation target model, and the initial angles between the links of the robot model. This step produces the scene's initial display.
Step 2: the scene's data processing module receives the teleoperator's commands in real time, interprets them, and generates predicted trajectory data. The predicted data drives each model in the virtual scene, forming the three-dimensional virtual scene. The scene generated in this step shows the ideal motion of the robot executing the operator's commands.
Step 3: the robot's own sensors measure each joint angle in real time and transmit the data to the scene's data processing module through the teleoperation platform. The data processing module stops using the predicted joint angle data and instead uses the real joint angle data to drive the model, expressing the robot's own motion. At this point, the robot model in the three-dimensional virtual scene truthfully expresses the robot's running state at the work site.
Step 4: the environment detection sensors at the robot's work site start to operate and obtain the position data of the robot and its operation target. These data are transmitted to the scene's data processing module through the teleoperation platform, and the module uses them to position the robot model and its environment model. At this point, the three-dimensional scene truthfully shows the positional relationship between the robot and its environment at the work site.
The robot's own sensor feedback and the position feedback from the environment detection sensors are used together to drive the three-dimensional virtual scene of the robot and environment models, truthfully expressing the robot's running state and its position in the environment. When on-site feedback is temporarily missing, the scene's data processing module automatically selects the trajectory data generated by command prediction to drive the models; once feedback data is received again, it switches back to driving the models with real data.
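Taken together, these steps amount to a per-frame update loop: consume any fresh feedback, fall back to the command prediction when none has arrived, and apply the chosen sample to the models. A schematic sketch with hypothetical model stubs (the patent does not prescribe this structure):

```python
class ModelStub:
    """Minimal stand-in for a scene model; records the last data applied."""
    def __init__(self):
        self.last = None

    def apply(self, sample):
        self.last = sample

def update_scene(feedback_queue, predicted_queue, robot_model):
    """One display-loop iteration: prefer live feedback; otherwise use the
    next command-predicted sample so the displayed motion stays continuous."""
    if feedback_queue:
        sample, source = feedback_queue.pop(0), "live"
    elif predicted_queue:
        sample, source = predicted_queue.pop(0), "predicted"
    else:
        return "idle"  # nothing to display this frame
    robot_model.apply(sample)
    return source

robot = ModelStub()
live = [{"joint_angles": [0.1, 0.2]}]
predicted = [{"joint_angles": [0.0, 0.0]}, {"joint_angles": [0.05, 0.1]}]

first = update_scene(live, predicted, robot)   # live data available
second = update_scene(live, predicted, robot)  # live queue now empty
```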
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2005100599150A CN100389013C (en) | 2005-04-01 | 2005-04-01 | Reconstruction method of humanoid robot operation scene based on multi-information fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN1840298A CN1840298A (en) | 2006-10-04 |
CN100389013C true CN100389013C (en) | 2008-05-21 |
Family
ID=37029581
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2005100599150A Expired - Fee Related CN100389013C (en) | 2005-04-01 | 2005-04-01 | Reconstruction method of humanoid robot operation scene based on multi-information fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100389013C (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101434066B (en) * | 2008-10-20 | 2012-11-21 | 北京理工大学 | Method and platform for predicating teleoperation of robot |
CN101844353B (en) * | 2010-04-14 | 2011-08-10 | 华中科技大学 | Teleoperation task planning and simulation method for mechanical arm/dexterous hand system |
US9879994B2 (en) * | 2011-06-15 | 2018-01-30 | Trimble Inc. | Method of placing a total station in a building |
JP5892361B2 (en) * | 2011-08-02 | 2016-03-23 | ソニー株式会社 | Control device, control method, program, and robot control system |
CN107004298B (en) * | 2016-04-25 | 2020-11-10 | 深圳前海达闼云端智能科技有限公司 | Method and device for establishing three-dimensional model of robot and electronic equipment |
CN112388678B (en) * | 2020-11-04 | 2023-04-18 | 公安部第三研究所 | Behavior detection robot based on low-power-consumption pattern recognition technology |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5983166A (en) * | 1995-09-28 | 1999-11-09 | Komatsu Ltd. | Structure measurement system |
JPH09300272A (en) * | 1996-05-14 | 1997-11-25 | Nippon Telegr & Teleph Corp <Ntt> | Robot control method and device thereof |
CN1309598A (en) * | 1998-07-15 | 2001-08-22 | Ce核电力有限公司 | Visual tube position verification system |
JP2000117672A (en) * | 1998-10-16 | 2000-04-25 | Xerox Corp | Dynamic reconstitution method and device for switch connection between module |
US6233502B1 (en) * | 1998-10-16 | 2001-05-15 | Xerox Corporation | Fault tolerant connection system for transiently connectable modular elements |
CN1343551A (en) * | 2000-09-21 | 2002-04-10 | 上海大学 | Hierarchical modular model for robot's visual sense |
CN1417006A (en) * | 2001-11-09 | 2003-05-14 | 中国科学院自动化研究所 | Vision controlling platform for opened industrial robot |
CN1573628A (en) * | 2003-05-29 | 2005-02-02 | 发那科株式会社 | Robot system |
CN1472047A (en) * | 2003-06-26 | 2004-02-04 | 上海交通大学 | Open network robot general control system |
Non-Patent Citations (1)
Title |
---|
Research on a robot surgical simulation and training system based on virtual reality technology. Lü Hongbo, Wang Tianmiao, Liu Da, Hu Lei, Tang Zesheng, Shen Hao, Tian Zengmin. High Technology Letters, No. 11, 2001. * |
Also Published As
Publication number | Publication date |
---|---|
CN1840298A (en) | 2006-10-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106527177B (en) | The multi-functional one-stop remote operating control design case of one kind and analogue system and method | |
CN103302668B (en) | Based on control system and the method thereof of the Space teleoperation robot of Kinect | |
Pan et al. | Augmented reality-based robot teleoperation system using RGB-D imaging and attitude teaching device | |
CN107671857B (en) | 3D Simulation Platform for Service Robot Operation Demonstration and Algorithm Verification | |
Naceri et al. | Towards a virtual reality interface for remote robotic teleoperation | |
CN102120325B (en) | Novel remote operation far-end robot control platform and method | |
CN110216674B (en) | A Redundant Degree-of-Freedom Manipulator Vision Servo Obstacle Avoidance System | |
CN109434870A (en) | A kind of virtual reality operation system for robot livewire work | |
CN110480657A (en) | A kind of labyrinth environment space robot world remote control system | |
CN206326605U (en) | A kind of intelligent teaching system based on machine vision | |
CN101587329A (en) | Robot predicting method and system | |
Baier et al. | Distributed PC-based haptic, visual and acoustic telepresence system-experiments in virtual and remote environments | |
KR100756345B1 (en) | Robot simulation system using network | |
CN110977981A (en) | Robot virtual reality synchronization system and synchronization method | |
Hashimoto et al. | TouchMe: An Augmented Reality Interface for Remote Robot Control. | |
CN100389013C (en) | Reconstruction method of humanoid robot operation scene based on multi-information fusion | |
US20220101477A1 (en) | Visual Interface And Communications Techniques For Use With Robots | |
Duan et al. | Morphology agnostic gesture mapping for intuitive teleoperation of construction robots | |
CN109213306A (en) | A kind of robot remote control platform and its design method | |
Su et al. | Development of an optical tracking based teleoperation system with virtual reality | |
Kobayashi et al. | Overlay what humanoid robot perceives and thinks to the real-world by mixed reality system | |
Schwenk et al. | An immersive spatially consistent multi-modal augmented virtuality human-machine interface for telerobotic systems | |
JP7390405B2 (en) | Methods and systems for testing robotic systems in integrated physical and simulated environments | |
Zhang et al. | A visual tele-operation system for the humanoid robot BHR-02 | |
Vu et al. | Investigation of Annotation-assisted User Performance in Virtual Reality-based Remote Robot Control |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20080521 Termination date: 20110401 |