CN103991077B - A shared control method for robot hand controllers based on force fusion - Google Patents
A shared control method for robot hand controllers based on force fusion
- Publication number
- CN103991077B · CN201410056953.XA · CN201410056953A
- Authority
- CN
- China
- Prior art keywords
- robot
- force
- hand controller
- control
- virtual
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related
Landscapes
- Manipulator (AREA)
Abstract
The present invention proposes a shared control method for hand controllers based on force fusion. The steps are as follows. First, guided by the target object, the operator's own intelligent decision-making drives the hand controller, which in turn controls the robot's motion. At the same time, stereo vision acquires image data of the work scene and reconstructs the three-dimensional environment in real time; guided by the target object, a virtual guidance force is constructed that draws the robot toward the target autonomously, and this virtual force guides the hand controller in controlling the robot's motion. The operator's manipulation force is then fused with the virtual guidance force of the target object, and the fused force controls the motion of both the hand controller and the robot, so that machine intelligence and human intelligence jointly steer the robot toward the target object and complete the task. The invention effectively combines human intelligence with machine intelligence and ensures that the operation proceeds safely and quickly.
Description
Technical Field
The present invention relates to robot control methods, and in particular to a shared control method for hand controllers based on force fusion.
Background Art
Robots play an increasingly important role in completing tasks in extreme environments such as high temperature, high pressure, and strong radiation. Robots are developing toward greater intelligence; however, constrained by the current state of control, sensing, and artificial intelligence, a fully autonomous robot for changing environments remains out of reach in the short term. The control terminal, or hand controller for short, is the bridge for human-machine interaction between the operator and the robot, and it must offer high control efficiency and real-time performance. For the foreseeable future, robot operation will still mainly be controlled in real time through the robot's operating terminal.
However, in today's complex and high-risk environments, the previous control methods for robot hand controllers still leave many problems to be solved, both for the operator and for the operation as a whole.
First, whether a robot task succeeds, and how efficiently, depends too heavily on the operator's skill and proficiency. Because the hand controller differs greatly from the robot in structure, form, and size, the operator must often make repeated pose adjustments during remote control, which inevitably makes robot operation inefficient. Moreover, sustained operation under intense concentration readily causes psychological stress and mental fatigue in the operator.
Second, given the complexity of the working environment and the tasks, most current operating methods struggle to meet the control requirements of real operations. Because these methods cannot obtain depth information about the environment, the operator must infer the relative pose of the robot and the target mainly from two-dimensional images and then decide how to manipulate the hand controller, which inevitably increases the operator's workload. Furthermore, because of their large data volume, images inevitably suffer considerable lag in long-distance transmission, and the erroneous operations this causes occur frequently.
Third, most existing control terminals adopt unilateral position-servo or bilateral force-feedback servo control strategies and cannot provide the operator with comprehensive, predictive information feedback. Experience shows that when operating a robot through a hand controller, the operator cares most about the pose of the robot end relative to the task target and obstacles, so as to bring the robot to the target quickly and safely. Displacement and force sensors provide no such information during this phase of operation.
Finally, shared control combines the advantages of the operator's manual control and the robot's autonomous control by coordinating the human and the autonomous control system so that they control the robot jointly. Although shared control has been widely applied where the environment is unpredictable or time delays make control signals unreliable, existing shared-control methods generally suffer from numerous sensing signals and complex control systems. Moreover, whether in degree-of-freedom partitioning and fusion (the human and the robot each control different degrees of freedom rather than jointly controlling the same one) or in vision-based task decomposition (the human first manually drives the robot to a specified position, after which the robot completes the task autonomously), the operator's intelligence and the robot's intelligence remain relatively independent in the control loop; their degree of fusion is low, making it hard to adapt to complex, changeable working environments.
Summary of the Invention
In view of the above problems of operability and safety in the field of robot manipulation, the present invention proposes a shared control method for robot hand controllers based on force fusion.
Technical solution adopted by the present invention: the invention studies a human-machine shared control method that fuses the manipulation force on the robot hand controller with a virtual guidance force. The virtual guidance force provides "prediction" and "guidance" in robot motion control, while the operator's manipulation force is an output of high-level human decision-making. With a suitable control method that reasonably assigns weights to the operator's manual participation and the machine's autonomous participation, the robot's autonomous task planning is effectively combined with manual control, thereby integrating human intelligence with machine intelligence. The specific method comprises the following steps:
a. Through the hand-controller operating terminal and guided by the target object, the operator's own intelligent decision-making produces manipulation-force signals that control the robot's motion;
b. Stereo vision acquires image data of the work scene and reconstructs the three-dimensional environment in real time; guided by the target object, a virtual guidance force (comprising attraction, repulsion, and an image servo force) is constructed to draw the robot toward the target, and this virtual guidance force realizes locally autonomous control of the robot;
c. The operator's manual control force of step a is fused with the virtual guidance force of the target object of step b, and the fused force controls the motion of the hand controller and the robot, so that machine intelligence and human intelligence jointly steer the robot toward the target object and complete the task.
Step a, in which the hand-controller operating terminal, guided by the target object, uses the operator's own intelligent decision-making to control the robot's motion through manipulation-force signals, is realized in the following specific manner:
1. Build a robot motion control platform based on a force closed loop;
2. A computer acquires the operator's manipulation-force signals and transmits the data in preparation for system control of the hand controller's motion;
3. Using a mass-damper control strategy, control the motion of the hand controller in real time according to the hand-controller control-system dynamics model

$$M\ddot{q}_D + B\dot{q}_D = J^{T}(F_h - F_e)$$

where M and B are the inertia and damping matrices of the hand controller, J is the Jacobian of the hand controller, $F_h$ is the manipulation force applied by the operator, $F_e$ is the contact force between the robot and the environment, and $q_D$ is the desired motion pose of the hand controller;
4. Transmit the motion-pose signals of the hand controller to each motion joint of the robot so that the robot's motion is synchronized with the hand controller's; guided by the target object, the operator's own intelligent decision-making thus controls the robot's motion through manipulation-force signals.
Step b, which acquires work-scene image data through stereo vision, reconstructs the scene environment in real time, constructs a target-oriented virtual guidance force drawing the robot toward the target object, and uses this virtual force to guide the hand controller in controlling the robot's motion, is realized as follows:
1. Acquire work-scene image data through stereo vision, reconstruct the three-dimensional environment in real time, and compute the desired pose of the robot under the guidance of the target object, as follows:
1) The environment camera acquires the background image in real time and computes the spatial coordinates of each of its pixels, from which an elevation data model of the background environment (obstacles) is generated;
2) Extract the work-object region from the color and texture features of each pixel in the image acquired by the hand camera, and then construct the work-object image features;
3) From the results of image matching and camera calibration, compute the spatial coordinates of each pixel in the work-object region;
4) Construct a bounding-box model of the work object; taking the axis direction of the work object's OBB bounding box obtained by visual computation as the desired pose direction, compute the desired pose of the robot end;
2. From the acquired work-scene image data and the reconstructed three-dimensional environment, compute the virtual attraction, repulsion, and image servo forces exerted on the robot end by the work object and obstacles:
5) Taking the offset between the current and desired poses of the robot end as the parameter, and on the basis of the acquired work-object features and per-pixel spatial coordinates, compute the virtual attraction $F_\zeta$ exerted on the robot by the work object from the force relation of an object in an attractive field,

$$F_\zeta = -\alpha(\gamma - \gamma^{*})$$

where $\alpha$ is the attraction coefficient, and $\gamma$ and $\gamma^{*}$ are the current and desired pose vectors of the robot end, respectively;
6) Taking the offset between the current and desired poses of the robot end as the parameter, and combining it with the elevation information of the background environment (or obstacles) measured by the environment camera, solve for the distance from the robot end, along its velocity direction, to the background environment, and compute the virtual repulsion $F_\gamma$ exerted on the robot by obstacles from the force relation of an object in a repulsive field,

$$F_\gamma = \begin{cases} k\left(\dfrac{1}{\rho} - \dfrac{1}{\rho_0}\right)\dfrac{1}{\rho^{2}}(-\hat{v}), & \rho \le \rho_0 \\ 0, & \rho > \rho_0 \end{cases}$$

where k is a scale factor, $\rho$ is the distance from the robot end to the background environment or obstacle along its velocity direction, $\rho_0$ is the set safety distance between the robot end and the obstacle or environment, and $\hat{v}$ is the unit vector of the robot end's velocity;
7) To keep the work object in the image near the center of the camera's field of view, construct an image-based virtual servo force with the image feature-point coordinates of the work object as parameters,

$$F_v = -M L^{+}\left(\frac{\partial V}{\partial s}\right)^{T}$$

where V is the image-based repulsive-field potential function, $(\partial V/\partial s)^{T}$ is the transpose of the gradient vector of the potential function V at s, $s = (u, v)$ is the current image feature point, $s^{*} = (u^{*}, v^{*})$ is the desired image feature point, M is the Jacobian between the hand-camera velocity and the change of the robot end pose, and $L^{+} = (L^{T}L)^{-1}L^{T}$ is the generalized inverse of the image Jacobian relating the work-object image feature-point velocity to the hand-camera velocity;
3. Synthesize the virtual attraction, repulsion, and image servo forces exerted on the robot end by the target object and obstacles obtained in item 2 above, and map them onto the motion joints of the robot hand controller:
8) Synthesize the work-object attraction, the obstacle repulsion, and the image servo force as

$$F = \alpha F_\zeta + \beta F_\gamma + \gamma F_v = \left[{}^{H}F^{T}\ \ {}^{H}M^{T}\right]^{T}$$

where $\alpha$, $\beta$, $\gamma$ are the influence coefficients of the different virtual forces, and ${}^{H}F$ and ${}^{H}M$ are the force and moment vectors, respectively;
9) Map the synthesized virtual guidance force in the robot workspace onto each joint of the hand controller, computing the virtual guidance force/torque $\tau_q$ in the joint coordinate space of the hand controller as

$$\tau_q = J^{T}\begin{bmatrix} R & 0 \\ 0 & R \end{bmatrix}\begin{bmatrix} {}^{H}F \\ {}^{H}M \end{bmatrix}$$

where R is the rotation matrix of the robot end-effector coordinate frame relative to the hand-controller coordinate frame, and J is the Jacobian of the hand controller.
4. Construct the control model of the hand controller as a mass-damper system, and apply the virtual torque obtained in item 3 above to the "predictive" control of the hand controller's pose:
10) Under the virtual force/torque $\tau_q$, each joint of the hand controller produces the desired motion trend $q_D$; simplifying the hand-controller joints as a mass-damper system, design the position controller of the hand controller according to the hand-controller control-system dynamics model

$$M\ddot{q}_D + B\dot{q}_D = \tau_q$$
11) The position prediction of the hand controller drives the robot in a corresponding follow-up motion, so that, guided by the target object, the hand controller controls the robot's motion under the guidance of the virtual force.
The fusion, described in step c, of the operator's manipulation force with the virtual guidance force of the target object is realized as follows:
1) With reasonable weight coefficients, the virtual force/torque $\tau_q$ at each joint of the hand controller, the manipulation force $F_h$, and the robot-environment contact force $F_e$ act jointly on the mass-damper control-system model of the hand controller, which then produces the desired motion trend $q_D$; design the position controller of the hand controller according to the hand-controller control-system dynamics model

$$M\ddot{q}_D + B\dot{q}_D = k_1\tau_q + k_2 J^{T}(F_h - F_e)$$

where $k_1$ and $k_2$ are the weight coefficients of the hand controller's autonomous control and of the operator's manual control, respectively.
2) Path planning under the guidance of the virtual force lets the robot approach the target object rapidly in the initial stage; in the final control stage the operator's force on the hand controller plays the decisive role, adjusting the robot's pose in real time, so that shared control under the joint action of machine intelligence and human intelligence is achieved.
Compared with the prior art, the present invention has the following advantages and technical effects:
1. It proposes a novel human-machine shared control method in which the visual perception of the environment, expressed as a virtual force, is fused with the operator's manipulation force, organically integrating the robot's autonomous task planning with manual control;
2. It lowers the proficiency required of the operator and, to a certain extent, lets the robot guide the operator's manual actions;
3. On top of automatic navigation toward the target object, the operator can make auxiliary manual adjustments, ensuring a safer and more accurate operation;
4. It predicts possible collisions between the robot and the environment, effectively safeguarding the robot operating system.
Brief Description of the Drawings
Fig. 1 shows the composition of the control system for the method of the present invention.
Detailed Description of Embodiments
The present invention is described in further detail below with reference to an embodiment and the accompanying drawing, but the embodiments of the present invention are not limited to this example.
Referring to Fig. 1, a shared control method for hand controllers based on force fusion comprises the following steps:
a. Through the hand-controller operating terminal and guided by the target object, the operator's own intelligent decision-making produces manipulation-force signals that control the robot's motion, realized in the following specific manner:
1. Build a robot motion control platform based on a force closed loop;
2. A computer acquires the operator's manipulation-force signals and transmits the data in preparation for system control of the hand controller's motion;
3. Using a mass-damper control strategy, control the motion of the hand controller in real time according to the hand-controller control-system dynamics model

$$M\ddot{q}_D + B\dot{q}_D = J^{T}(F_h - F_e)$$

where M and B are the inertia and damping matrices of the hand controller, J is the Jacobian of the hand controller, $F_h$ is the manipulation force applied by the operator, $F_e$ is the contact force between the robot and the environment, and $q_D$ is the desired motion pose of the hand controller (an illustrative numerical sketch of this update follows step 4 below);
4. Transmit the motion-pose signals of the hand controller to each motion joint of the robot so that the robot's motion is synchronized with the hand controller's; guided by the target object, the operator's own intelligent decision-making thus controls the robot's motion through manipulation-force signals.
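The mass-damper update above can be made concrete with a short numerical sketch. The following Python fragment is illustrative only: the matrices M, B, J, the two-joint dimensions, and the time step dt are assumed placeholder values, not parameters disclosed by the patent. It integrates the model with an explicit Euler step to produce the desired pose $q_D$ sent to the robot joints:

```python
import numpy as np

def hand_controller_step(q, q_dot, J, F_h, F_e, M, B, dt=0.001):
    """One explicit-Euler step of M*q_ddot + B*q_dot = J^T (F_h - F_e)."""
    tau = J.T @ (F_h - F_e)                       # generalized joint force
    q_ddot = np.linalg.solve(M, tau - B @ q_dot)  # solve for acceleration
    q_dot = q_dot + q_ddot * dt
    q = q + q_dot * dt                            # desired pose q_D
    return q, q_dot

# Illustrative two-joint example with assumed values
M = np.diag([1.0, 1.0]); B = np.diag([5.0, 5.0]); J = np.eye(2)
q, q_dot = np.zeros(2), np.zeros(2)
q, q_dot = hand_controller_step(q, q_dot, J, np.array([1.0, 0.0]),
                                np.zeros(2), M, B)
```

A larger damping matrix B makes the hand controller feel more viscous; in a real implementation the same loop runs at the servo rate.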
b. Stereo vision acquires work-scene image data and reconstructs the three-dimensional environment in real time; guided by the target object, a virtual guidance force (comprising attraction, repulsion, and an image servo force) is constructed to draw the robot toward the target, and this virtual guidance force guides the hand controller in controlling the robot's motion;
This is realized as follows:
1. Acquire work-scene image data through stereo vision, reconstruct the three-dimensional environment in real time, and compute the desired pose of the robot under the guidance of the target object, as follows:
1) The environment camera acquires the background image in real time and computes the spatial coordinates of each of its pixels, from which an elevation data model of the background environment (obstacles) is generated;
2) Extract the work-object region from the color and texture features of each pixel in the image acquired by the hand camera, and then construct the work-object image features;
3) From the results of image matching and camera calibration, compute the spatial coordinates of each pixel in the work-object region;
4) Construct a bounding-box model of the work object; taking the axis direction of the work object's OBB bounding box obtained by visual computation as the desired pose direction, compute the desired pose of the robot end (an illustrative sketch of this computation follows below);
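As a concrete illustration of item 4), one common way to obtain the OBB axis is principal component analysis of the reconstructed point cloud. The sketch below is an assumption about such an implementation, not the patent's prescribed algorithm; treating the largest-variance eigenvector as the desired approach direction is an illustrative convention:

```python
import numpy as np

def obb_axis_pose(points):
    """Estimate the centroid and the principal (longest) OBB axis of the
    work-object point cloud; the axis serves as the desired pose direction
    of the robot end."""
    points = np.asarray(points, dtype=float)   # (N, 3) spatial coordinates
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)        # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    axis = eigvecs[:, -1]                      # direction of largest variance
    return centroid, axis
```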
2. From the acquired work-scene image data and the reconstructed three-dimensional environment, compute the virtual attraction, repulsion, and image servo forces exerted on the robot end by the work object and obstacles:
5) Taking the offset between the current and desired poses of the robot end as the parameter, and on the basis of the acquired work-object features and per-pixel spatial coordinates, compute the virtual attraction $F_\zeta$ exerted on the robot by the work object from the force relation of an object in an attractive field,

$$F_\zeta = -\alpha(\gamma - \gamma^{*})$$

where $\alpha$ is the attraction coefficient, and $\gamma$ and $\gamma^{*}$ are the current and desired pose vectors of the robot end, respectively;
6) Taking the offset between the current and desired poses of the robot end as the parameter, and combining it with the elevation information of the background environment (or obstacles) measured by the environment camera, solve for the distance from the robot end, along its velocity direction, to the background environment, and compute the virtual repulsion $F_\gamma$ exerted on the robot by obstacles from the force relation of an object in a repulsive field,

$$F_\gamma = \begin{cases} k\left(\dfrac{1}{\rho} - \dfrac{1}{\rho_0}\right)\dfrac{1}{\rho^{2}}(-\hat{v}), & \rho \le \rho_0 \\ 0, & \rho > \rho_0 \end{cases}$$

where k is a scale factor, $\rho$ is the distance from the robot end to the background environment or obstacle along its velocity direction, $\rho_0$ is the set safety distance between the robot end and the obstacle or environment, and $\hat{v}$ is the unit vector of the robot end's velocity;
7) To keep the work object in the image near the center of the camera's field of view, construct an image-based virtual servo force with the image feature-point coordinates of the work object as parameters,

$$F_v = -M L^{+}\left(\frac{\partial V}{\partial s}\right)^{T}$$

where V is the image-based repulsive-field potential function, $(\partial V/\partial s)^{T}$ is the transpose of the gradient vector of the potential function V at s, $s = (u, v)$ is the current image feature point, $s^{*} = (u^{*}, v^{*})$ is the desired image feature point, M is the Jacobian between the hand-camera velocity and the change of the robot end pose, and $L^{+} = (L^{T}L)^{-1}L^{T}$ is the generalized inverse of the image Jacobian relating the work-object image feature-point velocity to the hand-camera velocity (a combined sketch of items 5) to 7) follows below);
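The three virtual actions of items 5) to 7) can be sketched together as below. This is a minimal illustration under stated assumptions: the repulsion uses the classical potential-field form matching the variables named above, the image potential is taken as the quadratic $V = \tfrac{1}{2}\|s - s^{*}\|^{2}$, and s stacks at least three feature points so that $L^{T}L$ is invertible; none of the coefficient values are disclosed by the patent:

```python
import numpy as np

def attraction(gamma, gamma_star, alpha=1.0):
    """F_zeta = -alpha*(gamma - gamma*): draws the robot end to the goal pose."""
    return -alpha * (np.asarray(gamma) - np.asarray(gamma_star))

def repulsion(rho, rho_0, v_hat, k=1.0):
    """Potential-field repulsion acting against the motion direction v_hat,
    active only within the safety distance rho_0."""
    v_hat = np.asarray(v_hat, dtype=float)
    if rho > rho_0:
        return np.zeros_like(v_hat)
    return k * (1.0 / rho - 1.0 / rho_0) / rho**2 * (-v_hat)

def image_servo_force(s, s_star, M_cam, L):
    """Servo force from the quadratic image potential; uses the generalized
    inverse L+ = (L^T L)^-1 L^T of the stacked image Jacobian L."""
    grad_V = np.asarray(s) - np.asarray(s_star)   # dV/ds for quadratic V
    L_plus = np.linalg.inv(L.T @ L) @ L.T
    return -(M_cam @ (L_plus @ grad_V))
```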
3. Synthesize, according to their respective influences, the virtual attraction, repulsion, and image servo forces exerted on the robot end by the target object and obstacles obtained in item 2 above, and map them onto the motion joints of the robot hand controller (a sketch of this synthesis and mapping follows item 9) below):
8) Synthesize the work-object attraction, the obstacle repulsion, and the image servo force as

$$F = \alpha F_\zeta + \beta F_\gamma + \gamma F_v = \left[{}^{H}F^{T}\ \ {}^{H}M^{T}\right]^{T}$$

where $\alpha$, $\beta$, $\gamma$ are the influence coefficients of the different virtual forces, and ${}^{H}F$ and ${}^{H}M$ are the force and moment vectors, respectively;
9) Map the synthesized virtual guidance force in the robot workspace onto each joint of the hand controller, computing the virtual guidance force/torque $\tau_q$ in the joint coordinate space of the hand controller as

$$\tau_q = J^{T}\begin{bmatrix} R & 0 \\ 0 & R \end{bmatrix}\begin{bmatrix} {}^{H}F \\ {}^{H}M \end{bmatrix}$$

where R is the rotation matrix of the robot end-effector coordinate frame relative to the hand-controller coordinate frame, and J is the Jacobian of the hand controller.
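Items 8) and 9) combine the three task-space actions and project the result onto the hand-controller joints. Below is a sketch following the formulas above; the block-diagonal use of the rotation R on the force and moment parts, and the unit coefficients, are assumptions:

```python
import numpy as np

def synthesize(F_attr, F_rep, F_servo, a=1.0, b=1.0, c=1.0):
    """Weighted sum of the three 6-vector wrenches [F^T, M^T]^T."""
    return a * F_attr + b * F_rep + c * F_servo

def map_to_hand_controller(F_task, R, J):
    """tau_q = J^T * blkdiag(R, R) * F_task: rotate the task-space wrench
    into the hand-controller frame, then project it onto the joint space."""
    RR = np.block([[R, np.zeros((3, 3))],
                   [np.zeros((3, 3)), R]])
    return J.T @ (RR @ F_task)
```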
4. Construct the control model of the hand controller as a mass-damper system, and apply the virtual torque obtained in item 3 above to the "predictive" control of the hand controller's pose:
10) Under the virtual force/torque $\tau_q$, each joint of the hand controller produces the desired motion trend $q_D$; simplifying the hand-controller joints as a mass-damper system, design the position controller of the hand controller according to the hand-controller control-system dynamics model

$$M\ddot{q}_D + B\dot{q}_D = \tau_q$$
11) The position prediction of the hand controller drives the robot in a corresponding follow-up motion, so that, guided by the target object, the hand controller controls the robot's motion under the guidance of the virtual force.
c. The operator's manipulation force of step a is fused with the virtual guidance force of the target object of step b, and the fused force controls the motion of the hand controller and the robot, so that machine intelligence and human intelligence jointly steer the robot toward the target object and complete the task. This is realized as follows (an illustrative sketch of the fused update follows this list):
1) With reasonable weight coefficients, the virtual force/torque $\tau_q$ at each joint of the hand controller, the manipulation force $F_h$, and the robot-environment contact force $F_e$ act jointly on the mass-damper control-system model of the hand controller, which then produces the desired motion trend $q_D$; design the position controller of the hand controller according to the hand-controller control-system dynamics model

$$M\ddot{q}_D + B\dot{q}_D = k_1\tau_q + k_2 J^{T}(F_h - F_e)$$

where $k_1$ and $k_2$ are the weight coefficients of the hand controller's autonomous control and of the operator's manual control, respectively.
2) Path planning under the guidance of the virtual force lets the robot approach the target object rapidly in the initial stage; in the final control stage the operator's force on the hand controller plays the decisive role, adjusting the robot's pose in real time, so that shared control under the joint action of machine intelligence and human intelligence is achieved.
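A minimal sketch of the fused update of item 1), with an illustrative scheduling of the weights in the spirit of item 2): $k_1$ (machine guidance) dominates while the end is far from the target, and $k_2$ (operator force) takes over for final fine positioning. The distance-based schedule and all constants are assumptions, not values disclosed by the patent:

```python
import numpy as np

def shared_control_step(q, q_dot, tau_q, F_h, F_e, J, M, B,
                        dist_to_target, d_switch=0.2, dt=0.001):
    """Euler step of M*q_ddot + B*q_dot = k1*tau_q + k2*J^T(F_h - F_e)."""
    # Illustrative weight schedule: autonomy far away, operator up close.
    if dist_to_target > d_switch:
        k1, k2 = 1.0, 0.2
    else:
        k1, k2 = 0.2, 1.0
    tau = k1 * tau_q + k2 * (J.T @ (F_h - F_e))
    q_ddot = np.linalg.solve(M, tau - B @ q_dot)
    q_dot = q_dot + q_ddot * dt
    return q + q_dot * dt, q_dot
```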
The above embodiment is a preferred implementation of the present invention, but embodiments of the present invention are not limited to it; any change, modification, substitution, combination, or simplification made without departing from the spirit and principles of the present invention shall be an equivalent replacement and falls within the protection scope of the present invention.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410056953.XA CN103991077B (en) | 2014-02-19 | 2014-02-19 | A shared control method for robot hand controllers based on force fusion |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410056953.XA CN103991077B (en) | 2014-02-19 | 2014-02-19 | A shared control method for robot hand controllers based on force fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103991077A CN103991077A (en) | 2014-08-20 |
CN103991077B true CN103991077B (en) | 2015-12-09 |
Family
ID=51305546
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410056953.XA Expired - Fee Related CN103991077B (en) | 2014-02-19 | 2014-02-19 | A shared control method for robot hand controllers based on force fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103991077B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105242533B (en) * | 2015-09-01 | 2017-11-28 | 西北工业大学 | A kind of change admittance remote operating control method for merging multi information |
TWI664573B (en) * | 2018-05-11 | 2019-07-01 | 國立交通大學 | Motion computing device, robot system and robot controlling method |
CN110355750B (en) * | 2018-10-29 | 2022-05-10 | 西北工业大学 | Interactive control method for hand-eye coordination in teleoperation |
CN110421557A (en) * | 2019-06-21 | 2019-11-08 | 国网安徽省电力有限公司淮南供电公司 | Environmental perspective perception and the safe early warning of distribution network live line work robot protect system and method |
CN111702757B (en) * | 2020-05-27 | 2021-08-17 | 华中科技大学 | Control method, device, computing device and storage medium based on operator's intention |
CN112596547B (en) * | 2020-12-22 | 2022-02-11 | 燕山大学 | Force sense guiding remote control method for quickly recovering helicopter in offshore operation |
CN115113517B (en) * | 2021-03-17 | 2025-05-27 | 腾讯科技(深圳)有限公司 | Bilateral force feedback method, device, equipment and medium |
CN113467461B (en) * | 2021-07-13 | 2022-04-01 | 燕山大学 | Man-machine cooperation type path planning method under mobile robot unstructured environment |
CN114131616B (en) * | 2021-12-28 | 2024-02-02 | 华中科技大学 | Three-dimensional virtual force field visual enhancement method applied to mechanical arm control |
CN117173240B (en) * | 2023-11-03 | 2024-02-06 | 天津信天电子科技有限公司 | AR auxiliary assembly method, device, equipment and medium for servo control driver |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101986219A (en) * | 2010-08-27 | 2011-03-16 | 东南大学 | Implementation method of force telepresence of telerobotics based on integration of virtual strength and real strength |
EP2617533A1 (en) * | 2012-01-17 | 2013-07-24 | Seiko Epson Corporation | Robot controller, robot system, robot control method |
CN102814815A (en) * | 2012-08-24 | 2012-12-12 | 广东工业大学 | Virtual engineering robot system and control method |
CN103105851A (en) * | 2012-11-07 | 2013-05-15 | 吉林大学 | Kinesthesis teaching control method based on vision sense for remote control of robot |
CN103514608A (en) * | 2013-06-24 | 2014-01-15 | 西安理工大学 | Movement target detection and extraction method based on movement attention fusion model |
Non-Patent Citations (1)
Title |
---|
Tang Xinxing, "Autonomous operation system of a teleoperated construction robot based on stereo vision," Transactions of the Chinese Society for Agricultural Machinery, vol. 43, no. 10, 2012-10-25, pp. 224-228 *
Also Published As
Publication number | Publication date |
---|---|
CN103991077A (en) | 2014-08-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103991077B (en) | A shared control method for robot hand controllers based on force fusion | |
US20210205986A1 (en) | Teleoperating Of Robots With Tasks By Mapping To Human Operator Pose | |
CN104460670B (en) | SCARA robot motion simulation and remote control system and method | |
CN103271784B (en) | Man-machine interactive manipulator control system and method based on binocular vision | |
CN109164829B (en) | Flying mechanical arm system based on force feedback device and VR sensing and control method | |
CN103105851B (en) | Kinesthesis teaching control method based on vision sense for remote control of robot | |
CN106313049A (en) | Somatosensory control system and control method for apery mechanical arm | |
CN115319764B (en) | Robot based on multi-mode fusion in complex limited environment and operation method | |
Luo et al. | Real time human motion imitation of anthropomorphic dual arm robot based on Cartesian impedance control | |
CN110744541A (en) | Vision-guided underwater mechanical arm control method | |
Le et al. | Remote control of excavator using head tracking and flexible monitoring method | |
CN113759901A (en) | Mobile robot autonomous obstacle avoidance method based on deep reinforcement learning | |
CN112634318A (en) | Teleoperation system and method for underwater maintenance robot | |
CN115469576A (en) | A Teleoperation System Based on Hybrid Mapping of Human-Robot Arm Heterogeneous Motion Space | |
CN110480657A (en) | A kind of labyrinth environment space robot world remote control system | |
Valenzuela-Urrutia et al. | Virtual reality-based time-delayed haptic teleoperation using point cloud data | |
Corsini et al. | Nonlinear model predictive control for human-robot handover with application to the aerial case | |
CN110039561A (en) | Hot line robot remote operating staff training system and method based on cloud | |
Li et al. | Unleashing mixed-reality capability in deep reinforcement learning-based robot motion generation towards safe human–robot collaboration | |
CN115157261A (en) | Human-computer interaction device and method for teleoperation of flexible manipulator based on mixed reality | |
CN116197899A (en) | Active robot teleoperation system based on VR | |
Materna et al. | Teleoperating assistive robots: a novel user interface relying on semi-autonomy and 3D environment mapping | |
Adetunji et al. | Digital twins below the surface: Enhancing underwater teleoperation | |
Wang et al. | Investigation on target point approaching control of bionic robotic fish in static flow | |
CN113467461B (en) | Man-machine cooperation type path planning method under mobile robot unstructured environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20151209; Termination date: 20180219 |