
CN104647390B - Multi-camera joint active target-tracking method for manipulator teleoperation - Google Patents

Multi-camera joint active target-tracking method for manipulator teleoperation Download PDF

Info

Publication number
CN104647390B
CN104647390B (application number CN201510072044.XA)
Authority
CN
China
Prior art keywords
camera
particle
coordinate system
coordinates
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510072044.XA
Other languages
Chinese (zh)
Other versions
CN104647390A (en)
Inventor
孙富春
刘华平
滕辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201510072044.XA priority Critical patent/CN104647390B/en
Publication of CN104647390A publication Critical patent/CN104647390A/en
Application granted granted Critical
Publication of CN104647390B publication Critical patent/CN104647390B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)

Abstract

本发明涉及一种用于机械臂遥操作的多摄像机联合主动跟踪目标的方法,属于机械臂遥操作领域。本发明在遥操作基础上,将多个摄像机安放在不同角度,对摄像机进行标定,采用粒子滤波和sift局部特征匹配进而实现联合主动跟踪,保证待跟踪目标始终处于视场中央,对机械臂各个角度进行监控,并避免因遮挡等因素引起的跟踪失败,并加入人机交互,提高跟踪的鲁棒性。本发明方法在遥操作过程中,通过人机交互方式标注新的目标,使目标始终处在操作员的观察区域,或者更新目标,方便操作员制定下一步的控制策略。

The invention relates to a multi-camera joint active target-tracking method for the teleoperation of a manipulator, belonging to the field of manipulator teleoperation. On the basis of teleoperation, the invention places multiple cameras at different angles, calibrates them, and uses particle filtering together with SIFT local feature matching to realize joint active tracking, so that the target to be tracked always stays at the center of the field of view; the manipulator is monitored from multiple angles, tracking failures caused by occlusion and similar factors are avoided, and human-computer interaction is added to improve the robustness of tracking. During teleoperation, the method allows the operator to annotate a new target, or update the current one, through human-computer interaction, keeping the target within the operator's observation area and making it easier to formulate the next control strategy.

Description

用于机械臂遥操作的多摄像机联合主动跟踪目标的方法Multi-camera joint active tracking method for manipulator teleoperation

技术领域technical field

本发明涉及一种用于机械臂遥操作的多摄像机联合主动跟踪目标的方法,属于机械臂遥操作领域。The invention relates to a multi-camera joint active target-tracking method for manipulator teleoperation, belonging to the field of manipulator teleoperation.

背景技术Background technique

对于机械臂的遥操作,视觉系统是机械臂遥操作中的一个关键技术,其为操作臂提供目标物体的实时图像信息和空间位姿信息。同时视觉系统还将操作臂所处的状态、环境、以及工作阶段直观的反馈给地面操作员。For the teleoperation of the manipulator, the vision system is a key technology in the teleoperation of the manipulator, which provides real-time image information and space pose information of the target object for the manipulator. At the same time, the vision system also provides intuitive feedback to the ground operator on the state, environment, and working stage of the manipulator arm.

目前的遥操作方法中,摄像机提供了图像信息,并没有利用图像信息实现对机械臂工作过程中特定目标的主动跟踪,且摄像头都保持不变,不能主动调整视角,没有考虑机械臂运动超过摄像头视场而无法监控的情况。In current teleoperation methods, the cameras merely provide image information; that information is not used to actively track a specific target while the manipulator works, the cameras stay fixed and cannot actively adjust their viewing angle, and the case where the manipulator moves out of the camera's field of view and can no longer be monitored is not considered.

现有的技术文献中,发明专利“基于Kinect的空间遥操作机器人的控制系统及其方法”(公开号CN201310193564.7)利用Kinect实现三维环境建模,并对预测环境进行一致性校正。该方法的缺点是,只采用一台摄像机对机械臂区域进行拍摄,并且没有对目标实现跟踪;另外,该方法摄像头保持不动,无法确保机械臂始终在视场中。In the existing technical literature, the invention patent "Kinect-based control system and method for a space teleoperation robot" (publication number CN201310193564.7) uses a Kinect to build a three-dimensional model of the environment and applies consistency correction to the predicted environment. Its drawbacks are that only one camera photographs the manipulator's work area and the target is not tracked; in addition, the camera stays fixed, so the manipulator cannot be guaranteed to remain in the field of view.

发明内容Contents of the invention

本发明的目的是提出一种用于机械臂遥操作的多摄像机联合主动跟踪目标的方法,在遥操作基础上,为机械臂遥操作设计一套多摄像头主动视觉系统,以实时采集、传输图像,为操作员提供直观的视觉观测依据。The purpose of the present invention is to propose a multi-camera joint active target-tracking method for manipulator teleoperation: on the basis of teleoperation, a multi-camera active vision system is designed for the manipulator to acquire and transmit images in real time, providing the operator with an intuitive visual basis for observation.

本发明提出的用于机械臂遥操作的多摄像机联合主动跟踪目标的方法,包括以下步骤:The multi-camera joint active target-tracking method for manipulator teleoperation proposed by the present invention comprises the following steps:

(1)将多个摄像机置于机械臂的左上、右上和正面,对摄像机进行标定,分别获得多台摄像机的内参数矩阵M1、外参数矩阵M2、畸变系数和位姿变换矩阵T12, T23, T34……,具体步骤如下:(1) Place multiple cameras at the upper left, upper right and front of the manipulator and calibrate them, obtaining for each camera the intrinsic parameter matrix M1, the extrinsic parameter matrix M2, the distortion coefficients, and the pairwise pose transformation matrices T12, T23, T34, ...; the specific steps are as follows:

(1-1)设空间点P在摄像机坐标系中的坐标为P(xC, yC, zC),使空间点P沿着通过光心的光线投影到一个图像平面上,在图像平面中建立图像坐标系,得到该空间点P在图像坐标系中的投影坐标为P(x, y),将该投影坐标表示为计算机图像坐标系中的坐标P(u, v)。由小孔成像原理,空间点P在摄像机坐标系中的坐标(xC, yC, zC)与投影坐标P(x, y)的坐标变换关系为:x = f·xC/zC,y = f·yC/zC,其中f为摄像机焦距;(1-1) Let the coordinates of a spatial point P in the camera coordinate system be P(xC, yC, zC). Project P along the ray through the optical center onto an image plane, and establish an image coordinate system in that plane; the projection of P in the image coordinate system is P(x, y), which is expressed in the computer image coordinate system as P(u, v). From the pinhole imaging model, the camera-frame coordinates (xC, yC, zC) and the projected coordinates P(x, y) satisfy x = f·xC/zC, y = f·yC/zC, where f is the focal length of the camera;

(1-2)设定一个成像单元,该成像单元在上述图像坐标系中x轴和y轴方向上的物理尺寸分别为dx和dy,成像单元中任意一个像素在计算机图像坐标系下的坐标(u, v)与图像坐标系下的坐标(x, y)存在以下坐标变换关系:u = x/dx + u0,v = y/dy + v0,其中O(u0, v0)为位于摄像机主光轴上任意一点在计算机图像坐标系中的成像坐标,该成像坐标即主点坐标;(1-2) Define an imaging cell whose physical dimensions along the x- and y-axes of the image coordinate system are dx and dy. For any pixel of the imaging cell, the coordinates (u, v) in the computer image coordinate system and (x, y) in the image coordinate system are related by u = x/dx + u0, v = y/dy + v0, where O(u0, v0) is the imaging coordinate, in the computer image coordinate system, of any point on the camera's principal optical axis, i.e. the principal point;

(1-3)根据上述步骤(1-1)和步骤(1-2)的坐标变换关系,得到摄像机的内参数矩阵M1为:M1 = [f/dx, 0, u0; 0, f/dy, v0; 0, 0, 1],其中,f/dx为摄像机焦距f在摄像机图像平面坐标系u轴上的归一化焦距,f/dy为其在v轴上的归一化焦距;(1-3) From the coordinate transformations of steps (1-1) and (1-2), the intrinsic parameter matrix of the camera is M1 = [f/dx, 0, u0; 0, f/dy, v0; 0, 0, 1], where f/dx is the normalized focal length of f on the u-axis of the camera image plane coordinate system and f/dy is the normalized focal length on the v-axis;
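Steps (1-1) to (1-3) can be sketched as a short projection routine: pinhole projection onto the image plane followed by conversion to pixel coordinates via the intrinsic matrix M1. The focal length, cell sizes and principal point below are illustrative values, not calibration results.

```python
import numpy as np

f = 0.008              # focal length in metres (assumed)
dx, dy = 1e-5, 1e-5    # physical size of an imaging cell along x and y (assumed)
u0, v0 = 320.0, 240.0  # principal point in pixel coordinates (assumed)

# Intrinsic matrix M1 from step (1-3): normalized focal lengths f/dx, f/dy.
M1 = np.array([[f / dx, 0.0,    u0],
               [0.0,    f / dy, v0],
               [0.0,    0.0,    1.0]])

def project(p_cam):
    """Project a point (xC, yC, zC) in the camera frame to pixel (u, v)."""
    x_c, y_c, z_c = p_cam
    # Pinhole model x = f*xC/zC, y = f*yC/zC, then u = x/dx + u0,
    # v = y/dy + v0 -- equivalent to M1 @ p_cam divided by zC.
    uvw = M1 @ np.array([x_c, y_c, z_c])
    return uvw[:2] / uvw[2]

u, v = project((0.1, -0.05, 2.0))  # -> (360.0, 220.0)
```

Doing the division by zC last keeps the routine identical to the homogeneous form used in step (1-5).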

(1-4)设空间点P在世界坐标系中的坐标为(xW, yW, zW),空间点P在摄像机坐标系中的坐标与其在世界坐标系中的坐标存在以下关系:[xC; yC; zC] = [R t]·[xW; yW; zW; 1],其中R为3×3的单位正交旋转矩阵,t为摄像机坐标系和世界坐标系之间的三维平移向量,定义[R t]为摄像机外参数矩阵M2;(1-4) Let the coordinates of P in the world coordinate system be (xW, yW, zW); its camera-frame and world-frame coordinates are related by [xC; yC; zC] = [R t]·[xW; yW; zW; 1], where R is a 3×3 orthonormal rotation matrix and t is the 3-D translation vector between the camera and world coordinate systems; [R t] is defined as the extrinsic parameter matrix M2;

(1-5)根据上述步骤(1-3)和步骤(1-4)得到的摄像机内参数M1和外参数M2,空间点P在世界坐标系中的坐标与其在计算机图像坐标系中的投影坐标之间的投影变换关系为:zC·[u; v; 1] = M1·M2·[xW; yW; zW; 1];(1-5) Using the intrinsic parameters M1 from step (1-3) and the extrinsic parameters M2 from step (1-4), the projection from world coordinates to computer-image coordinates is zC·[u; v; 1] = M1·M2·[xW; yW; zW; 1];
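Steps (1-4) and (1-5) compose the extrinsic matrix M2 = [R t] with the intrinsic matrix M1 to map a world point to pixel coordinates. A minimal sketch, with illustrative (not calibrated) values for M1, R and t:

```python
import numpy as np

M1 = np.array([[800.0, 0.0, 320.0],   # intrinsic matrix (assumed values)
               [0.0, 800.0, 240.0],
               [0.0,   0.0,   1.0]])
R = np.eye(3)                         # rotation world -> camera (assumed)
t = np.array([[0.0], [0.0], [1.0]])   # translation (assumed)
M2 = np.hstack([R, t])                # 3x4 extrinsic matrix [R t]

def world_to_pixel(p_world):
    """Map (xW, yW, zW) to pixel (u, v) via zC*[u, v, 1]^T = M1*M2*[xW, yW, zW, 1]^T."""
    pw = np.append(np.asarray(p_world, float), 1.0)  # homogeneous coordinates
    uvw = M1 @ (M2 @ pw)                              # equals zC * [u, v, 1]^T
    return uvw[:2] / uvw[2]

u, v = world_to_pixel((0.2, 0.1, 1.0))  # -> (400.0, 280.0)
```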

(1-6)设摄像机的径向畸变参数为k1、k2和k3,使其满足以下方程组:x = xu(1 + k1·r² + k2·r⁴ + k3·r⁶),y = yu(1 + k1·r² + k2·r⁴ + k3·r⁶),其中(x, y)为空间点P在图像坐标系中的原始位置坐标,(xu, yu)为空间点P通过成像原理得到的理想位置坐标,r² = xu² + yu²;求解该方程组,得到摄像机的径向畸变参数k1、k2和k3;(1-6) Let k1, k2 and k3 be the radial distortion parameters of the camera, satisfying x = xu(1 + k1·r² + k2·r⁴ + k3·r⁶), y = yu(1 + k1·r² + k2·r⁴ + k3·r⁶), where (x, y) are the original (observed) coordinates of P in the image coordinate system, (xu, yu) are the ideal coordinates given by the imaging model, and r² = xu² + yu²; solving these equations yields k1, k2 and k3;

设摄像机的切向畸变参数为p1和p2,使其满足以下方程组:x = xu + 2p1·xu·yu + p2(r² + 2xu²),y = yu + 2p2·xu·yu + p1(r² + 2yu²),求解该方程组,得到摄像机的切向畸变参数p1和p2;Let p1 and p2 be the tangential distortion parameters of the camera, satisfying x = xu + 2p1·xu·yu + p2(r² + 2xu²), y = yu + 2p2·xu·yu + p1(r² + 2yu²); solving these equations yields p1 and p2;

(1-7)根据上述步骤(1-6)得到的径向畸变参数k1、k2、k3和切向畸变参数p1、p2,空间点P的原始位置坐标与理想位置坐标之间存在以下转换关系:(1-7) With the radial distortion parameters k1, k2, k3 and the tangential distortion parameters p1, p2 obtained in step (1-6), the original and ideal coordinates of P are related by:

x = xu(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2p1·xu·yu + p2(r² + 2xu²)
y = yu(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2p2·xu·yu + p1(r² + 2yu²);

空间点P的理想位置坐标与原始位置坐标之间存在以下转换关系:The inverse relation, from original coordinates back to ideal coordinates, is:

xu = [x − 2p1·x·y − p2(r² + 2x²)] / (1 + k1·r² + k2·r⁴ + k3·r⁶)
yu = [y − 2p2·x·y − p1(r² + 2y²)] / (1 + k1·r² + k2·r⁴ + k3·r⁶);

(1-8)遍历多个摄像机中的每个摄像机,重复步骤(1-1)~步骤(1-7),完成所有摄像机的标定;(1-8) Repeat steps (1-1) to (1-7) for each of the cameras, completing the calibration of every camera;

(1-9)设多个摄像机中第一个摄像机和第二个摄像机之间的旋转矩阵为R12,平移矩阵为Tran12,得到第一个摄像机和第二个摄像机之间的位姿变换矩阵T12:T12 = [R12 Tran12];(1-9) Let R12 be the rotation matrix and Tran12 the translation matrix between the first and second cameras; the pose transformation matrix between them is T12 = [R12 Tran12];
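Steps (1-9) and (1-10) can be sketched by stacking the relative rotation and translation into a homogeneous 4×4 transform, which makes pairwise transforms composable (T13 = T12·T23). The rotation and translation values below are illustrative only:

```python
import numpy as np

def make_T(R, tran):
    """Build a 4x4 pose transform from rotation R and translation tran."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = tran
    return T

Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])        # 90-degree rotation about z (assumed)
T12 = make_T(Rz90, [1.0, 0.0, 0.0])        # camera1 -> camera2 pose (assumed)
T23 = make_T(np.eye(3), [0.0, 2.0, 0.0])   # camera2 -> camera3 pose (assumed)

# Chaining the pairwise transforms gives the pose between any two cameras:
T13 = T12 @ T23
```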

(1-10)遍历多个摄像机中的任意两个摄像机,重复步骤(1-9),得到多个摄像机之间的位姿变换矩阵T12, T23, T34……;(1-10) Repeat step (1-9) for every pair of cameras to obtain the pose transformation matrices T12, T23, T34, ... between the cameras;

(2)使经过上述步骤(1)标定后的多台摄像机主动跟踪待抓取目标,具体包括以下步骤:(2) The cameras calibrated in step (1) then actively track the target to be grasped, as follows:

(2-1)将多个经过上述步骤(1)标定后的摄像机置于机械臂的左上、右上和正面,设待抓取目标中心S在摄像机成像平面Pd上的投影为S',计算S'与摄像机成像平面中心C之间的距离d,设定距离阈值th。对距离d进行判断:若d > th,则向摄像机发送调整指令,使摄像机镜头向使距离d减小的方向转动,直到d ≤ th;若d ≤ th,则摄像机镜头保持原位;(2-1) Place the cameras calibrated in step (1) at the upper left, upper right and front of the manipulator. Let S' be the projection of the target center S onto the camera imaging plane Pd, compute the distance d between S' and the center C of the imaging plane, and set a distance threshold th. If d > th, send the camera an adjustment command that turns the lens in the direction that reduces d, until d ≤ th; if d ≤ th, the lens keeps its current position;
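The centering test of step (2-1) reduces to a threshold decision on the pixel offset between the projected target center S' and the image center C. A minimal sketch, with illustrative pixel values and threshold (the mapping from pixel offset to a concrete pan/tilt command is hardware-specific and not shown):

```python
import math

def centering_command(s_proj, image_center, th):
    """Return None if the target is within th pixels of the image center,
    otherwise the (du, dv) pixel offset the pan/tilt should reduce."""
    du = s_proj[0] - image_center[0]
    dv = s_proj[1] - image_center[1]
    d = math.hypot(du, dv)          # distance d between S' and C
    if d <= th:
        return None                  # keep the lens where it is
    return (du, dv)                  # turn so that d decreases

cmd = centering_command((400.0, 300.0), (320.0, 240.0), th=30.0)
```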

(2-2)采用粒子滤波跟踪算法,多台摄像机联合主动跟踪待抓取目标,具体步骤如下:(2-2) Using a particle filter tracking algorithm, the cameras jointly and actively track the target to be grasped, as follows:

(2-2-1)使多台摄像机中的任意一台摄像机获取待抓取目标的视频图像序列,对视频图像序列中t0时刻的图像进行手工标注,标注出其中的待抓取目标区域;(2-2-1) One of the cameras acquires a video image sequence of the target to be grasped; the image at time t0 in the sequence is annotated by hand to mark the target region;

(2-2-2)以上述待抓取目标区域的中心为原点,产生一个粒子集{x^(m)},m = 1, ..., M,其中M为粒子集中的粒子个数,每个粒子代表一个待抓取目标可能存在的区域;设粒子集中粒子的运动服从正态分布,粒子集中的每个粒子独立传播,得到t时刻的粒子集{x_t^(m)}和t+1时刻的粒子集{x_(t+1)^(m)};(2-2-2) Taking the center of the target region as the origin, generate a particle set {x^(m)}, m = 1, ..., M, where M is the number of particles and each particle represents a region where the target may be. The particle motion is assumed normally distributed and each particle propagates independently, giving the particle set {x_t^(m)} at time t and {x_(t+1)^(m)} at time t+1;

(2-2-3)设t0时刻粒子集的参考直方图为q* = q*(n),n = 1, 2, ..., L,即参考直方图共有L个灰度级;设t时刻粒子集的颜色直方图为qt(x) = qt(n; x),x为粒子集中的粒子。对t时刻粒子集中的每个粒子独立传播后得到的t+1时刻粒子集进行观测,得到其中每个粒子区域的颜色直方图,并计算每个粒子区域的颜色直方图与参考直方图之间的巴氏距离D;定义粒子权值为ω,使ω = D,粒子数N的取值为200;(2-2-3) Let the reference histogram of the particle set at time t0 be q* = q*(n), n = 1, 2, ..., L, so the reference histogram has L gray levels, and let the color histogram of the particle set at time t be qt(x) = qt(n; x), where x is a particle of the set. After each particle of the time-t set propagates independently, observe the resulting time-(t+1) particle set, obtain the color histogram of each particle region, and compute the Bhattacharyya distance D between each particle region's color histogram and the reference histogram; the particle weight ω is defined by ω = D, and the number of particles N is set to 200;
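The histogram comparison of step (2-2-3) can be sketched as follows. The patent names the Bhattacharyya distance but its formula was lost in extraction, so the standard definition over L bins is used here; the histograms are synthetic stand-ins:

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Bhattacharyya distance between two histograms over the same L bins."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()                       # normalise both histograms
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))           # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))    # distance in [0, 1]

# Identical histograms are at distance 0; disjoint ones at distance 1.
d_same = bhattacharyya_distance([2, 1, 1], [2, 1, 1])
d_disjoint = bhattacharyya_distance([1, 0], [0, 1])
```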

(2-2-4)对上述粒子权值进行后验概率计算,得到t+1时刻粒子的存在概率期望值E(x_(t+1)),即以归一化权值对粒子状态加权求和,其中ω_(t+1)为t+1时刻每个粒子的权值;(2-2-4) Compute the posterior probability from the particle weights to obtain the expected value E(x_(t+1)) of the particles' existence probability at time t+1, i.e. the weighted sum of the particle states under the normalized weights, where ω_(t+1) is the weight of each particle at time t+1;
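Steps (2-2-2) and (2-2-4) together amount to propagating particles under Gaussian motion noise and taking the weighted mean as the state estimate. A minimal sketch with illustrative noise scales; in the full method the weights would come from the Bhattacharyya comparison of step (2-2-3), here uniform weights stand in for them:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 200                                    # number of particles (patent value)

# Particle states: 2-D region centers around an assumed initial target center.
particles = rng.normal(loc=[100.0, 80.0], scale=5.0, size=(M, 2))

# Each particle propagates independently under a normal motion model:
particles_next = particles + rng.normal(scale=3.0, size=(M, 2))

# Normalized weights (uniform placeholder for the histogram-based weights):
w = np.full(M, 1.0 / M)

# Posterior expectation E(x_{t+1}): weighted mean over the particle set.
estimate = (w[:, None] * particles_next).sum(axis=0)
```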

(2-2-5)将上述期望值E(x_(t+1))作为待抓取目标在t+1时刻的存在概率最优估计,并将粒子集中存在概率最优的粒子所覆盖区域的中心作为待抓取目标区域中心;(2-2-5) Take E(x_(t+1)) as the optimal estimate of the target's existence probability at time t+1, and take the center of the region covered by the particle with the optimal existence probability as the center of the target region;

(2-2-6)重复上述步骤(2-2-2)~步骤(2-2-5),得到待抓取目标从时刻t到下一时刻t+1的存在概率最优估计和待抓取目标区域中心;(2-2-6) Repeat steps (2-2-2) to (2-2-5) to obtain, from each time t to the next time t+1, the optimal estimate of the target's existence probability and the center of the target region;

(2-2-7)重复步骤(2-1),使摄像机镜头对准上述步骤(2-2-6)的待抓取目标区域中心;(2-2-7) Repeat step (2-1) to align the camera lens with the center of the target area to be captured in the above step (2-2-6);

(2-2-8)提取上述步骤(2-2-7)的待抓取目标区域的局部特征点;(2-2-8) Extract the local feature points of the target region obtained in step (2-2-7);

(2-2-9)使多台摄像机中的其他几台摄像机获取待抓取目标的视频图像序列,从多个视频图像序列中分别提取局部特征点;(2-2-9) The remaining cameras acquire video image sequences of the target, and local feature points are extracted from each of these sequences;

(2-2-10)将步骤(2-2-8)提取到的待抓取目标区域的局部特征点与步骤(2-2-9)提取到的所有局部特征点进行匹配,得到其他几台摄像机中待抓取目标区域的精确区域;(2-2-10) Match the local feature points of the target region extracted in step (2-2-8) against all local feature points extracted in step (2-2-9), obtaining the precise target region in each of the other cameras;

(2-2-11)重复上述步骤(2-2-2)~步骤(2-2-10),使每台摄像机分别跟踪各自得到的待抓取目标精确区域,实现多摄像机联合主动跟踪目标。(2-2-11) Repeat steps (2-2-2) to (2-2-10) so that each camera tracks the precise target region it has obtained, realizing joint active target tracking with multiple cameras.
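The cross-camera step (2-2-10) matches local feature descriptors from the tracking camera against those from the other cameras. The patent names SIFT features but not the matching rule; the nearest-neighbour search with Lowe's ratio test below is a common choice, and the 2-D descriptors are synthetic stand-ins for real 128-D SIFT vectors:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping a match only when it clearly beats the second-best (ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:   # best match is unambiguous
            matches.append((i, int(j1)))
    return matches

desc_a = np.array([[1.0, 0.0], [0.0, 1.0]])        # features from camera 1
desc_b = np.array([[0.9, 0.1], [0.0, 0.9], [5.0, 5.0]])  # features from camera 2
pairs = match_descriptors(desc_a, desc_b)
```

The matched pairs then localize the target region in the second camera's image, from which its centering loop of step (2-1) can take over.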

本发明提出的用于机械臂遥操作的多摄像机联合主动跟踪目标的方法,具有以下优点:The multi-camera joint active tracking method for manipulator remote operation proposed by the present invention has the following advantages:

1、本发明的用于机械臂遥操作的多摄像机联合主动跟踪方法,采用多个摄像机,实现对待抓取目标的主动跟踪。1. The multi-camera joint active tracking method for manipulator teleoperation of the present invention uses multiple cameras to realize active tracking of the target to be grasped.

2、本发明中的多个摄像机,安放在不同角度,进行联合主动跟踪,使被跟踪目标始终处于镜头中心,对机械臂各个角度进行监控,并避免因遮挡、背景混乱等因素引起的跟踪失败。2. The multiple cameras of the present invention are placed at different angles and perform joint active tracking, keeping the tracked target at the center of each lens, monitoring the manipulator from every angle, and avoiding tracking failures caused by occlusion, background clutter and similar factors.

3、本发明采用了粒子滤波跟踪算法和SIFT特征点匹配,实现多个摄像机联合跟踪,提高了摄像机跟踪的鲁棒性。3. The present invention adopts a particle filter tracking algorithm and SIFT feature point matching to realize joint tracking with multiple cameras, improving the robustness of camera tracking.

附图说明Description of drawings

图1是本发明方法中摄像机成像原理示意图。Fig. 1 is a schematic diagram of camera imaging principle in the method of the present invention.

图2是本发明方法中摄像机控制模型示意图。Fig. 2 is a schematic diagram of the camera control model in the method of the present invention.

具体实施方式detailed description

本发明提出的用于机械臂遥操作的多摄像机联合主动跟踪目标的方法,包括以下步骤:The multi-camera joint active target-tracking method for manipulator teleoperation proposed by the present invention comprises the following steps:

(1)将多个摄像机置于机械臂的左上、右上和正面,对摄像机进行标定,分别获得多台摄像机的内参数矩阵M1、外参数矩阵M2、畸变系数和位姿变换矩阵T12, T23, T34……,具体步骤如下:(1) Place multiple cameras at the upper left, upper right and front of the manipulator and calibrate them, obtaining for each camera the intrinsic parameter matrix M1, the extrinsic parameter matrix M2, the distortion coefficients, and the pairwise pose transformation matrices T12, T23, T34, ...; the specific steps are as follows:

(1-1)如图1所示,设空间点P在摄像机坐标系中的坐标为P(xC, yC, zC),使空间点P沿着通过光心的光线投影到一个图像平面上,在图像平面中建立图像坐标系,得到该空间点P在图像坐标系中的投影坐标为P(x, y),将该投影坐标表示为计算机图像坐标系中的坐标P(u, v)。由小孔成像原理,空间点P在摄像机坐标系中的坐标(xC, yC, zC)与投影坐标P(x, y)的坐标变换关系为:x = f·xC/zC,y = f·yC/zC,其中f为摄像机焦距;(1-1) As shown in Figure 1, let the coordinates of a spatial point P in the camera coordinate system be P(xC, yC, zC). Project P along the ray through the optical center onto an image plane, and establish an image coordinate system in that plane; the projection of P in the image coordinate system is P(x, y), which is expressed in the computer image coordinate system as P(u, v). From the pinhole imaging model, the camera-frame coordinates (xC, yC, zC) and the projected coordinates P(x, y) satisfy x = f·xC/zC, y = f·yC/zC, where f is the focal length of the camera;

(1-2)设定一个成像单元,该成像单元在上述图像坐标系中x轴和y轴方向上的物理尺寸分别为dx和dy,成像单元中任意一个像素在计算机图像坐标系下的坐标(u, v)与图像坐标系下的坐标(x, y)存在以下坐标变换关系:u = x/dx + u0,v = y/dy + v0,其中O(u0, v0)为位于摄像机主光轴上任意一点在计算机图像坐标系中的成像坐标,该成像坐标即主点坐标;(1-2) Define an imaging cell whose physical dimensions along the x- and y-axes of the image coordinate system are dx and dy. For any pixel of the imaging cell, the coordinates (u, v) in the computer image coordinate system and (x, y) in the image coordinate system are related by u = x/dx + u0, v = y/dy + v0, where O(u0, v0) is the imaging coordinate, in the computer image coordinate system, of any point on the camera's principal optical axis, i.e. the principal point;

(1-3)根据上述步骤(1-1)和步骤(1-2)的坐标变换关系,得到摄像机的内参数矩阵M1为:M1 = [f/dx, 0, u0; 0, f/dy, v0; 0, 0, 1],其中,f/dx为摄像机焦距f在摄像机图像平面坐标系u轴上的归一化焦距,f/dy为其在v轴上的归一化焦距;(1-3) From the coordinate transformations of steps (1-1) and (1-2), the intrinsic parameter matrix of the camera is M1 = [f/dx, 0, u0; 0, f/dy, v0; 0, 0, 1], where f/dx is the normalized focal length of f on the u-axis of the camera image plane coordinate system and f/dy is the normalized focal length on the v-axis;

(1-4)设空间点P在世界坐标系中的坐标为(xW, yW, zW),空间点P在摄像机坐标系中的坐标与其在世界坐标系中的坐标存在以下关系:[xC; yC; zC] = [R t]·[xW; yW; zW; 1],其中R为3×3的单位正交旋转矩阵,t为摄像机坐标系和世界坐标系之间的三维平移向量,定义[R t]为摄像机外参数矩阵M2;(1-4) Let the coordinates of P in the world coordinate system be (xW, yW, zW); its camera-frame and world-frame coordinates are related by [xC; yC; zC] = [R t]·[xW; yW; zW; 1], where R is a 3×3 orthonormal rotation matrix and t is the 3-D translation vector between the camera and world coordinate systems; [R t] is defined as the extrinsic parameter matrix M2;

(1-5)根据上述步骤(1-3)和步骤(1-4)得到的摄像机内参数M1和外参数M2,空间点P在世界坐标系中的坐标与其在计算机图像坐标系中的投影坐标之间的投影变换关系为:zC·[u; v; 1] = M1·M2·[xW; yW; zW; 1];(1-5) Using the intrinsic parameters M1 from step (1-3) and the extrinsic parameters M2 from step (1-4), the projection from world coordinates to computer-image coordinates is zC·[u; v; 1] = M1·M2·[xW; yW; zW; 1];

(1-6)设摄像机的径向畸变参数为k1、k2和k3,使其满足以下方程组:x = xu(1 + k1·r² + k2·r⁴ + k3·r⁶),y = yu(1 + k1·r² + k2·r⁴ + k3·r⁶),其中(x, y)为空间点P在图像坐标系中的原始位置坐标,(xu, yu)为空间点P通过成像原理得到的理想位置坐标,r² = xu² + yu²;求解该方程组,得到摄像机的径向畸变参数k1、k2和k3;(1-6) Let k1, k2 and k3 be the radial distortion parameters of the camera, satisfying x = xu(1 + k1·r² + k2·r⁴ + k3·r⁶), y = yu(1 + k1·r² + k2·r⁴ + k3·r⁶), where (x, y) are the original (observed) coordinates of P in the image coordinate system, (xu, yu) are the ideal coordinates given by the imaging model, and r² = xu² + yu²; solving these equations yields k1, k2 and k3;

设摄像机的切向畸变参数为p1和p2,使其满足以下方程组:x = xu + 2p1·xu·yu + p2(r² + 2xu²),y = yu + 2p2·xu·yu + p1(r² + 2yu²),求解该方程组,得到摄像机的切向畸变参数p1和p2;Let p1 and p2 be the tangential distortion parameters of the camera, satisfying x = xu + 2p1·xu·yu + p2(r² + 2xu²), y = yu + 2p2·xu·yu + p1(r² + 2yu²); solving these equations yields p1 and p2;

(1-7)根据上述步骤(1-6)得到的径向畸变参数k1、k2、k3和切向畸变参数p1、p2,空间点P的原始位置坐标与理想位置坐标之间存在以下转换关系:(1-7) With the radial distortion parameters k1, k2, k3 and the tangential distortion parameters p1, p2 obtained in step (1-6), the original and ideal coordinates of P are related by:

x = xu(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2p1·xu·yu + p2(r² + 2xu²)
y = yu(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2p2·xu·yu + p1(r² + 2yu²);

空间点P的理想位置坐标与原始位置坐标之间存在以下转换关系:The inverse relation, from original coordinates back to ideal coordinates, is:

xu = [x − 2p1·x·y − p2(r² + 2x²)] / (1 + k1·r² + k2·r⁴ + k3·r⁶)
yu = [y − 2p2·x·y − p1(r² + 2y²)] / (1 + k1·r² + k2·r⁴ + k3·r⁶);

(1-8)遍历多个摄像机中的每个摄像机,重复步骤(1-1)~步骤(1-7),完成所有摄像机的标定;(1-8) Repeat steps (1-1) to (1-7) for each of the cameras, completing the calibration of every camera;

(1-9)设多个摄像机中第一个摄像机和第二个摄像机之间的旋转矩阵为R12,平移矩阵为Tran12,得到第一个摄像机和第二个摄像机之间的位姿变换矩阵T12:T12 = [R12 Tran12];(1-9) Let R12 be the rotation matrix and Tran12 the translation matrix between the first and second cameras; the pose transformation matrix between them is T12 = [R12 Tran12];

(1-10)遍历多个摄像机中的任意两个摄像机,重复步骤(1-9),得到多个摄像机之间的位姿变换矩阵T12, T23, T34……;(1-10) Repeat step (1-9) for every pair of cameras to obtain the pose transformation matrices T12, T23, T34, ... between the cameras;

(2)使经过上述步骤(1)标定后的多台摄像机主动跟踪待抓取目标,具体包括以下步骤:(2) The cameras calibrated in step (1) then actively track the target to be grasped, as follows:

(2-1)将多个经过上述步骤(1)标定后的摄像机置于机械臂的左上、右上和正面,设待抓取目标中心S在摄像机成像平面Pd上的投影为S',如图2所示,计算S'与摄像机成像平面中心C之间的距离d,设定距离阈值th。对距离d进行判断:若d > th,则向摄像机发送调整指令,使摄像机镜头向使距离d减小的方向转动,直到d ≤ th;若d ≤ th,则摄像机镜头保持原位;(2-1) Place the cameras calibrated in step (1) at the upper left, upper right and front of the manipulator. Let S' be the projection of the target center S onto the camera imaging plane Pd, as shown in Figure 2; compute the distance d between S' and the center C of the imaging plane, and set a distance threshold th. If d > th, send the camera an adjustment command that turns the lens in the direction that reduces d, until d ≤ th; if d ≤ th, the lens keeps its current position;

(2-2)采用粒子滤波跟踪算法,多台摄像机联合主动跟踪待抓取目标,具体步骤如下:(2-2) Using a particle filter tracking algorithm, the cameras jointly and actively track the target to be grasped, as follows:

(2-2-1)使多台摄像机中的任意一台摄像机获取待抓取目标的视频图像序列,对视频图像序列中t0时刻的图像进行手工标注,标注出其中的待抓取目标区域;(2-2-1) One of the cameras acquires a video image sequence of the target to be grasped; the image at time t0 in the sequence is annotated by hand to mark the target region;

(2-2-2)以上述待抓取目标区域的中心为原点,产生一个粒子集{x^(m)},m = 1, ..., M,其中M为粒子集中的粒子个数,每个粒子代表一个待抓取目标可能存在的区域;设粒子集中粒子的运动服从正态分布,粒子集中的每个粒子独立传播,得到t时刻的粒子集{x_t^(m)}和t+1时刻的粒子集{x_(t+1)^(m)};(2-2-2) Taking the center of the target region as the origin, generate a particle set {x^(m)}, m = 1, ..., M, where M is the number of particles and each particle represents a region where the target may be. The particle motion is assumed normally distributed and each particle propagates independently, giving the particle set {x_t^(m)} at time t and {x_(t+1)^(m)} at time t+1;

(2-2-3)设t0时刻粒子集的参考直方图为q* = q*(n),n = 1, 2, ..., L,即参考直方图共有L个灰度级;设t时刻粒子集的颜色直方图为qt(x) = qt(n; x),x为粒子集中的粒子。对t时刻粒子集中的每个粒子独立传播后得到的t+1时刻粒子集进行观测,得到其中每个粒子区域的颜色直方图,并计算每个粒子区域的颜色直方图与参考直方图之间的巴氏距离D;定义粒子权值为ω,使ω = D,粒子数N的取值为200;(2-2-3) Let the reference histogram of the particle set at time t0 be q* = q*(n), n = 1, 2, ..., L, so the reference histogram has L gray levels, and let the color histogram of the particle set at time t be qt(x) = qt(n; x), where x is a particle of the set. After each particle of the time-t set propagates independently, observe the resulting time-(t+1) particle set, obtain the color histogram of each particle region, and compute the Bhattacharyya distance D between each particle region's color histogram and the reference histogram; the particle weight ω is defined by ω = D, and the number of particles N is set to 200;

(2-2-4)对上述粒子权值进行后验概率计算,得到t+1时刻粒子的存在概率期望值E(x_(t+1)),即以归一化权值对粒子状态加权求和,其中ω_(t+1)为t+1时刻每个粒子的权值;(2-2-4) Compute the posterior probability from the particle weights to obtain the expected value E(x_(t+1)) of the particles' existence probability at time t+1, i.e. the weighted sum of the particle states under the normalized weights, where ω_(t+1) is the weight of each particle at time t+1;

(2-2-5)将上述期望值E(x_(t+1))作为待抓取目标在t+1时刻的存在概率最优估计,并将粒子集中存在概率最优的粒子所覆盖区域的中心作为待抓取目标区域中心;(2-2-5) Take E(x_(t+1)) as the optimal estimate of the target's existence probability at time t+1, and take the center of the region covered by the particle with the optimal existence probability as the center of the target region;

(2-2-6)重复上述步骤(2-2-2)~步骤(2-2-5),得到待抓取目标从时刻t到下一时刻t+1的存在概率最优估计和待抓取目标区域中心;(2-2-6) Repeat steps (2-2-2) to (2-2-5) to obtain, from each time t to the next time t+1, the optimal estimate of the target's existence probability and the center of the target region;

(2-2-7)重复步骤(2-1),使摄像机镜头对准上述步骤(2-2-6)的待抓取目标区域中心;(2-2-7) Repeat step (2-1) to align the camera lens with the center of the target area to be captured in the above step (2-2-6);

(2-2-8)提取上述步骤(2-2-7)的待抓取目标区域的局部特征点;(2-2-8) Extract the local feature points of the target region obtained in step (2-2-7);

(2-2-9)使多台摄像机中的其他几台摄像机获取待抓取目标的视频图像序列,从多个视频图像序列中分别提取局部特征点(即SIFT局部特征点);(2-2-9) The remaining cameras acquire video image sequences of the target, and local feature points (namely SIFT local feature points) are extracted from each of these sequences;

(2-2-10)将步骤(2-2-8)提取到的待抓取目标区域的局部特征点与步骤(2-2-9)提取到的所有局部特征点进行匹配,得到其他几台摄像机中待抓取目标区域的精确区域;(2-2-10) Match the local feature points of the target region extracted in step (2-2-8) against all local feature points extracted in step (2-2-9), obtaining the precise target region in each of the other cameras;

(2-2-11)重复上述步骤(2-2-2)~步骤(2-2-10),使每台摄像机分别跟踪各自得到的待抓取目标精确区域,实现多摄像机联合主动跟踪目标。(2-2-11) Repeat steps (2-2-2) to (2-2-10) so that each camera tracks the precise target region it has obtained, realizing joint active target tracking with multiple cameras.

Claims (1)

1. A method for joint active target tracking with multiple cameras for manipulator teleoperation, characterized in that the method comprises the following steps:

(1) Place multiple cameras at the upper-left, upper-right and front of the manipulator and calibrate them, obtaining for each camera the intrinsic parameter matrix M1, the extrinsic parameter matrix M2, the distortion coefficients, and the inter-camera pose transformation matrices T12, T23, T34, …; the specific steps are as follows:

(1-1) Let the coordinates of a spatial point P in the camera coordinate system be P(xC, yC, zC). Project P along the ray through the optical center onto an image plane, establish an image coordinate system on that plane, denote the projection of P in the image coordinate system as P(x, y), and denote its coordinates in the computer image coordinate system as P(u, v). By the pinhole imaging principle, the transformation between the camera coordinates (xC, yC, zC) of P and its projection P(x, y) is: x = f·xC/zC, y = f·yC/zC, where f is the focal length of the camera;

(1-2) Define an imaging cell whose physical dimensions along the x-axis and y-axis of the image coordinate system are dx and dy, respectively. The coordinates (u, v) of any pixel in the computer image coordinate system and its coordinates (x, y) in the image coordinate system satisfy: u = x/dx + u0, v = y/dy + v0, where O(u0, v0) is the imaging coordinate, in the computer image coordinate system, of a point on the principal optical axis of the camera, i.e. the principal point;

(1-3) From the coordinate transformations of steps (1-1) and (1-2), the intrinsic parameter matrix of the camera is:

M1 = | f/dx   0    u0 |
     |  0   f/dy   v0 |
     |  0     0     1 |

where f/dx is the normalized focal length of f along the u-axis of the camera image plane coordinate system and f/dy is the normalized focal length along the v-axis;

(1-4) Establish a world coordinate system and let the coordinates of P in it be (xW, yW, zW). The camera coordinates and world coordinates of P satisfy: [xC, yC, zC]^T = [R t]·[xW, yW, zW, 1]^T, where R is a 3×3 orthonormal rotation matrix and t is the three-dimensional translation vector between the camera and world coordinate systems; [R t] is defined as the extrinsic parameter matrix M2;

(1-5) From the intrinsic parameters M1 and extrinsic parameters M2 obtained in steps (1-3) and (1-4), the projective transformation between the world coordinates of P and its projected coordinates in the computer image coordinate system is: P = M1·M2;

(1-6) Let the radial distortion parameters of the camera be k1, k2 and k3, satisfying the equations: x = xu(1 + k1·r² + k2·r⁴ + k3·r⁶), y = yu(1 + k1·r² + k2·r⁴ + k3·r⁶); solve these equations to obtain k1, k2 and k3, where (x, y) are the original (distorted) coordinates of P in the image coordinate system, (xu, yu) are the ideal coordinates of P given by the imaging model, and r is the radial distance of the point from the distortion center. Let the tangential distortion parameters be p1 and p2, satisfying: x = xu + 2p1·xu·yu + p2(r² + 2xu²), y = yu + 2p2·xu·yu + p1(r² + 2yu²); solve these equations to obtain p1 and p2;

(1-7) With the radial distortion parameters k1, k2, k3 and tangential distortion parameters p1, p2 from step (1-6), the original coordinates of P are obtained from its ideal coordinates by:

x = xu(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2p1·xu·yu + p2(r² + 2xu²)
y = yu(1 + k1·r² + k2·r⁴ + k3·r⁶) + 2p2·xu·yu + p1(r² + 2yu²);

and the ideal coordinates are recovered from the original coordinates by:

xu = [x − 2p1·xy + p2(r² + 2x²)] / (1 + k1·r² + k2·r⁴ + k3·r⁶)
yu = [y − 2p2·xy + p1(r² + 2x²)] / (1 + k1·r² + k2·r⁴ + k3·r⁶);

(1-8) Repeat steps (1-1) to (1-7) for every one of the multiple cameras to complete camera calibration;

(1-9) Let R12 be the rotation matrix between the first and second of the multiple cameras and Tran12 the translation matrix between them; the pose transformation matrix between the first and second cameras is then T12 = [R12 Tran12];

(1-10) Repeat step (1-9) for every pair of cameras to obtain the pose transformation matrices T12, T23, T34, …;

(2) Have the cameras calibrated in step (1) actively track the target to be grasped, specifically:

(2-1) Place the cameras calibrated in step (1) at the upper-left, upper-right and front of the manipulator. Let S' be the projection of the target center S onto the camera imaging plane Pd, compute the distance d between S' and the center C of the imaging plane, and set a distance threshold th. If d > th, send an adjustment command to the camera to rotate the lens in the direction that reduces d until d ≤ th; if d ≤ th, keep the lens in its current position;

(2-2) Use a particle filter tracking algorithm so that the cameras jointly and actively track the target to be grasped, specifically:

(2-2-1) Have any one of the multiple cameras acquire a video image sequence of the target to be grasped, manually annotate the image at time t0 in the sequence, and mark the target region in it;

(2-2-2) Taking the center of the target region as the origin, generate a particle set, where m = 1, …, M indexes the particles and each particle represents a region where the target may exist. Assume the motion of the particles obeys a normal distribution and propagate each particle independently to obtain the particle set at time t and the particle set at time t+1;

(2-2-3) Let q* be the reference histogram of the particle set at time t0, with L gray levels in total, and let q_t(x) = q_t(n; x), n = 1, 2, …, L, be the color histogram of the particle set at time t, where x is a particle of the set. After each particle at time t has propagated independently, observe the resulting particle set at time t+1, obtain the color histogram of each particle region and the reference histogram, and compute the Bhattacharyya distance D between the color histogram of each particle region and the reference histogram; define the particle weight ω so that ω = D, with N taken as 200;

(2-2-4) Compute the posterior probability from the particle weights above to obtain the expected value E(x_{t+1}) of the particle existence probability at time t+1, where ω_{t+1} is the weight of each particle at time t+1;

(2-2-5) Take the expected value E(x_{t+1}) above as the optimal estimate of the existence probability of the target to be grasped at time t+1, and take the center of the region covered by the particle with the optimal existence probability in the particle set as the center of the target region to be grasped;

(2-2-6) Repeat steps (2-2-2) to (2-2-5) to obtain, from each time t to the next time t+1, the optimal estimate of the existence probability of the target to be grasped and the center of the target region;

(2-2-7) Repeat step (2-1) to aim the camera lens at the center of the target region obtained in step (2-2-6);

(2-2-8) Extract the local feature points of the target region obtained in step (2-2-7);

(2-2-9) Have the other cameras among the multiple cameras acquire video image sequences of the target to be grasped, and extract local feature points from each of these sequences;

(2-2-10) Match the local feature points of the target region extracted in step (2-2-8) against all local feature points extracted in step (2-2-9) to obtain the precise target region in each of the other cameras;

(2-2-11) Repeat steps (2-2-2) to (2-2-10) so that each camera tracks its own precise target region, thereby achieving joint active tracking of the target by multiple cameras.
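The pinhole projection of claims (1-1)–(1-2) and the combined distortion model of claim (1-7) can be exercised numerically. The focal length, principal point and distortion coefficients below are invented example values, not calibration results, and for simplicity the distortion is applied directly to image-plane coordinates:

```python
def project(point_c, f=800.0, dx=1.0, dy=1.0, u0=320.0, v0=240.0):
    """Pinhole projection of a camera-frame point (xC, yC, zC) to pixel (u, v),
    per x = f*xC/zC, y = f*yC/zC and u = x/dx + u0, v = y/dy + v0."""
    xC, yC, zC = point_c
    x, y = f * xC / zC, f * yC / zC
    return (x / dx + u0, y / dy + v0)

def distort(xu, yu, k=(0.1, 0.0, 0.0), p=(0.001, 0.001)):
    """Map ideal coordinates (xu, yu) to distorted coordinates (x, y) using
    the combined radial + tangential model of claim (1-7)."""
    r2 = xu * xu + yu * yu
    radial = 1 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
    x = xu * radial + 2 * p[0] * xu * yu + p[1] * (r2 + 2 * xu * xu)
    y = yu * radial + 2 * p[1] * xu * yu + p[0] * (r2 + 2 * yu * yu)
    return (x, y)

print(project((0.1, -0.05, 2.0)))   # pixel coordinates of a sample 3-D point
```

Note that the point at the distortion center is left unmoved by the model, as expected, since every distortion term vanishes when xu = yu = 0.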
CN201510072044.XA 2015-02-11 2015-02-11 For the multiple-camera associating active tracing order calibration method of mechanical arm remote operating Active CN104647390B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510072044.XA CN104647390B (en) 2015-02-11 2015-02-11 For the multiple-camera associating active tracing order calibration method of mechanical arm remote operating


Publications (2)

Publication Number Publication Date
CN104647390A CN104647390A (en) 2015-05-27
CN104647390B true CN104647390B (en) 2016-02-10

Family

ID=53239282

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510072044.XA Active CN104647390B (en) 2015-02-11 2015-02-11 For the multiple-camera associating active tracing order calibration method of mechanical arm remote operating

Country Status (1)

Country Link
CN (1) CN104647390B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447869B (en) * 2015-11-30 2019-02-12 四川华雁信息产业股份有限公司 Camera self-calibration method and device based on particle swarm optimization algorithm
CN106023139B (en) * 2016-05-05 2019-05-17 北京圣威特科技有限公司 A kind of indoor tracking and positioning method and system based on multiple-camera
CN106934353B (en) * 2017-02-28 2020-08-04 北京奥开信息科技有限公司 Face recognition and active tracking method for endowment robot
CN107150343B (en) * 2017-04-05 2019-07-23 武汉科技大学 A kind of system based on NAO robot crawl object
CN108074264A (en) * 2017-11-30 2018-05-25 深圳市智能机器人研究院 A kind of classification multi-vision visual localization method, system and device
CN111481293B (en) * 2020-04-16 2021-09-17 首都医科大学 Multi-viewpoint optical positioning method and system based on optimal viewpoint selection
CN114074320B (en) * 2020-08-10 2023-04-18 库卡机器人(广东)有限公司 Robot control method and device
CN113850860A (en) * 2021-08-13 2021-12-28 东莞理工学院 A teleoperation attitude tracking estimation system and method
CN113687627B (en) * 2021-08-18 2022-08-19 太仓中科信息技术研究院 Target tracking method based on camera robot
CN117464692B (en) * 2023-12-27 2024-03-08 中信重工机械股份有限公司 Lining plate grabbing mechanical arm control method based on structured light vision system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2861699A (en) * 1950-10-16 1958-11-25 Gen Mills Inc Method and apparatus for performing operations at a remote point
US5300869A (en) * 1992-07-30 1994-04-05 Iowa State University Research Foundation, Inc. Nonholonomic camera space manipulation
CN1590040A (en) * 2003-09-03 2005-03-09 中国科学院自动化研究所 Pickup camera self calibration method based on robot motion
WO2009059716A1 (en) * 2007-11-05 2009-05-14 Sebastian Repetzki Pointing device and method for operating the pointing device
CN103170973A (en) * 2013-03-28 2013-06-26 上海理工大学 Man-machine cooperation device and method based on Kinect video camera
CN103209809A (en) * 2010-05-14 2013-07-17 康耐视公司 System and method for robust calibration between a machine vision system and a robot


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Stereo-vision-based tracking and detection system for a serial robot; Sun Meixia; China Master's Theses Full-text Database, Information Science and Technology, 2012, No. 07 (published 2012-07-15); full text *
Research on key technologies of multi-view vision systems for intelligent spaces; Liu Yahui; China Doctoral Dissertations Full-text Database, Information Science and Technology, 2011, No. 12 (published 2011-12-15); full text *

Also Published As

Publication number Publication date
CN104647390A (en) 2015-05-27

Similar Documents

Publication Publication Date Title
CN104647390B (en) For the multiple-camera associating active tracing order calibration method of mechanical arm remote operating
CN102800103B (en) Unmarked motion capturing method and device based on multi-visual angle depth camera
CN106553195B (en) Object 6DOF localization method and system during industrial robot crawl
CN104354167B (en) Robot hand-eye calibration method and device
CN102927908B (en) Robot eye-on-hand system structured light plane parameter calibration device and method
CN107471218B (en) A hand-eye coordination method for a dual-arm robot based on polyocular vision
CN102842117B (en) Method for correcting kinematic errors in microscopic vision system
CN113276106B (en) Climbing robot space positioning method and space positioning system
CN109986560B (en) Mechanical arm self-adaptive grabbing method for multiple target types
CN110555889A (en) CALTag and point cloud information-based depth camera hand-eye calibration method
CN107030693B (en) A target tracking method for live working robot based on binocular vision
WO2024027647A1 (en) Robot control method and system and computer program product
CN107914272A (en) A kind of method of seven freedom robot assemblies crawl target object
JP2011175477A (en) Three-dimensional measurement apparatus, processing method and program
CN106403924B (en) Robot based on depth camera quickly positions and Attitude estimation method
CN106780389A (en) A kind of fisheye image correcting method and device based on Coordinate Conversion
CN114851201A (en) Mechanical arm six-degree-of-freedom vision closed-loop grabbing method based on TSDF three-dimensional reconstruction
WO2020063058A1 (en) Calibration method for multi-degree-of-freedom movable vision system
CN113208731B (en) Hand-eye calibration method of surgical puncture robot based on binocular vision system
CN114299039B (en) Robot and collision detection device and method thereof
CN115578460A (en) Robot Grasping Method and System Based on Multimodal Feature Extraction and Dense Prediction
JP6410411B2 (en) Pattern matching apparatus and pattern matching method
CN106335061A (en) Hand-eye relation calibration method based on four-freedom-degree robot
CN116766194A (en) Binocular vision-based disc workpiece positioning and grabbing system and method
CN117237414A (en) Grabbing and guiding method and system based on binocular images under mobile robot platform

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant