CN113487683B - A Target Tracking System Based on Trinocular Vision - Google Patents
A Target Tracking System Based on Trinocular Vision
- Publication number
- CN113487683B (granted publication of application CN202110800524.9A / CN202110800524A)
- Authority
- CN
- China
- Prior art keywords
- ptz
- camera
- bolt
- image
- coordinates
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 claims abstract description 29
- 239000011159 matrix material Substances 0.000 claims description 48
- 238000004422 calculation algorithm Methods 0.000 claims description 36
- 238000012937 correction Methods 0.000 claims description 23
- 238000004590 computer program Methods 0.000 claims description 18
- 238000013519 translation Methods 0.000 claims description 12
- 230000009466 transformation Effects 0.000 claims description 10
- 230000008569 process Effects 0.000 claims description 9
- 230000003287 optical effect Effects 0.000 claims description 7
- 238000002789 length control Methods 0.000 claims description 6
- 238000001514 detection method Methods 0.000 claims description 4
- 230000008030 elimination Effects 0.000 claims description 4
- 238000003379 elimination reaction Methods 0.000 claims description 4
- 238000007781 pre-processing Methods 0.000 claims description 3
- 238000004364 calculation method Methods 0.000 description 10
- 238000010586 diagram Methods 0.000 description 7
- 238000003384 imaging method Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 1
- 230000001934 delay Effects 0.000 description 1
- 238000009826 distribution Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 238000004519 manufacturing process Methods 0.000 description 1
- 238000013507 mapping Methods 0.000 description 1
- 238000013178 mathematical model Methods 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000012545 processing Methods 0.000 description 1
- 238000003672 processing method Methods 0.000 description 1
- 238000011160 research Methods 0.000 description 1
- 238000003860 storage Methods 0.000 description 1
- 230000001131 transforming effect Effects 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The invention discloses a target tracking system based on trinocular vision. The system comprises a trinocular vision module, a camera calibration module, a target position acquisition module, a target position prediction module, a virtual bolt-camera binocular vision calibration module, a disparity map acquisition module, a scene depth information acquisition module, a PT parameter acquisition module, a Z parameter acquisition module, and a tracking module. A method is proposed that constructs a virtual binocular vision system by moving the bolt camera, so that scene depth information can be estimated in advance; the target's spatial position is then estimated from the depth constraint at its grounding points, so that the PTZ control parameters are uniquely determined and tracking accuracy is improved.
Description
Technical Field
The present invention relates to target tracking systems, and in particular to a target tracking system based on trinocular vision.
Background Art
At key and sensitive locations of some military or civilian facilities, such as airport aprons, oil depots, and chemical plants, higher performance is demanded of intelligent video surveillance.
In the prior art, binocular vision is usually used for target tracking: a binocular vision system is built from a bolt camera or an omnidirectional camera combined with a PTZ dome camera. The bolt or omnidirectional camera first detects the moving target, and the PTZ dome then tracks it and captures zoomed-in snapshots.
Tracking moving targets with a PTZ dome camera has been a research hotspot in recent years. Existing methods estimate the PTZ control parameters only from the target's two-dimensional image coordinates, ignoring the distance between the target and the dome, which produces large tracking errors in scenes whose depth varies strongly. Some methods do account for the influence of the Z coordinate on target localization and estimate the target's depth from cues such as vanishing points, but these special constraints impose additional requirements on the scene content and apply only when the scene happens to contain such reference information.
In summary, existing target tracking methods suffer from inaccurate tracking results.
Summary of the Invention
The purpose of the present invention is to provide a target tracking system based on trinocular vision, so as to solve the problem of inaccurate tracking results in prior-art target tracking methods.
To accomplish the above task, the present invention adopts the following technical solution:
A target tracking system based on trinocular vision, characterized in that the system comprises a trinocular vision module, a camera calibration module, a target position acquisition module, a target position prediction module, a virtual bolt-camera binocular vision calibration module, a disparity map acquisition module, a scene depth information acquisition module, a PT parameter acquisition module, a Z parameter acquisition module, and a tracking module.
The trinocular vision module is used to capture images containing a moving target. It comprises one bolt camera and a pair of PTZ dome cameras, all mounted on a slide rail; the bolt camera moves along the rail between the two PTZ domes. At its zero position, each PTZ dome's optical axis has the same orientation as the bolt camera, and the two PTZ domes have identical parameters.
The trinocular vision module captures images containing the same moving target, yielding a bolt-camera image, a first PTZ dome image, and a second PTZ dome image.
The camera calibration module calibrates the intrinsic and extrinsic parameters of the bolt camera and the two PTZ domes, obtaining the bolt camera's intrinsics and extrinsics, the first PTZ dome's intrinsics, initial rotation matrix R0_1, and translation vector t0_1, and the second PTZ dome's intrinsics, initial rotation matrix R0_2, and translation vector t0_2.
The target position acquisition module preprocesses the bolt-camera image to obtain the target region coordinates, namely the coordinates of the target's bounding rectangle: the four rectangle vertices and one center point, mi(ui, vi), i = 1, 2, 3, 4, 5.
The target position prediction module predicts from the target region coordinates to obtain predicted target region coordinates in one-to-one correspondence with them. The predicted coordinates include two grounding points m3 and m4, which are vertices of the rectangle; the line through these two points is parallel to the ground, and its distance to the ground is smaller than that of the line through the other two vertices of the predicted target region.
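The grounding-point selection above can be sketched as follows, assuming image coordinates in which v increases downward (a common convention, though the patent does not state it explicitly): the two vertices nearest the ground are then the pair with the largest v values.

```python
def grounding_points(vertices):
    """Pick the two rectangle vertices closest to the ground.

    vertices: four (u, v) image coordinates of the predicted target
    rectangle. Assumes v grows downward, so the bottom edge (largest v)
    is the edge nearest the ground plane.
    """
    bottom = sorted(vertices, key=lambda p: p[1], reverse=True)[:2]
    # Return the left point first so (m3, m4) has a stable order.
    return tuple(sorted(bottom))

m3, m4 = grounding_points([(10, 5), (50, 5), (10, 80), (50, 80)])
print(m3, m4)  # (10, 80) (50, 80)
```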
The virtual bolt-camera binocular vision calibration module stores a first computer program which, when executed by a processor, implements the following steps:
Step A: keep the calibration board fixed, move the bolt camera to the first snap point, and capture an image containing the board, obtaining the first calibration board image PA.
Step B: with the bolt camera fixed at the first snap point, capture an image containing the moving target, obtaining the first bolt-camera image.
Step C: move the bolt camera to the second snap point and capture an image containing the moving target, obtaining the second bolt-camera image.
Step D: with the bolt camera fixed at the second snap point, capture an image containing the calibration board, obtaining the second calibration board image PB. The board lies in the middle of the field of view of the bolt camera (3) both when PB and when PA are captured.
Step E: calibrate using the first calibration board image PA and the second calibration board image PB, obtaining the bolt camera's rotation vector RAB and translation vector tAB.
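Step E can be sketched as follows. If Zhang-style calibration of each board image yields the fixed board's pose (R_A, t_A) and (R_B, t_B) in the camera frame at the two snap points, the relative motion of the bolt camera follows from composing the poses. The 3x3 matrix arithmetic is written out with plain lists; the helper names are illustrative, not from the patent.

```python
def mat_mul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_T(A):
    """3x3 transpose."""
    return [[A[j][i] for j in range(3)] for i in range(3)]

def mat_vec(A, v):
    """3x3 matrix times 3-vector."""
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def relative_pose(R_A, t_A, R_B, t_B):
    """Pose of snap point B relative to snap point A.

    (R_A, t_A) and (R_B, t_B) are the fixed board's poses in the camera
    frame at the two snap points (X_cam = R @ X_board + t). A point seen
    in frame A then maps to frame B by X_B = R_AB @ X_A + t_AB.
    """
    R_AB = mat_mul(R_B, mat_T(R_A))                      # R_AB = R_B R_A^T
    t_AB = [tb - x for tb, x in zip(t_B, mat_vec(R_AB, t_A))]
    return R_AB, t_AB

I3 = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
# A pure sideways slide of the camera shows up as a pure translation:
R_AB, t_AB = relative_pose(I3, [0, 0, 5.0], I3, [-0.5, 0, 5.0])
print(t_AB)  # [-0.5, 0.0, 0.0]
```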
The disparity map acquisition module stores a second computer program which, when executed by a processor, implements the following steps:
Step a: from the rotation vector RAB and translation vector tAB of the bolt camera (3), apply a stereo rectification algorithm to the first and second bolt-camera images, obtaining the reprojection matrix Q and the bolt camera's rotation matrix R, together with the first intrinsic matrix KA and first projection matrix PA of the bolt camera (3) when the first bolt-camera image was captured.
Step b: use Formula I to map the image coordinates (uA, vA) of point T in the first bolt-camera image into camera coordinates, obtaining T's first camera coordinates (x′A, y′A).
Step c: apply the rotation matrix R to the first bolt-camera image and use Formula II to obtain T's first observed coordinates (xA, yA) in the bolt-camera coordinate system.
Step d: reproject the first and second bolt-camera images with the reprojection matrix Q, then run a stereo matching algorithm to obtain the disparity map.
Step e: use Formula III to obtain T's image coordinates (u′A, v′A) in the disparity map.
Step f: from T's image coordinates (u′A, v′A) in the disparity map, obtain the depth of point T.
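The formula images (Formulas I to III) are not reproduced in this text, so the following shows only the textbook depth recovery for a rectified (parallel) stereo pair, which is what the rectification in step a produces. Here `f` is the rectified focal length in pixels and `baseline` the distance between the two snap points; both are stand-in names for illustration.

```python
def depth_from_disparity(f, baseline, d):
    """Depth of a point from its disparity in a rectified stereo pair.

    f        : rectified focal length, in pixels
    baseline : distance between the two virtual camera centers (the
               returned depth has the same unit)
    d        : disparity in pixels (must be positive)
    """
    if d <= 0:
        raise ValueError("disparity must be positive")
    return f * baseline / d

# e.g. f = 700 px, 0.5 m between snap points, disparity 7 px -> 50 m
print(depth_from_disparity(700.0, 0.5, 7.0))  # 50.0
```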
The scene depth information acquisition module feeds the grounding points m3 and m4 of the bolt-camera image into the disparity map acquisition module as point T, obtaining their disparity values d3 and d4.
The scene depth information acquisition module is further used to obtain an approximate disparity value d0 from d3 and d4.
The target position prediction module is further used to obtain, from the approximate disparity value d0, the three-dimensional coordinates XW(5) = (XW(5), YW(5), ZW(5)) of the center point of the predicted target region in the bolt-camera coordinate system.
The PT parameter acquisition module uses Formula IV to obtain the first PTZ dome's pan rotation angle θP_1 and tilt rotation angle θT_1, and the second PTZ dome's pan rotation angle θP_2 and tilt rotation angle θT_2:
where XC_1 = (XC_1, YC_1, ZC_1) are the center point's three-dimensional coordinates in the first PTZ dome's frame, XC_1 = R0_1 XW(5) + t0_1; and XC_2 = (XC_2, YC_2, ZC_2) are its coordinates in the second PTZ dome's frame, XC_2 = R0_2 XW(5) + t0_2.
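Formula IV itself is drawing-only and not reproduced here. A common parameterization consistent with the stated geometry (an assumption, since the patent's exact formula is in the figure) first maps the world point into the dome's frame with XC = R0 XW + t0, then reads pan from the X-Z plane and tilt from the elevation:

```python
import math

def to_dome_frame(R0, t0, Xw):
    """X_C = R0 @ X_W + t0, the transform stated in the patent."""
    return [sum(R0[i][k] * Xw[k] for k in range(3)) + t0[i]
            for i in range(3)]

def pan_tilt(Xc):
    """Pan/tilt angles (radians) pointing the dome's optical axis at the
    dome-frame point Xc = (X, Y, Z). Assumes Z forward, X right, Y down,
    a hypothetical axis convention not stated in the patent."""
    X, Y, Z = Xc
    theta_p = math.atan2(X, Z)                    # pan: rotation about Y
    theta_t = math.atan2(Y, math.hypot(X, Z))     # tilt: elevation angle
    return theta_p, theta_t

# A point straight ahead on the optical axis needs no rotation:
print(pan_tilt([0.0, 0.0, 10.0]))  # (0.0, 0.0)
```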
The Z parameter acquisition module stores a third computer program which, when executed by a processor, implements the following steps:
Step 1: use Formula V to obtain the first PTZ dome's rotation matrix RPT_1 and the second PTZ dome's rotation matrix RPT_2.
Step 2: obtain the coordinates of the four vertices of the predicted target rectangle after projection along the PTZ dome's optical axis: (−Xmax, Ymax), (Xmax, Ymax), (Xmax, −Ymax), and (−Xmax, −Ymax).
Step 3: if the rectangle's aspect ratio is greater than or equal to the dome's aspect ratio, set the X axis as the principal direction; otherwise, set the Y axis as the principal direction.
Step 4: use Formula VI to obtain the focal length along the X axis or along the Y axis,
where k is a constant proportionality coefficient; ZE is the dome's constant zoom control parameter; W1 and W2 are the constant horizontal resolutions of the first and second PTZ domes; and H1 and H2 are their constant vertical resolutions.
Step 5: when the principal direction set in Step 3 is the X axis, solve Formula VII by Newton's method to obtain the control parameter Z;
when the principal direction set in Step 3 is the Y axis, solve Formula VIII by Newton's method to obtain the PTZ dome focal-length control parameter value Z.
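Formulas VII and VIII are likewise only in the drawings, but Step 5 names the solver: Newton's method on a scalar equation in the zoom control parameter Z. The sketch below solves a generic g(Z) = 0 with a numerical derivative; the example fitting function `fx` and its coefficients are purely illustrative (the real fx_1(Z), fy_1(Z), etc. come from the calibration module's fits).

```python
def newton(g, z0, tol=1e-9, max_iter=50):
    """Scalar Newton's method with a central-difference derivative."""
    z = z0
    for _ in range(max_iter):
        h = 1e-6 * max(abs(z), 1.0)
        dg = (g(z + h) - g(z - h)) / (2.0 * h)
        step = g(z) / dg
        z -= step
        if abs(step) < tol:
            break
    return z

# Illustrative use: find the Z where a fitted focal length reaches 900 px,
# assuming a linear fit fx(Z) = 400 + 50*Z (hypothetical coefficients).
fx = lambda Z: 400.0 + 50.0 * Z
Z = newton(lambda Z: fx(Z) - 900.0, z0=1.0)
print(round(Z, 6))  # 10.0
```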
fx_1(Z), fx_2(Z), fy_1(Z), and fy_2(Z) are fitting functions obtained from calibration by the camera calibration module.
The tracking module uses the first PTZ dome's pan angle θP_1 and tilt angle θT_1 and the second PTZ dome's pan angle θP_2 and tilt angle θT_2, obtained by the PT parameter acquisition module, to control the PT angles of the first and second PTZ domes;
and, after the Z parameter acquisition module obtains the focal-length control parameter values of the first and second PTZ domes, controls their focal-length parameters to complete the tracking.
Further, the stereo rectification algorithm in step a of the disparity map acquisition module is the Bouguet stereo rectification algorithm.
Further, both the camera calibration module and the virtual bolt-camera binocular vision calibration module use Zhang Zhengyou's calibration algorithm.
Further, the target position acquisition module processes the bolt-camera image with the DACB foreground detection algorithm to obtain a foreground region containing shadow and the moving target, then applies a shadow elimination algorithm to obtain the target region coordinates.
Further, the target position prediction module uses a Kalman prediction algorithm to predict the target region coordinates, obtaining the predicted target region coordinates.
Compared with the prior art, the present invention has the following technical effects:
1. The trinocular-vision target tracking system provided by the invention constructs a virtual binocular vision system by moving the bolt camera to estimate scene depth information in advance, and estimates the target's spatial position from the depth constraint at its grounding points, so that the PTZ control parameters are uniquely determined and tracking accuracy is improved.
2. The system realizes scene depth acquisition under the virtual binocular vision system and obtains the coordinate correspondence between the live image and the disparity map, improving the accuracy of scene depth acquisition and hence of target tracking.
3. The system exploits the stable parameters of the bolt camera serving as the main camera, combined with the scene depth information obtained during system initialization, to estimate the moving target's three-dimensional coordinates, from which the PTZ control parameters are computed precisely.
4. The system computes the PTZ dome's tracking control parameters from the scene depth information and the grounding points of the target's predicted position, improving the accuracy and convenience of the control-parameter computation.
Brief Description of the Drawings
Fig. 1 is a structural diagram of the trinocular vision module provided by the present invention;
Fig. 2 is a schematic diagram of the bolt camera's movement path on the slide rail in an embodiment of the present invention;
Fig. 3 is a flow chart of the PTZ control parameter computation in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the parallel binocular vision system in an embodiment of the present invention;
Fig. 5 shows the geometric relationship between the predicted target position and the PT angles in an embodiment of the present invention;
Fig. 6 illustrates the description of a moving target in the bolt-camera image in an embodiment of the present invention;
Fig. 7 shows the P-direction angle fitting result of the least-squares fit between the PT control parameters and the actual rotation angles in an embodiment of the present invention;
Fig. 8 shows the T-direction angle fitting result of the least-squares fit between the PT control parameters and the actual rotation angles in an embodiment of the present invention;
Fig. 9 is a schematic diagram of the target position after PT rotation of the PTZ dome in an embodiment of the present invention;
Fig. 10 is a schematic diagram of target rectangle reconstruction in the coordinate system in an embodiment of the present invention.
Reference numerals: 1 – guide rail; 2 – PTZ dome camera; 3 – bolt camera.
Detailed Description of Embodiments
The present invention is described in detail below with reference to the drawings and embodiments, so that those skilled in the art can better understand it. Note that in the following description, detailed descriptions of known functions and designs are omitted where they would obscure the main content of the invention.
Definitions and concepts used in the present invention are explained below:
Bolt camera: a type of surveillance camera with a cuboid body and a C/CS lens mount on the front.
PTZ dome camera: a Pan-Tilt-Zoom dome camera. In security surveillance, PTZ is short for Pan/Tilt/Zoom, denoting a dome surveillance camera whose gimbal moves in all directions (left/right, up/down) and whose lens supports zoom and focus control.
Embodiment
This embodiment discloses a target tracking system based on trinocular vision, comprising a trinocular vision module, a camera calibration module, a target position acquisition module, a target position prediction module, a virtual bolt-camera binocular vision calibration module, a disparity map acquisition module, a scene depth information acquisition module, a PT parameter acquisition module, a Z parameter acquisition module, and a tracking module.
As shown in Fig. 1, the trinocular vision module provided by the invention captures images containing a moving target. It comprises one bolt camera 3 and two PTZ dome cameras 2, mounted on a slide rail 1 set parallel to the horizontal plane; the bolt camera 3 moves along rail 1 between the two PTZ domes 2. At its zero position, each PTZ dome 2's optical axis has the same orientation as bolt camera 3, and the pair of PTZ domes have identical parameters.
The invention exploits the constancy of the bolt camera's intrinsic parameters. As shown in Fig. 1, a virtual binocular vision system is realized by fixing the bolt camera at two different positions on rail 1, and a calibration template image and a scene image are captured at each position. Fig. 2 shows bolt camera 3's movement path along rail 1. The pose of the camera at position B relative to position A is described by the rotation vector rAB and translation vector tAB.
The trinocular vision module captures images containing the same moving target, yielding a bolt-camera image, a first PTZ dome image, and a second PTZ dome image.
The camera calibration module calibrates the intrinsic and extrinsic parameters of the bolt camera and the two PTZ domes, obtaining the bolt camera's intrinsics and extrinsics, the first PTZ dome's intrinsics, initial rotation matrix R0_1, and translation vector t0_1, and the second PTZ dome's intrinsics, initial rotation matrix R0_2, and translation vector t0_2.
The PTZ parameter acquisition procedure provided by the invention is shown in Fig. 3.
The target position acquisition module preprocesses the bolt-camera image to obtain the target region coordinates, namely the coordinates of the target's bounding rectangle: the four rectangle vertices and one center point, mi(ui, vi), i = 1, 2, 3, 4, 5.
In this embodiment, to obtain precise PTZ control parameters at the target's predicted position and steer the PTZ dome so that its optical axis points at the target center, the target's three-dimensional coordinates in the dome's coordinate system must be obtained by coordinate transformation.
Optionally, the target position acquisition module processes the bolt-camera image with the DACB foreground detection algorithm to obtain a foreground region containing shadow and the moving target, then applies a shadow elimination algorithm to obtain the target region coordinates.
In this embodiment, the DACB foreground detection algorithm first yields the foreground region containing shadow and the moving target; the shadow elimination algorithm then removes the shadow component, giving the region description of the moving target and hence the target region coordinates.
所述的目标位置预测模块用于根据所述的目标区域坐标进行预测,获得目标区域预测坐标,所述目标区域预测坐标与所述的目标区域坐标一一对应,所述的目标区域预测坐标中包括2个接地点m3与m4,所述的2个接地点为矩形的顶点,所述的2个接地点连成的线平行于地面且与地面之间的距离小于目标区域预测坐标中其他2个接地点连成的线与地面之间的距离;The target position prediction module is used to perform prediction according to the target area coordinates to obtain target area predicted coordinates, the target area predicted coordinates correspond to the target area coordinates one by one, and the target area predicted coordinates Including two grounding points m 3 and m 4 , the two grounding points are the vertices of a rectangle, the line formed by the two grounding points is parallel to the ground and the distance between the ground and the ground is smaller than the predicted coordinates of the target area The distance between the line connecting the other two grounding points and the ground;
可选地,目标位置预测模块用于利用Kalman预测算法对目标区域坐标进行预测,获得目标区域预测坐标。Optionally, the target position prediction module is used to predict the coordinates of the target area by using the Kalman prediction algorithm to obtain the predicted coordinates of the target area.
理论上,通过背景建模算法得到目标在当前帧的图像位置,结合Kalman预测算法就可得到该目标在ΔT时间后的预测位置,如下式。Theoretically, the image position of the target in the current frame is obtained through the background modeling algorithm, combined with the Kalman prediction algorithm, the predicted position of the target after ΔT time can be obtained, as shown in the following formula.
其中dx,dy为Kalman预测算法基于历史信息给出的物体在X、Y两个方向上的运动速度预测,ΔT为视频延时、控制延时等一系列延时的总和,如下式。Among them, d x and d y are the prediction of the moving speed of the object in the X and Y directions given by the Kalman prediction algorithm based on historical information, and ΔT is the sum of a series of delays such as video delay and control delay, as shown in the following formula.
ΔT=TQJ+ΔTVISCA+ΔTPT ΔT=T QJ +ΔT VISCA +ΔT PT
ΔTPT是PTZ球机将光轴从当前PT角度转动至目标中心所需的时间,转动角度越大,ΔTPT越大。而转动角度又受延迟总时间ΔT影响,即目标在ΔT延时内的移动会对PTZ球机跟踪精度带来影响。可以看出,PT转动角度和延迟时间是相互影响,互相制约的,因而难以同时得到两个参数的精确解。为此,本发明对该问题做了如下处理:ΔT PT is the time required for the PTZ ball camera to rotate the optical axis from the current PT angle to the target center, the larger the rotation angle, the greater the ΔT PT . The rotation angle is affected by the total delay time ΔT, that is, the movement of the target within the ΔT delay time will affect the tracking accuracy of the PTZ dome camera. It can be seen that the PT rotation angle and the delay time are mutually influenced and restricted by each other, so it is difficult to obtain the exact solution of the two parameters at the same time. For this reason, the present invention has done following processing to this problem:
(1)以目标当前位置代替预测位置计算PZ控制需要转动的角度,得到PT延时;(1) Use the current position of the target instead of the predicted position to calculate the angle required for PZ control to obtain the PT delay;
(2)将PT延时代入上式,得到总延时ΔT,并利用其进一步计算各项控制参数。(2) Put the PT delay into the above formula to obtain the total delay ΔT, and use it to further calculate various control parameters.
采用这种类似迭代的处理方法,可以快速得到各项控制参数的近似解,并对PTZ球机实施跟踪控制。Using this similar iterative processing method, the approximate solution of various control parameters can be quickly obtained, and the tracking control of the PTZ ball machine can be implemented.
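The two-pass approximation above can be sketched as follows. The PT-delay model (rotation angle divided by a constant slew rate) and all numeric constants are assumptions for illustration only, not values from the patent.

```python
def predict_with_delay(pos, vel, t_video, t_visca, pt_angle_deg,
                       slew_dps=100.0):
    """Two-pass delay compensation.

    pos, vel     : current image position (u, v) and Kalman velocity (dx, dy)
    t_video      : video pipeline delay in seconds (T_QJ)
    t_visca      : control-protocol delay in seconds (dT_VISCA)
    pt_angle_deg : PT rotation needed to reach the target's *current*
                   position (pass 1 uses the current, not predicted, pose)
    slew_dps     : assumed dome slew rate, degrees per second
    """
    # Pass 1: PT delay from the current position.
    t_pt = pt_angle_deg / slew_dps
    # Pass 2: total delay dT = T_QJ + dT_VISCA + dT_PT, then predict.
    dt = t_video + t_visca + t_pt
    u, v = pos
    dx, dy = vel
    return (u + dx * dt, v + dy * dt)

u_pred, v_pred = predict_with_delay((100.0, 200.0), (10.0, -5.0),
                                    0.1, 0.05, 15.0)
print(round(u_pred, 4), round(v_pred, 4))  # 103.0 198.5
```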
In this embodiment, the Kalman prediction algorithm combined with histogram statistics yields the target's predicted position after a given delay. The result is represented as a rectangle, as shown in Figures 5 and 6; in Figure 5, reference numeral 3 denotes the box camera (the fixed gun-style camera) and numeral 2 the PTZ dome camera. Taking the rectangle's center point C as the center of the moving target, let the coordinates of these five points in the box-camera image be m_i(u_i, v_i), i = 1, 2, 3, 4, 5. Points 3 and 4 can be regarded as the points where the target, after moving for time ΔT, touches the ground within the predicted rectangle; they are called the ground-contact points. In this embodiment these two ground-contact points are used to approximate the three-dimensional coordinates of the rectangle's four vertices and of the center point C.
The virtual box-camera binocular vision calibration module stores a first computer program which, when executed by a processor, implements the following steps:
Step A: keep the position of the calibration board fixed, move the box camera 3 to the first capture point, and take an image containing the calibration board, obtaining the first calibration-board image P_A;
Step B: keep the box camera 3 fixed at the first capture point and take an image containing the moving target, obtaining the first box-camera image;
Step C: move the box camera 3 to the second capture point and take an image containing the moving target, obtaining the second box-camera image;
Step D: keep the box camera 3 fixed at the second capture point and take an image containing the calibration board, obtaining the second calibration-board image P_B; the calibration board lies in the middle of the field of view of the box camera 3 both when the second calibration-board image P_B and when the first calibration-board image P_A is taken;
Step E: perform calibration using the first calibration-board image P_A and the second calibration-board image P_B to obtain the rotation vector R_AB and translation vector t_AB of the box camera 3;
To simplify operation and reduce error, the present invention uses the following four steps to carry out the calibration and the capture of the scene images:
(1) Fix the box camera at point B, capture a scene image S_B, and save it;
(2) Keeping the box camera in place, fix a calibration template slightly left of the center of its field of view (ensuring that after the camera is moved to point A the template still lies in the middle of the field of view), and capture a calibration-template image P_B;
(3) Keeping the calibration template in place, move the box camera to point A and fix it securely (this is its final working position), then capture the second calibration-template image P_A;
(4) Remove the calibration template and capture the second scene image S_A.
Unlike a binocular vision system in the usual sense, the two "eyes" of the binocular system provided by the present invention are in fact the same camera placed at two different positions, so their intrinsic parameters are exactly identical and precisely known; only the relative pose between the two positions needs to be calibrated. Good calibration results, yielding r_AB and t_AB of sufficient accuracy, can therefore be obtained from just two calibration-template images. The GML MatLab Camera Calibration Toolbox is used for this calibration.
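The "same camera, two positions" idea reduces the extrinsic calibration to composing two board poses. As a minimal sketch with plain 3x3 helpers (in practice the toolbox named above performs this step): if solving the template's pose from each image gives world-to-camera poses (R_A, t_A) and (R_B, t_B), the relative pose between the positions follows by elimination.

```python
def matmul3(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose3(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def apply3(A, v):
    """3x3 matrix times 3-vector."""
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def relative_pose(R_A, t_A, R_B, t_B):
    """Same board seen from positions A and B:
    x_A = R_A X + t_A and x_B = R_B X + t_B.  Eliminating X gives
    x_B = R_AB x_A + t_AB with R_AB = R_B R_A^T and t_AB = t_B - R_AB t_A."""
    R_AB = matmul3(R_B, transpose3(R_A))
    t_AB = [tb - x for tb, x in zip(t_B, apply3(R_AB, t_A))]
    return R_AB, t_AB
```

For an ideal purely horizontal slide, R_AB is the identity and t_AB has only an x component, which is exactly the "theoretically zero" behaviour of r_AB, t_y, and t_z discussed below.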
Optionally, Zhang Zhengyou's calibration algorithm is used for the calibration in Step E.
In this implementation the box camera 3 was calibrated; the calibration yields the rotation vector r_AB (after the Rodrigues transform) and the translation vector t_AB. The calibration result is:
Note that all element values of the r_AB vector, as well as the t_y and t_z components of t_AB, should in theory be exactly 0; the vectors actually obtained are not 0 but three small values, indicating a slight deviation during the camera's movement. The smaller these values, the less projective correction the images require during stereo rectification, and the higher the accuracy of the scene depth information finally obtained.
The disparity-map acquisition module stores a second computer program which, when executed by the processor, implements the following steps:
Step a: from the rotation vector R_AB and translation vector t_AB of the box camera 3, apply a stereo rectification algorithm to the first and second box-camera images to obtain the reprojection matrix Q and the box camera's rotation matrix R, and obtain the first intrinsic matrix K_A and first projection matrix P_A of the box camera 3 at the time the first box-camera image was captured;
where:
In the present invention, as shown in Figures 2 and 4, the camera at point A and the camera at point B are one and the same camera, i.e., the two cameras of the binocular pair are identical, so their intrinsic matrices, projection matrices, and so on are also identical. No repeated computation is needed; the disparity map can be obtained using the left camera alone. Alternatively, applying the same method to the camera at point B (the right camera of the pair) gives the second intrinsic matrix K_B and second projection matrix P_B, and the disparity map can equally be obtained using the right camera alone.
Accordingly, the first intrinsic matrix K_A or second intrinsic matrix K_B, and the first projection matrix P_A or second projection matrix P_B, obtained at point A or point B are as follows:
Optionally, the stereo rectification algorithm in step a is the Bouguet rectification algorithm.
In this embodiment, after R_AB and t_AB have been obtained, the Bouguet rectification algorithm is applied to the scene images S_A and S_B to produce the projection matrices required for rectification (distortion correction has already been completed beforehand, so distortion parameters are ignored here and all elements of the distortion vector are 0). Reprojecting the two images completes their row alignment, after which a stereo matching algorithm yields the disparity map.
In this embodiment, the rectified left and right camera matrices K_A and K_B and projection matrices P_A and P_B are given below, with the box camera 3 at point A as the left camera of the binocular pair and the box camera 3 at point B as the right camera:
(Here α_A = α_B = 0; α_A and α_B are the pixel skew coefficients, which, owing to improvements in manufacturing, can be taken as 0 for cameras currently on the market.)
The projection matrix maps a 3D point in homogeneous coordinates to a 2D point in homogeneous coordinates, giving screen coordinates (x/w, y/w). Conversely, given screen coordinates and the camera intrinsic matrix, a two-dimensional point can be reprojected into three dimensions; the reprojection matrix Q is as follows:
All parameters in the formula except c′_x come from the first box-camera image; c′_x is the x-coordinate of the principal point in the second box-camera image. If the principal rays intersect at infinity, then c′_x = c_x and the bottom-right entry is 0.
Given a two-dimensional homogeneous coordinate and the corresponding disparity d, the following formula projects the point into the three-dimensional coordinate system, giving its spatial coordinates (X/W, Y/W, Z/W), which contain the depth information of the spatial point.
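The Q-matrix reprojection just described can be sketched as follows, using the standard Bouguet layout for Q. The sign convention here (a negative signed baseline Tx for a second camera displaced to the right) is an assumption chosen so that recovered depth is positive; with the values below, the familiar depth relation f·B/d = 500·0.1/50 = 1.0 is reproduced.

```python
def make_Q(f, cx, cy, cxp, Tx):
    """Bouguet-style reprojection matrix.  Tx is the SIGNED baseline
    translation of the second camera (assumed negative for a rightward
    shift); cxp is the principal-point x of the second image, so the
    bottom-right entry (cx - cxp)/Tx vanishes when cxp == cx."""
    return [
        [1.0, 0.0, 0.0, -cx],
        [0.0, 1.0, 0.0, -cy],
        [0.0, 0.0, 0.0, f],
        [0.0, 0.0, -1.0 / Tx, (cx - cxp) / Tx],
    ]

def reproject(Q, u, v, d):
    """[X Y Z W]^T = Q [u v d 1]^T; the 3D point is (X/W, Y/W, Z/W)."""
    vec = (u, v, d, 1.0)
    X, Y, Z, W = (sum(Q[r][c] * vec[c] for c in range(4)) for r in range(4))
    return (X / W, Y / W, Z / W)
```

For example, with f = 500 px, principal point (320, 240), a 0.1 m baseline (Tx = -0.1), and disparity d = 50 at pixel (420, 240), the point reprojects to (0.2, 0.0, 1.0): one metre deep, 0.2 m to the side.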
Through the Bouguet rectification algorithm above, all the transformation matrices required for reprojection are obtained, as in the following formula; the image pair is thereby rectified and a parallel binocular stereo vision system, as shown in Figure 4, is virtualized, laying the foundation for the subsequent matching search along epipolar lines and for depth recovery.
In this embodiment, because the algorithm must apply epipolar rectification to the two images during stereo matching, the image coordinates of the resulting scene disparity map and of the live box-camera image are not in one-to-one correspondence; a mapping transformation relates them. In the present invention the camera at point A is regarded as the left camera of the pair; let the camera's intrinsic matrix be K_QJ and its distortion vector d_QJ. The Bouguet rectification algorithm yields the first rotation matrix R_A and the first projection matrix P_A. The box-camera images have already been undistorted, so d_QJ here is filled with zeros.
Step b: use formula I to map the image coordinates (u_A, v_A) of point T in the first box-camera image into camera coordinates, obtaining the first camera coordinates (x′_A, y′_A) of point T;
In this embodiment, the first camera coordinates (x′_A, y′_A) and the second camera coordinates (x′_B, y′_B) of point T are obtained as follows:
Step c: apply a rotation transform to the first box-camera image using the rotation matrix R, and use formula II to obtain the first observed coordinates (x_A, y_A) of point T in the box-camera coordinate system;
In this embodiment, the first observed coordinates (x_A, y_A) and the second observed coordinates (x_B, y_B) of point T in the box-camera coordinate system are obtained:
Step d: reproject the first and second box-camera images using the reprojection matrix Q, then obtain the disparity map with a stereo matching algorithm;
Step e: use formula III to obtain the image coordinates (u′_A, v′_A) of point T in the disparity map;
In this embodiment, the image coordinates (u′_A, v′_A) and (u′_B, v′_B) of point T in the disparity map are obtained:
Step f: from the image coordinates (u′_A, v′_A) of point T in the disparity map, obtain the depth information of point T in the image;
In this embodiment, the depth information of point T in the image can be obtained from its disparity-map coordinates (u′_A, v′_A) or (u′_B, v′_B): given a two-dimensional homogeneous coordinate and the corresponding disparity d, the point can be projected into the three-dimensional coordinate system to obtain its spatial coordinates (X/W, Y/W, Z/W), which contain the depth information of the spatial point.
The scene depth information acquisition module is used to feed the ground-contact points m_3 and m_4 of the box-camera image, each in turn as point T, into the disparity-map acquisition module, obtaining the disparity values d_3 and d_4 of m_3 and m_4;
The scene depth information acquisition module is further used to obtain the approximate disparity value d_0.
In this embodiment, the coordinates m_d_3(u_d_3, v_d_3) and m_d_4(u_d_4, v_d_4) of the ground-contact points m_3 and m_4 in the disparity map are obtained first, their disparity values d_3 and d_4 are read from the scene disparity map, and their mean d_0 is taken as the approximate disparity of the four vertices of the target rectangle and of the center point C. The three-dimensional coordinates of the four vertices and of point C in the box-camera coordinate system, X_W(i) = (X_W(i), Y_W(i), Z_W(i)), i = 1, 2, 3, 4, 5, then follow from the formula.
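The ground-point averaging and the disparity-to-depth step reduce to a couple of lines. Z = f·B/d is the standard relation for a rectified parallel pair; the focal length and baseline below are illustrative values only, not calibration results from the patent.

```python
def approx_disparity(d3, d4):
    """Mean of the two ground-contact disparities, used as the common
    disparity d0 of all four vertices and of the center point C."""
    return 0.5 * (d3 + d4)

def depth_from_disparity(d, f, baseline):
    """Rectified parallel binocular relation Z = f * B / d."""
    return f * baseline / d

d0 = approx_disparity(48.0, 52.0)                    # two measured disparities
Z = depth_from_disparity(d0, f=500.0, baseline=0.1)  # illustrative f and B
```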
The target position prediction module is further used to obtain, from the approximate disparity value d_0, the three-dimensional coordinates X_W(5) = (X_W(5), Y_W(5), Z_W(5)) in the box-camera coordinate system corresponding to the center point of the predicted target-region coordinates;
The PT parameter acquisition module is used to obtain, using formula IV, the pan (P) rotation angle θ_P_1 and the tilt (T) rotation angle θ_T_1 of the first PTZ dome camera, and the pan rotation angle θ_P_2 and tilt rotation angle θ_T_2 of the second PTZ dome camera:
where X_C_1 = (X_C_1, Y_C_1, Z_C_1) are the three-dimensional coordinates of the center point in the first PTZ dome camera's frame, X_C_1 = R_0_1 X_W(5) + t_0_1; and X_C_2 = (X_C_2, Y_C_2, Z_C_2) are the three-dimensional coordinates of the center point in the second PTZ dome camera's frame, X_C_2 = R_0_2 X_W(5) + t_0_2;
In this embodiment, the initial relationships R_0_s and t_0_s (s = 1, 2, denoting the two PTZ dome cameras) between the box-camera coordinate system and the two PTZ dome-camera coordinate systems are used to compute the coordinates of point C in the camera coordinate systems of the first and second PTZ dome cameras.
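A sketch of mapping the center point into a dome camera's frame and reading off pan/tilt angles. The transform X_C = R_0 X_W + t_0 follows the text; the atan2-based angle convention is an assumption, since formula IV itself is not reproduced in this excerpt.

```python
import math

def to_dome_frame(R0, t0, Xw):
    """X_C = R0 * X_W + t0, as in the text (for the s-th dome camera)."""
    return [sum(R0[i][j] * Xw[j] for j in range(3)) + t0[i] for i in range(3)]

def pt_angles(Xc):
    """One common pan/tilt convention (ASSUMED, not the patent's formula):
    pan measured about the Y axis from the optical (Z) axis, tilt measured
    toward the Y axis from the X-Z plane."""
    X, Y, Z = Xc
    theta_p = math.degrees(math.atan2(X, Z))
    theta_t = math.degrees(math.atan2(Y, math.hypot(X, Z)))
    return theta_p, theta_t
```

With an identity R_0 and zero t_0, a point at (1, 0, 1) in front of and to the side of the dome camera requires a 45-degree pan and no tilt.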
This embodiment also provides a method for solving the actual PT parameters, obtained with the following formula:
That is, using the fitting formula for the PT parameters, the control parameters actually required by the PT mechanism are computed with the formula.
To improve the control accuracy of the PTZ dome camera during active tracking, the present invention proposes a time-weighted least-squares fitting algorithm for the PT control parameters.
The present invention adopts the following mathematical model relating the PT control parameters to the angles:
After the PTZ dome camera has been operating for some time, N corresponding data groups of θ and θ′ are available; by the method of least squares, the optimal solution for the parameters a and b in the least-squares sense can be obtained, as in the following formula.
For the PT control parameters, recent data trends reflect the current state of the PT control system better and provide more useful information about the future, while early data contribute less. This embodiment therefore assigns weights according to the age of the data, using an exponential weighting method. To avoid an excessive data volume during long-term operation, this embodiment sets a data validity period: only the most recent N data groups are retained, and expired data take no part in the least-squares computation. To guard against outliers, the present invention identifies and ignores data with excessive error (i.e., data pairs whose control-parameter value deviates strongly from the exact value obtained by direct solution). The present invention takes N = 20 and, according to the value of N, redistributes the difference between 1 and the sum of the weights over the data so that the weights sum to 1.
The specific method is as follows: first let the weight of the newest data group (group N) be w_N = s (0 < s < 1), and the weight of group t be w_t = s(1 - s)^(N - t). Because the sum of these N weights is less than 1, the present invention divides the remaining weight into N equal parts and adds them to the existing N weights, giving the normalized weights of the N data groups as in the following formula. Table 1 lists the computed weights of each data group for N = 20 (with s = 0.2, giving w_Rest = 0.011529); the parameter-solving formula using the weights is as shown in the formula.
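The weighting scheme just described can be written out directly. Note that the undistributed mass is (1 - s)^N, which for N = 20 and s = 0.2 is about 0.011529, matching Table 1's w_Rest; the weighted least-squares fit of θ′ = aθ + b uses the standard weighted normal equations.

```python
def exp_weights(N, s):
    """w_t = s*(1-s)^(N-t) for t = 1..N, then the undistributed mass
    (1-s)^N is split into N equal parts and added back, so sum(w) == 1."""
    w = [s * (1.0 - s) ** (N - t) for t in range(1, N + 1)]
    rest = (1.0 - sum(w)) / N
    return [wi + rest for wi in w]

def weighted_linear_fit(theta, theta_prime, w):
    """Weighted least squares for theta' = a*theta + b."""
    Sw = sum(w)
    Sx = sum(wi * x for wi, x in zip(w, theta))
    Sy = sum(wi * y for wi, y in zip(w, theta_prime))
    Sxx = sum(wi * x * x for wi, x in zip(w, theta))
    Sxy = sum(wi * x * y for wi, x, y in zip(w, theta, theta_prime))
    a = (Sw * Sxy - Sx * Sy) / (Sw * Sxx - Sx * Sx)
    b = (Sy - a * Sx) / Sw
    return a, b
```

On data that lie exactly on θ′ = 2θ + 1 the fit recovers a = 2, b = 1 for any valid weight vector.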
Table 1. Weight distribution for N = 20 (s = 0.2)
Figures 7 and 8 show the weighted least-squares parameter-fitting results after 20 tracking experiments with the left PTZ dome camera: Figure 7 shows the fit of the pan (P) angle and Figure 8 the fit of the tilt (T) angle. The formula below is the fitted function obtained. Note that the parameters of this fitted function keep changing as the number of tracking runs grows and the fitting data are updated. This mechanism of updating the fitted function online adapts effectively to the continually changing rotation-angle error of the PTZ dome camera, ensuring that the fitting of the control parameters remains accurate and sustainable.
The Z parameter acquisition module stores a computer program which, when executed by the processor, implements the following steps:
Step a: use formula V to obtain the rotation matrix R_PT_1 of the first PTZ dome camera and the rotation matrix R_PT_2 of the second PTZ dome camera;
Step b: obtain the coordinates (-X_max, Y_max), (X_max, Y_max), (X_max, -Y_max), and (-X_max, -Y_max) of the four vertices of the rectangle in the predicted target-region coordinates after projection along the PTZ dome camera's optical axis;
Step c: if the aspect ratio of the rectangle is greater than or equal to that of the dome camera, set the X axis as the main direction; otherwise set the Y axis as the main direction;
Step d: use formula VI to obtain the focal length in the X-axis direction or the focal length in the Y-axis direction;
where k is a proportionality coefficient (a constant); Z_E denotes the zoom control parameter of the dome camera (a constant); W_1 is the horizontal resolution of the first PTZ dome camera (a constant) and W_2 that of the second PTZ dome camera (a constant); H_1 is the vertical resolution of the first PTZ dome camera (a constant) and H_2 that of the second PTZ dome camera (a constant);
Step e: when the main direction set in step c is the X axis, solve formula VII with Newton's method to obtain the control parameter Z:
When the main direction set in step c is the Y axis, solve formula VIII with Newton's method to obtain the control parameter Z:
f_x_1(Z), f_x_2(Z), f_y_1(Z), and f_y_2(Z) are all fitted functions obtained through calibration by the camera calibration module.
In this embodiment, the Z control parameter is computed from the PT control parameters already obtained; the computation is presented in the following five steps. Since the computation is identical for the two PTZ dome cameras, the distinction between PTZ dome cameras 1 and 2 is not considered here.
(1) Compute the target's three-dimensional coordinates in the PTZ dome-camera coordinate system after the PT motion. The formula below describes how the rotation matrix R_PT generated by the dome camera's PT motion (the camera undergoes pure rotation, so the t vector is not considered) is used to rotate the four vertices of the target rectangle in the dome-camera coordinate system, giving their three-dimensional coordinates in that system.
(2) Target rectangle reconstruction
Once the PT tracking result shown in Figure 9 has been obtained, the optical axis of the PTZ dome camera can be taken to pass through the center of the target. Because of the projection, the originally defined rectangle is no longer a rectangle (differences in the Z coordinates of the individual points are ignored here), as shown by P′_1P′_2P′_3P′_4 in Figure 10. The present invention therefore selects the maximum coordinate values X_max and Y_max in the X and Y directions, as in the formula, and uses them as parameters to reconstruct the rectangle P″_1P″_2P″_3P″_4. After reconstruction, the X and Y coordinates of the four vertices of P″_1P″_2P″_3P″_4 are (-X_max, Y_max), (X_max, Y_max), (X_max, -Y_max), and (-X_max, -Y_max).
(3) Main direction selection
In this embodiment the imaging resolution of the PTZ dome camera is 704×576, giving the aspect ratio q = W/H = 704/576 = 1.222. When the aspect ratio of the rectangle exceeds q, X is defined as the main direction; otherwise Y is the main direction, as in the following formula.
(4) Solving for the theoretical focal length
In this embodiment the final image of the target is expected to occupy a suitable proportion of the whole PTZ dome-camera image, the proportion being determined by the extent along the main direction, as in Figure 10. Let this proportion be k (k < 1). Once the main direction has been determined, the optimal focal length at which the target's extent along the main direction occupies fraction k of the dome-camera image size can be derived from the triangle geometry of the pinhole imaging model; the formula gives the focal-length derivation for the case where the main direction is X.
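The focal-length relation for main direction X can be sketched as follows, assuming the pinhole form fx * X_max / Z = k * W / 2 (a plausible reading of the triangle geometry described above; the patent's exact formula is not reproduced here, and `required_focal` is a hypothetical helper name).

```python
def required_focal(Xmax, Z, W, k):
    """ASSUMED pinhole relation: the target's half-extent Xmax at depth Z
    should project to k * W/2 pixels, so fx = k * W * Z / (2 * Xmax).
    For main direction Y, substitute Ymax and H for Xmax and W."""
    return k * W * Z / (2.0 * Xmax)
```

For example, a target half-width of 1 m at 10 m depth, W = 704 and k = 0.5, calls for fx = 1760 px.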
(5) Solving for the Z control parameter
From the fitted function f_x(Z) (or f_y(Z)) obtained when the PTZ dome camera was calibrated, construct the corresponding equation and solve it with Newton's method for the value of Z that makes the equation hold; this value is the dome camera's focal-length (zoom) control parameter. When the main direction is Y, the focal length in steps (4) and (5) is replaced by f_y, and W = 704 in the formula is replaced by H = 576.
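A minimal Newton iteration for the zoom parameter, assuming an illustrative quadratic fit fx(Z); the coefficients and the target focal length f_star below are placeholders, not calibration results from the patent.

```python
def newton_solve(g, dg, z0, tol=1e-9, max_iter=100):
    """Newton iteration z <- z - g(z)/dg(z) until the step is below tol."""
    z = z0
    for _ in range(max_iter):
        step = g(z) / dg(z)
        z -= step
        if abs(step) < tol:
            break
    return z

# ASSUMED zoom-to-focal-length fit fx(Z) = p2*Z^2 + p1*Z + p0 with
# illustrative coefficients; f_star is the focal length required by the
# k-ratio step above.
p2, p1, p0 = 0.002, 3.0, 400.0
f_star = 1000.0

def g(z):
    """Residual fx(Z) - f_star; its root is the zoom control value."""
    return p2 * z * z + p1 * z + p0 - f_star

def dg(z):
    return 2.0 * p2 * z + p1

Z = newton_solve(g, dg, z0=100.0)
```

The returned Z satisfies fx(Z) = f_star to the solver's tolerance and is sent to the dome camera as the zoom control parameter.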
This embodiment exploits the flexible architecture of the trinocular vision system to estimate scene depth and, combined with the predicted position of the moving target, to derive the tracking control parameters of the PTZ dome cameras, improving the accuracy with which the PTZ control parameters are obtained.
The tracking module is used to control the pan/tilt angles of the first and second PTZ dome cameras using the pan rotation angle θ_P_1 and tilt rotation angle θ_T_1 of the first PTZ dome camera and the pan rotation angle θ_P_2 and tilt rotation angle θ_T_2 of the second PTZ dome camera obtained by the PT parameter acquisition module;
and, after the focal-length control parameter values of the first and second PTZ dome cameras have been obtained by the Z parameter acquisition module, to set the focal-length parameters of the two cameras, completing the tracking.
From the description of the embodiments above, those skilled in the art will clearly understand that the present invention can be implemented by software plus the necessary general-purpose hardware, or of course by hardware alone, although in many cases the former is the better implementation. On this understanding, the essence of the technical solution of the present invention, or the part that contributes over the prior art, can be embodied in the form of a software product stored on a readable storage medium, such as a computer floppy disk, hard disk, or optical disk, containing instructions that cause a computer device (a personal computer, a server, a network device, etc.) to execute the methods of the various embodiments of the present invention.
Claims (5)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110800524.9A CN113487683B (en) | 2021-07-15 | 2021-07-15 | A Target Tracking System Based on Trinocular Vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113487683A (en) | 2021-10-08 |
CN113487683B (en) | 2023-02-10 |
Family
ID=77939756
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110800524.9A Active CN113487683B (en) | 2021-07-15 | 2021-07-15 | A Target Tracking System Based on Trinocular Vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113487683B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117916684A (en) * | 2021-12-24 | 2024-04-19 | 深圳市大疆创新科技有限公司 | Mobile control method and device for movable platform and movable platform |
CN115713565A (en) * | 2022-12-16 | 2023-02-24 | 盐城睿算电子科技有限公司 | Target positioning method for binocular servo camera |
CN117788781B (en) * | 2024-02-28 | 2024-06-07 | 深圳市易检车服科技有限公司 | Calibration object identification method and device, electronic equipment and storage medium |
CN118247315B (en) * | 2024-05-29 | 2024-08-16 | 深圳天海宸光科技有限公司 | Panoramic target tracking method and device, electronic equipment and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102497543A (en) * | 2012-01-06 | 2012-06-13 | 合肥博微安全电子科技有限公司 | Multi-target tracking method based on DSP and system thereof |
WO2012151777A1 (en) * | 2011-05-09 | 2012-11-15 | 上海芯启电子科技有限公司 | Multi-target tracking close-up shooting video monitoring system |
CN103024350A (en) * | 2012-11-13 | 2013-04-03 | 清华大学 | Master-slave tracking method for binocular PTZ (Pan-Tilt-Zoom) visual system and system applying same |
CN106709953A (en) * | 2016-11-28 | 2017-05-24 | 广东非思智能科技股份有限公司 | Single-point calibration method for multi-target automatic tracking and monitoring system |
CN110415278A (en) * | 2019-07-30 | 2019-11-05 | 中国人民解放军火箭军工程大学 | Master-slave tracking method of linear moving PTZ camera assisted binocular PTZ vision system |
WO2021004548A1 (en) * | 2019-07-08 | 2021-01-14 | 中原工学院 | Vehicle speed intelligent measurement method based on binocular stereo vision system |
Non-Patent Citations (4)
Title |
---|
An Intelligent Object Detection and Measurement System based on Trinocular Vision; Yunpeng Ma et al; IEEE Transactions on Circuits and Systems for Video Technology; 2019-02-03; pp. 1-14 *
Cooperative object tracking using dual-pan-tilt-zoom cameras based on planar ground assumption; Zhigao Cui et al; IET Computer Vision; 2015; vol. 9, no. 1; pp. 149-161 *
Research on detection technology for a trinocular hybrid stereo vision system; Weng Xiangyu; CNKI (China National Knowledge Infrastructure); 2021-04-15; pp. 1-151 *
A binocular PTZ master-slave tracking method using the ground-plane constraint; Cui Zhigao et al; Infrared and Laser Engineering; 2013-08-25; no. 8; pp. 2252-2261 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |