
CN103024350B - Master-slave tracking method of a binocular PTZ vision system and system applying the method - Google Patents

Master-slave tracking method of a binocular PTZ vision system and system applying the method

Info

Publication number
CN103024350B
CN103024350B · Application CN201210454674.XA
Authority
CN
China
Prior art keywords
camera
slave
ptz
master
cameras
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210454674.XA
Other languages
Chinese (zh)
Other versions
CN103024350A (en)
Inventor
周杰 (Zhou Jie)
崔智高 (Cui Zhigao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201210454674.XA priority Critical patent/CN103024350B/en
Publication of CN103024350A publication Critical patent/CN103024350A/en
Application granted granted Critical
Publication of CN103024350B publication Critical patent/CN103024350B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)

Abstract

The present invention proposes a master-slave tracking method for a binocular PTZ vision system and a system applying the method. The master-slave tracking method comprises the following steps: calibrate the two PTZ cameras separately to obtain a camera coordinate system model; establish a spherical coordinate system such that corresponding points of the two PTZ camera coordinate systems share the same longitude on the sphere, with the latitude difference expressing the difference in viewing angle, and compute a transformation matrix for each PTZ camera coordinate system; select the master camera and the slave camera, select the tracking target in the master camera's observed image, and for any master-camera pan-tilt-zoom parameters (P_M, T_M, Z_M) estimate the slave-camera pan-tilt-zoom parameters; output a high-resolution panorama. The master-slave tracking method of the present invention can both monitor large-scale scenes and reduce hardware overhead. The two PTZ cameras of the master-slave tracking system are designed symmetrically, so their roles can be exchanged according to the task, which gives strong flexibility and facilitates later information fusion.

Description

A master-slave tracking method for a binocular PTZ vision system and a system applying the method

Technical Field

The present invention relates to the technical field of computer-vision intelligent surveillance, and in particular to a master-slave tracking system using a binocular PTZ camera vision system and its tracking method.

Background

With the urgent worldwide demand for public and military security, intelligent visual surveillance technology has received more and more attention. In early applications of intelligent visual surveillance the degree of intelligence was very low: the security function relied mainly on people judging events in the surveillance video, so reliability and automation were both poor. With the rapid growth of computing performance and the continuous refinement of computer vision theory, intelligent visual surveillance systems have developed enormously; their prospects are very broad and they have attracted the attention of many researchers and engineers.

A core task of an intelligent visual surveillance system is to track targets of interest in real time. Traditional intelligent visual surveillance systems mostly use static cameras; because the camera's field of view is fixed and its resolution single, high-resolution images of the tracked target cannot be obtained, which makes later retrieval and evidence collection difficult. With improving hardware, active tracking systems based on a single PTZ (pan/tilt/zoom) camera have been widely studied and applied. These systems can keep the target at a large scale in the image centre, but the narrow field of view loses the panoramic information, so it is difficult to perceive the target's position in the scene intuitively.

In response to these shortcomings, multi-camera vision systems that include PTZ cameras have become a research hotspot in intelligent surveillance. Such systems generally work in master-slave mode: the master camera tracks the target in the panorama and controls the PTZ camera to track it actively. Existing master-slave systems include: a static camera combined with a PTZ camera, whose main problem is that the surveillance field of view is limited to that of the static camera and cannot cover large scenes; multiple static cameras combined with PTZ cameras, which enlarges the monitored area but greatly increases hardware cost; and an omnidirectional camera combined with a PTZ camera, which is generally used indoors and, because the omnidirectional camera has low resolution, makes information fusion between the two cameras difficult. Finding a visual tracking method that can both monitor large scenes and reduce hardware overhead is therefore an urgent technical problem.

Summary of the Invention

The present invention aims to solve at least the technical problems existing in the prior art, and innovatively proposes a master-slave tracking system of a binocular PTZ vision system and its tracking method.

To achieve the above object, according to one aspect of the present invention, a master-slave tracking method of a binocular PTZ vision system is provided, comprising the following steps:

S1: calibrate the two PTZ cameras separately to obtain a camera coordinate system model;

S2: establish a spherical coordinate system such that corresponding points of the two PTZ camera coordinate systems share the same longitude on the sphere, with the latitude difference expressing the difference in viewing angle, and compute a transformation matrix for each PTZ camera coordinate system;

S3: select the master camera and the slave camera, and select the tracking target in the master camera's observed image, so that for any master-camera pan-tilt-zoom parameters (P_M, T_M, Z_M) the slave-camera pan-tilt-zoom parameters (P_S^t, T_S^t, Z_S^t) (t = 1...n) are estimated from the target's motion trajectory on the observed image I_M;

S4: output a high-resolution panorama.

The master-slave tracking method of the binocular PTZ vision system of the present invention can both monitor large-scale scenes and reduce hardware overhead.

In a preferred embodiment of the present invention, either of the two cameras serves as the master camera and the other serves as the slave camera.

In another preferred embodiment of the present invention, the two cameras patrol the monitored scene visually according to their surveillance tasks. When either camera finds a suspicious object, it becomes the master camera, tracks the object with a static-camera tracking method, enters master-slave working mode, and controls the other camera as the slave camera to track the suspicious object actively.

In the monitoring of large-scale scenes, the present invention achieves master-slave tracking under arbitrary parameters of the two cameras and obtains a high-resolution panoramic image of the tracked target. Because different surveillance scenes or tasks correspond to different pan-tilt-zoom parameters in practice, tracking under arbitrary parameters has greater practical value and generality. The invention only requires the two cameras to be mounted in fixed positions; it conveniently handles changes of the two cameras' parameters and is not affected by them. At the same time, the invention outputs a high-resolution panorama: while displaying the high-resolution target image it also shows the target's motion in the whole monitored scene more intuitively, which improves visibility and practicality, and the output can be used for later behaviour analysis, posture recognition, gait analysis and so on.

To achieve the above object, according to a second aspect of the present invention, a master-slave tracking system of a binocular PTZ vision system is provided, comprising: a host; a first camera and a second camera, each connected to the host, either of which serves as the master camera while the other serves as the slave camera; an image capture module which exchanges information with the first camera and the second camera; and a first camera control module and a second camera control module, which control the operation and observation modes of the first and second cameras respectively. The first camera and the second camera each further include a target tracking module, a target prediction module, a spherical coordinate model module and a PTZ parameter module. In the master camera, the target tracking module tracks the suspicious object and transmits the tracking result to the target prediction module; the target prediction module, which is connected to the slave camera's spherical coordinate model module, predicts the position of the suspicious object and transmits the prediction to the slave camera; the slave camera's spherical coordinate model module transforms the coordinates and passes the result to the PTZ parameter module to compute the PTZ parameters; and the PTZ parameter module transmits the computed PTZ parameters to the corresponding camera control module.

The two PTZ cameras of the master-slave tracking system of the binocular PTZ vision system of the present invention are designed symmetrically; their roles can be exchanged according to the task, which gives strong flexibility and facilitates later information fusion.

Additional aspects and advantages of the invention will be set forth in part in the description that follows, and in part will be obvious from the description or learned through practice of the invention.

Brief Description of the Drawings

The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:

Fig. 1 is a schematic diagram of the structure of the master-slave tracking system of the binocular PTZ vision system of the present invention;

Fig. 2 is a schematic diagram of the geometric relationship between the PTZ parameters of the two cameras of the present invention;

Fig. 3 is a flow chart of estimating the slave camera parameters in the present invention;

Fig. 4 is a schematic diagram of estimating the slave camera latitude range from the scene depth range;

Fig. 5 is a schematic diagram of pan-tilt parameter estimation;

Fig. 6 shows master-slave tracking and high-resolution panorama results in the first preferred embodiment of the present invention;

Fig. 7 shows master-slave tracking and high-resolution panorama results in the second preferred embodiment of the present invention.

Detailed Description of the Embodiments

Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the drawings, in which the same or similar reference numerals denote the same or similar elements or elements with the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary and only intended to explain the present invention; they should not be construed as limiting it.

In the description of the present invention, unless otherwise specified and limited, the terms "mounted", "linked" and "connected" should be understood broadly: for example, a connection may be mechanical or electrical, may be internal communication between two elements, and may be direct or indirect through an intermediary. Those of ordinary skill in the art can understand the specific meanings of these terms according to the specific situation.

The present invention provides a master-slave tracking system of a binocular PTZ vision system. As shown in Fig. 1, it comprises a host (not shown in the figure), a first camera and a second camera, each connected to the host; either of the two cameras serves as the master camera and the other as the slave camera. The system further comprises an image capture module, a first camera control module and a second camera control module; the image capture module exchanges information with the first and second cameras, the first camera control module controls the operation and observation mode of the first camera, and the second camera control module controls the operation and observation mode of the second camera. The first camera and the second camera each further include a target tracking module, a target prediction module, a spherical coordinate model module and a PTZ parameter module. In the master camera, the target tracking module tracks the suspicious object and transmits the result to the target prediction module; the target prediction module, connected to the slave camera's spherical coordinate model module, predicts the object's position and transmits the prediction to the slave camera; the slave camera's spherical coordinate model module transforms the coordinates and passes the result to the PTZ parameter module to compute the PTZ parameters, which the PTZ parameter module transmits to the corresponding camera control module.

In the present invention the two cameras patrol the monitored scene visually according to their surveillance tasks. When either camera finds a suspicious object, it acts as the master camera, tracks the object with a static-camera tracking method, enters master-slave working mode, and controls the other camera as the slave camera to track the suspicious object actively.

The present invention uses binocular PTZ cameras to realise master-slave tracking. This hardware combination is chosen because the PTZ camera is the simplest active camera: the viewing angle can be changed via the pan and tilt parameters and the image resolution via the zoom parameter; it is highly integrated, mature products are available, and with falling hardware cost it is used more and more in practice. Moreover, with two PTZ cameras, their symmetry allows their roles to be exchanged according to the task, which gives strong flexibility and helps later information fusion. At the same time, because the camera parameters are variable and controllable, the monitored field of view can be switched by switching the pan-tilt-zoom parameters, so a larger monitored field of view is obtained with the fewest cameras.

The present invention also provides a master-slave tracking method of a binocular PTZ vision system, which may, but need not, use the tracking system shown in Fig. 1. The master-slave tracking method comprises the following steps:

S1: calibrate the two PTZ cameras separately to obtain a camera coordinate system model;

S2: establish a spherical coordinate system such that corresponding points of the two PTZ camera coordinate systems share the same longitude on the sphere, with the latitude difference expressing the difference in viewing angle, and compute a transformation matrix for each PTZ camera coordinate system;

S3: select the master camera and the slave camera, and select the tracking target in the master camera's observed image, so that for any master-camera pan-tilt-zoom parameters (P_M, T_M, Z_M) the slave-camera pan-tilt-zoom parameters (P_S^t, T_S^t, Z_S^t) (t = 1...n) are estimated from the target's motion trajectory on the observed image I_M;

S4: output a high-resolution panorama.

In a preferred embodiment of the present invention, the following steps are carried out in sequence in the host:

In the first step, the two PTZ cameras are calibrated separately to obtain the camera coordinate system model. In this embodiment, feature point matching is used to calibrate, for each PTZ camera, the extrinsic matrix R and the intrinsic matrix K, where the extrinsic matrix R depends only on the pan and tilt parameters and the rotation axes of the pan and tilt parameters intersect perpendicularly at the camera centre. The intrinsic matrix K is calibrated as follows:

First, set the camera's pixel aspect ratio a_r and skew S. In this embodiment a_r = 1 and S = 0, and image distortion is not considered.

Then use the zoom centre in place of the principal point (u_0, v_0): keep the camera's pan and tilt parameters fixed, obtain an image sequence by changing the zoom parameter, extract SIFT feature points from each image, match the feature points between adjacent frames, and finally obtain the zoom centre by the least-squares method.

Finally, build a model of the focal length f as a function of the zoom parameter: for each fixed zoom value, obtain several images by slightly changing the pan and tilt parameters, estimate the focal length f via image registration, and construct the model f = f_z(zoom) from the resulting discrete samples {zoom, f}.
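
As an illustration of how the discrete samples {zoom, f} might be turned into the model f = f_z(zoom), the sketch below fits a low-order polynomial with NumPy; the sample values and the polynomial degree are placeholders, not values from the patent:

```python
import numpy as np

# Hypothetical {zoom, f} samples obtained offline by image registration at each
# fixed zoom setting (placeholder values, not measurements from the patent).
zoom_samples = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
f_samples = np.array([800.0, 1100.0, 1500.0, 2100.0, 2900.0, 4000.0])

# One simple choice for f = f_z(zoom): a low-order polynomial fit.
coeffs = np.polyfit(zoom_samples, f_samples, deg=3)

def f_z(zoom):
    """Estimated focal length (in pixels) for a given zoom value."""
    return float(np.polyval(coeffs, zoom))

print(f_z(5.0))  # focal length predicted for an intermediate zoom setting
```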

After these steps, the coordinate system model of the PTZ camera can be expressed as:

$$\tilde{x} = \kappa\, K(\text{zoom})\, R(\text{pan},\text{tilt})\, X = \kappa \begin{bmatrix} f_z(\text{zoom}) & 0 & u_0 \\ 0 & f_z(\text{zoom}) & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos(\text{pan}) & 0 & \sin(\text{pan}) \\ -\sin(\text{pan})\sin(\text{tilt}) & \cos(\text{tilt}) & \cos(\text{pan})\sin(\text{tilt}) \\ -\sin(\text{pan})\cos(\text{tilt}) & -\sin(\text{tilt}) & \cos(\text{pan})\cos(\text{tilt}) \end{bmatrix} X$$

where κ is a scale factor, $\tilde{x}$ denotes the homogeneous image coordinates, and X denotes the corresponding coordinates in the camera coordinate system.
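
A minimal sketch of this imaging model in Python/NumPy is given below; it assumes the f_z(zoom) model and principal point (u_0, v_0) obtained above, and the helper names are illustrative:

```python
import numpy as np

def K_matrix(zoom, u0, v0, f_z):
    """Intrinsic matrix K(zoom) with principal point (u0, v0)."""
    f = f_z(zoom)
    return np.array([[f, 0.0, u0],
                     [0.0, f, v0],
                     [0.0, 0.0, 1.0]])

def R_matrix(pan, tilt):
    """Rotation R(pan, tilt) exactly as written in the model above (radians)."""
    cp, sp = np.cos(pan), np.sin(pan)
    ct, st = np.cos(tilt), np.sin(tilt)
    return np.array([[cp,       0.0, sp],
                     [-sp * st, ct,  cp * st],
                     [-sp * ct, -st, cp * ct]])

def project(X, pan, tilt, zoom, u0, v0, f_z):
    """Map a point X in the camera coordinate system to homogeneous image
    coordinates; dividing by the last component removes the scale factor kappa."""
    x = K_matrix(zoom, u0, v0, f_z) @ R_matrix(pan, tilt) @ np.asarray(X, float)
    return x / x[2]
```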

The second step: establish a spherical coordinate system such that corresponding points of the two PTZ camera coordinate systems share the same longitude on the sphere, with the latitude difference expressing the difference in viewing angle, and compute a transformation matrix for each PTZ camera coordinate system. The specific steps are as follows:

First, manually collect N pairs of images of the two cameras in the same monitored scene, keeping the fields of view of each pair essentially consistent so as to obtain more matching feature point pairs, and making the collected images cover as much of the monitored scene as possible.

Then extract and match SIFT feature points for each pair of images, estimate the fundamental matrix F_j (j = 1, ..., N) of each pair with the RANSAC method, and record the matched feature point pairs, where j indexes the j-th image pair and k the k-th feature point pair of the j-th image pair.

Next, estimate the pole pair {E_1, E_2} of the spherical coordinate system: since F_j E_1 = 0 and E_2^T F_j = 0 for every j, define A_1 = [F_1^T, F_2^T, ..., F_N^T]^T and A_2 = [F_1, F_2, ..., F_N], and solve A_1 E_1 = 0 and E_2^T A_2 = 0 by singular value decomposition to obtain {E_1, E_2}.
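
A sketch of this pole estimation is shown below; it assumes each F_j has already been estimated (for example with cv2.findFundamentalMat and its RANSAC flag) and that F_j maps points of the first camera to epipolar lines of the second:

```python
import numpy as np

def estimate_poles(F_list):
    """Estimate the pole pair {E1, E2} shared by all fundamental matrices F_j.

    E1 satisfies F_j @ E1 = 0 for every j and E2 satisfies E2^T @ F_j = 0, so
    each pole is the common null direction of a stacked matrix, found by SVD.
    """
    A1 = np.vstack(F_list)                 # rows of every F_j annihilate E1
    A2 = np.vstack([F.T for F in F_list])  # rows of every F_j^T annihilate E2
    E1 = np.linalg.svd(A1)[2][-1]          # right singular vector, smallest sigma
    E2 = np.linalg.svd(A2)[2][-1]
    return E1 / np.linalg.norm(E1), E2 / np.linalg.norm(E2)
```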

Then estimate the reference zero-longitude line of each spherical coordinate system: using the camera imaging model from the first step, transform the matched feature point pairs of each image pair into the respective camera coordinate systems; arbitrarily select an iso-longitude line on each camera's sphere as the reference zero-longitude line; in the spherical coordinate systems determined by these lines and the estimated poles, convert the points to longitude-latitude coordinates, compute the absolute deviation of the longitude components of all feature points, and adjust one of the reference zero-longitude lines until the mean deviation is minimal, which yields the final reference zero-longitude lines n_{1_ref} and n_{2_ref}.

Finally, let Cross_E1_n1_ref = E_1 × n_{1_ref} and Cross_E2_n2_ref = E_2 × n_{2_ref}, and obtain the transformation matrices R_1 = [E_1, n_{1_ref}, Cross_E1_n1_ref] and R_2 = [E_2, n_{2_ref}, Cross_E2_n2_ref].

The third step: select the master camera and the slave camera, select the tracking target in the master camera's observed image, and for any master-camera pan-tilt-zoom parameters (P_M, T_M, Z_M) estimate the slave-camera pan-tilt-zoom parameters (P_S^t, T_S^t, Z_S^t) (t = 1...n) from the target's motion trajectory on the observed image I_M.

Since the first and second steps can generally be performed offline, the estimated transformation matrices R_1 and R_2 do not change as long as the two cameras remain fixed. In actual online use, because of the symmetry of the two cameras, either may act as the master camera; when one acts as the master, the other acts as the slave. The two cameras patrol the monitored scene visually according to their surveillance tasks; when either camera finds a suspicious object, it becomes the master camera, tracks the object with a static-camera tracking method, enters master-slave working mode, and controls the other camera as the slave to track the suspicious object actively. For convenience, one camera is taken as the master and denoted Cam-M, and the other as the slave and denoted Cam-S; the transformation matrix from Cam-M's camera coordinate system to the spherical coordinate system is R_M, and that from Cam-S's camera coordinate system to the spherical coordinate system is R_S. The geometric relationship between the two cameras is shown in Fig. 2. Let I_M be the image observed by the master camera Cam-M under parameters (P_M, T_M, Z_M), let p_{M→P}^t be the observation of the tracked target P on image I_M at time t, and let (P_S^t, T_S^t, Z_S^t) be the parameters that place the tracked target P at the centre of Cam-S's observed image at time t. The specific operations of the third step are as follows:

First, select the tracking target on the observed image I_M of the master camera Cam-M and track it frame by frame with the Mean-shift tracking algorithm provided by OpenCV. To compensate for the delay introduced by image processing and the camera's mechanical motion, the Kalman filter provided by OpenCV is used to predict the target position; let the predicted target centre at time t be $\tilde{p}_{M\to P}^t$.
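
A minimal sketch of this tracking-plus-prediction loop with the OpenCV primitives named above might look as follows; the noise covariances and the constant-velocity state model are illustrative choices, not values prescribed by the patent:

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter over (x, y, vx, vy) used to predict the
# target centre and absorb processing / mechanical delays.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
kf.errorCovPost = np.eye(4, dtype=np.float32)

def track_frame(frame_hsv, track_window, roi_hist):
    """One Mean-shift step on the back-projected hue histogram, followed by a
    Kalman correction; returns the predicted centre and the updated window."""
    back_proj = cv2.calcBackProject([frame_hsv], [0], roi_hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, track_window = cv2.meanShift(back_proj, track_window, criteria)
    x, y, w, h = track_window
    kf.correct(np.array([[x + w / 2.0], [y + h / 2.0]], np.float32))
    predicted_centre = kf.predict()[:2].ravel()   # predicted centre for the next frame
    return predicted_centre, track_window
```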

Then estimate the parameter values (P_S^t, T_S^t, Z_S^t) of the slave camera Cam-S at each time instant, as shown in Fig. 3, using the following steps:

Step 1: transform the observed image coordinates of the master camera Cam-M into camera coordinates. Let $\tilde{p}_{M\to P}^t$ be the image coordinates corresponding to the predicted target position at time t. Using the monocular PTZ camera coordinate system model from the first step, the ray determined by $\tilde{p}_{M\to P}^t$ in the camera coordinate system can be computed as:

$$Y_M^t = \kappa\, R^{-1}(P_M, T_M)\, K^{-1}(Z_M)\, \tilde{p}_{M\to P}^t$$

where Y_M^t is the normalised coordinate of the predicted target position at time t in the master camera's coordinate system, $\tilde{p}_{M\to P}^t$ denotes its homogeneous image coordinates, κ is a scale factor chosen to satisfy the normalisation, and (P_M, T_M, Z_M) are the parameters of the master camera Cam-M.

Step 2: transform the master camera's camera coordinates into the master camera's spherical coordinates. Transform Y_M^t into the spherical coordinate system corresponding to the master camera Cam-M, and compute the longitude α_M^t and latitude β_M^t of the predicted target position at time t in the master camera's spherical coordinate system as follows:

$$Y_{M\to r}^t = R_M\, Y_M^t$$

$$\alpha_M^t = \operatorname{atan2}\!\big(Y_{M\to r}^t(3),\; Y_{M\to r}^t(2)\big)$$

$$\beta_M^t = \operatorname{acos}\!\big(Y_{M\to r}^t(1)\big)$$

where R_M is the transformation matrix from Cam-M's camera coordinate system to the spherical coordinate system, Y_{M→r}^t is the Cartesian coordinate of the target on the corresponding spherical coordinate system, and Y_{M→r}^t(n) denotes the n-th element of the vector Y_{M→r}^t.
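
The two transformations above (image coordinates to camera coordinates, then to spherical longitude/latitude) can be sketched together as follows; K and R are the master camera's intrinsic and rotation matrices at its current parameters, and the function name is illustrative:

```python
import numpy as np

def master_image_to_longlat(p_img, K, R, R_M):
    """Map an image observation (u, v) of the target to longitude/latitude in
    the master camera's spherical system. K = K(Z_M), R = R(P_M, T_M)."""
    p = np.array([p_img[0], p_img[1], 1.0])       # homogeneous pixel coordinates
    Y_M = np.linalg.inv(R) @ np.linalg.inv(K) @ p
    Y_M /= np.linalg.norm(Y_M)                    # normalised ray (kappa removed)
    Y_r = R_M @ Y_M                               # into the spherical frame
    alpha = np.arctan2(Y_r[2], Y_r[1])            # longitude alpha_M^t
    beta = np.arccos(np.clip(Y_r[0], -1.0, 1.0))  # latitude beta_M^t
    return alpha, beta
```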

Step 3: map the master camera Cam-M's spherical coordinates to the slave camera Cam-S's spherical coordinate system. The purpose of this step is to estimate the longitude and latitude (α_S^t, β_S^t) of the corresponding point in the slave camera's spherical coordinate system from the longitude and latitude (α_M^t, β_M^t) in the master camera's spherical coordinate system. Since the spherical coordinate systems established in the second step already make the longitudes of corresponding points agree, only β_S^t needs to be estimated. Assume for the moment that the target depth at time t is known and denote it D_t; the depth is defined as the distance from the target P to the baseline O_M O_S at time t, the perpendicular meeting the baseline at point O_t, and b is the baseline length of the two cameras, as shown in Fig. 3.

Let x_M^t and x_S^t be the distances from O_M and O_S to O_t along the baseline. Then the following geometric relations hold:

$$\tan\beta_M^t = \frac{D_t}{x_M^t}$$

$$\tan(\pi - \beta_S^t) = \frac{D_t}{x_S^t}$$

$$b = x_M^t + x_S^t$$

which gives:

$$\beta_S^t = \operatorname{atan}\!\left(\frac{-D_t \cdot \tan\beta_M^t}{b \cdot \tan\beta_M^t - D_t}\right)$$

Because the depth D_t of the target in the scene is unknown, the latitude β_S^t of the target in the slave camera Cam-S's spherical coordinate system cannot be estimated exactly. In this embodiment, computing the target depth D_t and computing the target's latitude in Cam-S's spherical coordinate system are essentially equivalent, so the problem can be treated as a chicken-and-egg problem. Considering that in large-scene surveillance the baseline length b of the two cameras is very small relative to the scene depth, a scene depth range [D_min, D_max] can be given and the corresponding latitudes β_{S→max}^t and β_{S→min}^t estimated separately, as shown in Fig. 4.

Since the scene depth is much larger than the distance between the two cameras, the difference between β_{S→max}^t and β_{S→min}^t is very small. Moreover, considering the real-time requirements of master-slave tracking, a linear weighting is used to estimate the slave-camera latitude value β_S^t, computed as follows:

$$\beta_{S\to\max}^t = \operatorname{atan}\!\left(\frac{-D_{\min}\cdot\tan\beta_M^t}{b\cdot\tan\beta_M^t - D_{\min}}\right)$$

$$\beta_{S\to\min}^t = \operatorname{atan}\!\left(\frac{-D_{\max}\cdot\tan\beta_M^t}{b\cdot\tan\beta_M^t - D_{\max}}\right)$$

$$\beta_S^t = \lambda\cdot\beta_{S\to\max}^t + \gamma\cdot\beta_{S\to\min}^t$$

where λ and γ are weighting coefficients satisfying λ + γ = 1.
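
The depth-range weighting above can be sketched as follows; the branch correction that keeps the latitude in [0, π) is an assumption added on top of the formula as printed, and λ = 0.5 is only a default:

```python
import numpy as np

def slave_latitude(beta_M, b, D_min, D_max, lam=0.5):
    """Latitude of the target in the slave camera's spherical system, estimated
    from the master latitude beta_M, the baseline length b and an assumed
    scene-depth range [D_min, D_max]; lam is the weighting coefficient lambda."""
    def beta_for_depth(D):
        beta = np.arctan(-D * np.tan(beta_M) / (b * np.tan(beta_M) - D))
        return beta if beta >= 0.0 else beta + np.pi   # keep latitude in [0, pi)
    beta_max = beta_for_depth(D_min)   # latitude implied by the nearest depth
    beta_min = beta_for_depth(D_max)   # latitude implied by the farthest depth
    return lam * beta_max + (1.0 - lam) * beta_min     # gamma = 1 - lambda
```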

Step 4: transform the slave camera Cam-S's spherical coordinates into the slave camera's camera coordinates and estimate the slave camera's parameters (P_S^t, T_S^t, Z_S^t). First transform the longitude-latitude coordinates (α_S^t, β_S^t) in Cam-S's spherical coordinate system into Cam-S's camera coordinate system; this is essentially the inverse of Step 2, as written below, where Y_{S→r}^t is the Cartesian coordinate of the target in Cam-S's spherical coordinate system, R_S^{-1} is the inverse of the transformation matrix R_S from Cam-S's camera coordinate system to the spherical coordinate system, and Y_S^t is the coordinate of the target in the Cam-S camera coordinate system:

$$Y_{S\to r}^t(1) = \cos\beta_S^t$$

$$Y_{S\to r}^t(2) = \sin\beta_S^t \cos\alpha_S^t$$

$$Y_{S\to r}^t(3) = \sin\beta_S^t \sin\alpha_S^t$$

$$Y_S^t = R_S^{-1}\, Y_{S\to r}^t$$

where Y_{S→r}^t is the Cartesian coordinate of the predicted target position at time t in the slave camera's spherical coordinate system.

According to the physical meaning of the pan-tilt parameters in the camera coordinate system, given an observed point Y_S^t in the slave camera Cam-S's camera coordinate system, the corresponding pan-tilt parameters P_S^t and T_S^t can be computed such that, when Cam-S moves to these parameter values, the observation of the point in the image lies at the principal point (i.e. the optical axis passes through it), as shown in Fig. 5. The zoom parameter Z_S^t can be chosen according to the actual application; in this embodiment a preferred value is used.
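
The spherical-to-camera transformation and the pan-tilt extraction above can be sketched as follows; the angle extraction assumes the R(pan, tilt) parameterisation written in the first step and ignores degenerate configurations, so it is an illustration rather than the patent's exact procedure:

```python
import numpy as np

def slave_pan_tilt(alpha_S, beta_S, R_S):
    """Pan/tilt (radians) that place the target on the slave camera's optical
    axis, given its slave-spherical longitude/latitude and the transform R_S."""
    Y_r = np.array([np.cos(beta_S),
                    np.sin(beta_S) * np.cos(alpha_S),
                    np.sin(beta_S) * np.sin(alpha_S)])
    Y_S = np.linalg.inv(R_S) @ Y_r           # direction in the slave camera frame
    Y_S /= np.linalg.norm(Y_S)
    tilt = np.arcsin(-Y_S[1])                # chosen so that R(pan, tilt) @ Y_S
    pan = np.arctan2(-Y_S[0], Y_S[2])        # points along the optical (+z) axis
    return pan, tilt
```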

Finally, during tracking, compute the distance between the centre of the tracked target and the image boundary of the master camera; when this distance is smaller than a threshold T, tracking ends and the two cameras return to their preset positions.

The fourth step: output the high-resolution panorama. Let I_M^t be the image captured by the master camera Cam-M at time t and I_S^t the image captured by the slave camera Cam-S at time t. This step comprises the following operations:

First, compute a background model I_MB^t for the master camera Cam-M's image I_M^t; the update formula at pixel (x, y) is:

$$I_{MB}^t(x,y) = (1-\alpha)\, I_{MB}^{t-1}(x,y) + \alpha\, I_M^t(x,y)$$

where α is the update coefficient, taken as 0.05 in this embodiment, and the initial background model is I_MB^0(x,y) = I_M^0(x,y). If |I_M^t(x,y) − I_MB^t(x,y)| > th, then pixel (x, y) belongs to the foreground region; otherwise it belongs to the background region, where th is a comparison threshold, taken as 20 in this embodiment.
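
A sketch of this background/foreground separation for grayscale master-camera frames is given below, using the α and th values stated above; I_bg should be initialised with the first frame I_M^0:

```python
import numpy as np

def update_background(I_t, I_bg, alpha=0.05, th=20):
    """Running-average background model and foreground mask for one frame.

    I_t  : current grayscale frame of the master camera (2-D array)
    I_bg : background model from the previous time step (float array)
    """
    I_t = I_t.astype(np.float32)
    I_bg = (1.0 - alpha) * I_bg + alpha * I_t   # per-pixel update formula above
    foreground = np.abs(I_t - I_bg) > th        # True where the pixel is foreground
    return I_bg, foreground
```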

Then register the image I_M^t captured by the master camera Cam-M at time t and the image I_S^t captured by the slave camera at time t, using a method that combines feature points and direct pixels; since the baseline length is negligible relative to the depth of the monitored scene, an affine model can be chosen as the registration model.
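
As a sketch of the feature-point part of this registration (the direct-pixel refinement is omitted), an affine model between synchronous master and slave frames could be estimated with OpenCV roughly as follows; the matcher and RANSAC settings are illustrative assumptions:

```python
import cv2
import numpy as np

def estimate_affine_registration(img_master, img_slave):
    """Estimate a 3x3 (homogeneous) affine model mapping master-image points to
    slave-image points from matched SIFT features, robustified with RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_master, None)
    kp2, des2 = sift.detectAndCompute(img_slave, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return np.vstack([M, [0.0, 0.0, 1.0]])     # homogeneous form of A_MS^t
```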

Next, using the obtained synchronous-frame registration model and the background and foreground regions of the master camera Cam-M's image I_M^t, compute the background and foreground regions of I_S^t. Then set the size of the high-resolution panorama I_H to k times the original image size, and let A_MS^t denote the registration model between the master camera's low-resolution image I_M^t and the slave camera's high-resolution image I_S^t; the registration model between I_H and I_S^t is then:

$$A^t = A_{MS}^t \times \begin{bmatrix} 1/k & 0 & 1-1/k \\ 0 & 1/k & 1-1/k \\ 0 & 0 & 1 \end{bmatrix}$$

Map the background region of each slave-camera high-resolution image I_S^t into I_H in turn, updating overlapping regions with an attenuation factor of 0.5.
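
One possible sketch of this background mapping for colour frames is shown below; it assumes A^t (the composition above) maps panorama pixels to slave-image pixels, which is why the warp uses WARP_INVERSE_MAP, and the 0.5 blend follows the attenuation factor stated above:

```python
import cv2
import numpy as np

def map_slave_background(I_S, mask_bg, A_MS, k, panorama):
    """Warp the background region of the slave high-resolution frame I_S into
    the k-times-larger panorama, blending overlaps with a 0.5 attenuation."""
    S = np.array([[1.0 / k, 0.0, 1.0 - 1.0 / k],
                  [0.0, 1.0 / k, 1.0 - 1.0 / k],
                  [0.0, 0.0, 1.0]])
    A = A_MS @ S                                       # registration model A^t
    H, W = panorama.shape[:2]
    warped = cv2.warpPerspective(I_S, A, (W, H), flags=cv2.WARP_INVERSE_MAP)
    warped_mask = cv2.warpPerspective(mask_bg.astype(np.uint8), A, (W, H),
                                      flags=cv2.WARP_INVERSE_MAP) > 0
    filled = panorama.sum(axis=2) > 0                  # pixels already written
    fresh = warped_mask & ~filled
    overlap = warped_mask & filled
    old = panorama.copy()
    panorama[fresh] = warped[fresh]
    panorama[overlap] = (0.5 * old[overlap] + 0.5 * warped[overlap]).astype(panorama.dtype)
    return panorama
```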

Finally, map the foreground region of each slave-camera high-resolution image I_S^t into I_H in turn, generating a high-resolution panorama at each time instant. Fig. 6 and Fig. 7 show master-slave tracking and high-resolution panorama results in preferred embodiments of the present invention. In Fig. 6a the left camera acts as the master camera and the right camera as the slave camera; the master camera's parameters are pan = -63.52, tilt = -11.32, zoom = 11.50. Fig. 6b shows the high-resolution panorama obtained by mapping four foreground frames from the slave camera onto one background. In Fig. 7a the right camera acts as the master camera and the left camera as the slave camera; the master camera's parameters are pan = -90.66, tilt = -13.50, zoom = 10.00. Fig. 7b shows the high-resolution panorama obtained by mapping four foreground frames from the slave camera onto one background.

Compared with master-slave tracking systems that combine a static camera with an active camera, the present invention enlarges the monitored range; compared with systems of multiple static cameras plus active cameras, it reduces hardware cost; and compared with systems combining an omnidirectional camera with an active camera, it is more favourable for information fusion. The invention designs a master-slave control method based on a spherical coordinate model, which conveniently realises master-slave tracking of the two cameras under arbitrary pan-tilt-zoom parameters and multi-scale visual attention to the target. Using the registration model between videos of different resolutions in the master-slave tracking process, a layered processing method yields a high-resolution panoramic output, and the processed results can be used for applications such as criminal evidence collection, surveillance record keeping and behaviour analysis of moving targets.

In the description of this specification, references to the terms "one embodiment", "some embodiments", "example", "specific example" or "some examples" mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, the schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.

Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and spirit of the present invention; the scope of the invention is defined by the claims and their equivalents.

Claims (7)

1. A master-slave tracking method of a binocular PTZ vision system, characterized by comprising the following steps:
S1: calibrating the two PTZ cameras respectively to obtain a camera coordinate system model;
S2: establishing a spherical coordinate system, keeping the longitude values of corresponding points of the two PTZ camera coordinate systems on the spherical coordinate system consistent, expressing the viewing-angle difference by the latitude difference, and respectively solving a transformation matrix for the two PTZ camera coordinate systems;
S3: selecting the master camera and the slave camera, and selecting a tracking target on the observation image of the master camera, so that for any master-camera pan-tilt-zoom parameters (P_M, T_M, Z_M) the slave-camera pan-tilt-zoom parameters (P_S^t, T_S^t, Z_S^t) are estimated from the target's motion track on the observation image I_M; in step S3, one of the two cameras is used as the master camera and the other camera as the slave camera, and the two cameras respectively perform visual inspection in the monitored scene according to the surveillance task; when either camera finds a suspicious object, it tracks the suspicious object as the master camera using a tracking method for a stationary camera, enters master-slave working mode, and controls the other camera as the slave camera to actively track the suspicious object;
S4: outputting the high-resolution panoramic image.
2. The master-slave tracking method of the binocular PTZ vision system according to claim 1, wherein step S1 comprises calibrating an extrinsic matrix R and an intrinsic matrix K for each of the two PTZ cameras respectively, the extrinsic matrix R being related only to the pan and tilt parameters, and the rotation axes of the pan and tilt parameters intersecting perpendicularly at the camera center.
3. The master-slave tracking method of the binocular PTZ vision system according to claim 2, wherein the step of calibrating the intrinsic matrix K is:
S31: setting the pixel aspect ratio a_r and skew S of the camera;
S32: using the zoom center instead of the principal point (u_0, v_0), keeping the pan and tilt parameters of the camera unchanged, obtaining an image sequence by changing the zoom parameter, extracting SIFT feature points from each image, matching the feature points in two adjacent frames of images, and finally obtaining the zoom center by a least-squares method;
S33: constructing a model of the variation of the focal length f with the zoom parameter.
4. The master-slave tracking method of the binocular PTZ vision system according to claim 1, wherein step S2 comprises the following steps:
S41: manually acquiring N pairs of images of the two cameras in the same monitored scene, keeping the field of view of each pair of images basically consistent;
S42: extracting and matching SIFT feature points of each pair of images respectively, estimating a fundamental matrix F_j (j = 1, ..., N) of each pair of images, and recording the matched feature point pairs, where j is the j-th pair of images of the N pairs, k is the k-th pair of feature points of the j-th pair of images, and N is a positive integer;
S43: estimating the pole pair {E_1, E_2} of the spherical coordinate system;
S44: estimating the reference zero-longitude line of the spherical coordinate system;
S45: obtaining the transformation matrices R_1, R_2.
5. The master-slave tracking method of the binocular PTZ vision system according to claim 1, wherein step S3 comprises the following steps:
S71: selecting a target to be tracked in the observation image of the master camera;
S72: estimating the parameter values (P_S^t, T_S^t, Z_S^t) of the slave camera at each moment in time;
S73: during tracking, calculating the distance between the center position of the tracked target and the image boundary, finishing tracking when the distance is less than a threshold T, and restoring the two cameras to their preset positions.
6. The master-slave tracking method of the binocular PTZ vision system according to claim 5, wherein step S72 comprises the following steps:
S81: transforming the coordinates of the observation image of the master camera to the camera coordinates of the master camera;
S82: transforming the camera coordinates of the master camera to the spherical coordinates of the master camera;
S83: mapping the spherical coordinates of the master camera to the spherical coordinates of the slave camera;
S84: transforming the slave-camera spherical coordinates to slave-camera coordinates and estimating the slave camera parameters (P_S^t, T_S^t, Z_S^t).
7. The master-slave tracking method of the binocular PTZ vision system according to claim 1, wherein step S4 comprises the following steps:
S91: computing a background model I_MB^t for the image I_M^t of the master camera, the update formula at pixel (x, y) being I_MB^t(x,y) = (1-α) I_MB^{t-1}(x,y) + α I_M^t(x,y),
where α is an update coefficient and the initial background model is I_MB^0(x,y) = I_M^0(x,y); if |I_M^t(x,y) - I_MB^t(x,y)| > th then pixel (x, y) belongs to the foreground region, otherwise the pixel belongs to the background region, where th is a comparison threshold;
S92: registering the image I_M^t captured by the master camera Cam-M at time t and the image I_S^t captured by the slave camera at time t using a method combining feature points and direct pixels;
S93: from the synchronized-frame registration model obtained in step S92 and the background and foreground regions of the master camera image I_M^t obtained in step S91, computing the background region and foreground region of I_S^t;
S94: setting the size of the high-resolution panorama I_H to k times the original image size; letting A_MS^t represent the registration model between the low-resolution image I_M^t of the master camera and the high-resolution image I_S^t of the slave camera, the registration model between I_H and I_S^t is A^t = A_MS^t × [1/k 0 1-1/k; 0 1/k 1-1/k; 0 0 1];
mapping the background region of each slave-camera high-resolution image I_S^t into I_H in turn, updating the overlap region with an attenuation factor;
S95: mapping the foreground region of each slave-camera high-resolution image I_S^t into I_H in turn, generating a high-resolution panorama at each time.
CN201210454674.XA 2012-11-13 2012-11-13 Master-slave tracking method of a binocular PTZ vision system and system applying the method Active CN103024350B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210454674.XA CN103024350B (en) 2012-11-13 2012-11-13 Master-slave tracking method of a binocular PTZ vision system and system applying the method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210454674.XA CN103024350B (en) 2012-11-13 2012-11-13 Master-slave tracking method of a binocular PTZ vision system and system applying the method

Publications (2)

Publication Number Publication Date
CN103024350A CN103024350A (en) 2013-04-03
CN103024350B true CN103024350B (en) 2015-07-29

Family

ID=47972429

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210454674.XA Active CN103024350B (en) 2012-11-13 2012-11-13 A kind of principal and subordinate's tracking of binocular PTZ vision system and the system of application the method

Country Status (1)

Country Link
CN (1) CN103024350B (en)

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824278B (en) * 2013-12-10 2016-09-21 清华大学 The scaling method of CCTV camera and system
CN103826071A (en) * 2014-03-11 2014-05-28 深圳市中安视科技有限公司 Three-dimensional camera shooting method for three-dimensional identification and continuous tracking
JP6371849B2 (en) * 2014-07-24 2018-08-08 国立研究開発法人科学技術振興機構 Image alignment apparatus, image alignment method, and image alignment program
CN105354813B (en) * 2014-08-18 2018-11-23 杭州海康威视数字技术股份有限公司 Holder is driven to generate the method and device of stitching image
CN105141841B (en) * 2015-08-25 2018-05-08 上海兆芯集成电路有限公司 Picture pick-up device and its method
CN105335977B (en) * 2015-10-28 2018-05-25 苏州科达科技股份有限公司 The localization method of camera system and target object
CN106534693B (en) * 2016-11-25 2019-10-25 努比亚技术有限公司 A kind of photo processing method, device and terminal
CN108010089B (en) * 2017-12-22 2021-09-07 中国人民解放军火箭军工程大学 A high-resolution image acquisition method based on binocular motion camera
CN108377368A (en) * 2018-05-08 2018-08-07 扬州大学 A kind of one master and multiple slaves formula intelligent video monitoring apparatus and its control method
CN108683849B (en) * 2018-05-15 2021-01-08 维沃移动通信有限公司 Image acquisition method and terminal
CN109460077B (en) * 2018-11-19 2022-05-17 深圳博为教育科技有限公司 Automatic tracking method, automatic tracking equipment and automatic tracking system
CN111599018B (en) * 2019-02-21 2024-05-28 浙江宇视科技有限公司 Target tracking method and system, electronic equipment and storage medium
CN110415278B (en) * 2019-07-30 2020-04-17 中国人民解放军火箭军工程大学 Master-slave tracking method of auxiliary binocular PTZ (Pan-Tilt-zoom) visual system of linear moving PTZ (pan-Tilt-zoom) camera
CN110991306B (en) * 2019-11-27 2024-03-08 北京理工大学 Self-adaptive wide-field high-resolution intelligent sensing method and system
CN111131697B (en) * 2019-12-23 2022-01-04 北京中广上洋科技股份有限公司 Multi-camera intelligent tracking shooting method, system, equipment and storage medium
CN111355926B (en) * 2020-01-17 2022-01-11 高新兴科技集团股份有限公司 Linkage method of panoramic camera and PTZ camera, storage medium and equipment
CN111526280A (en) * 2020-03-23 2020-08-11 深圳市大拿科技有限公司 Control method and device of camera device, electronic equipment and storage medium
CN111698467B (en) * 2020-05-08 2022-05-06 北京中广上洋科技股份有限公司 Intelligent tracking method and system based on multiple cameras
CN113487677B (en) * 2021-06-07 2024-04-12 电子科技大学长三角研究院(衢州) Outdoor medium-long distance scene calibration method based on multi-PTZ camera with random distributed configuration
CN113256713B (en) * 2021-06-10 2021-10-15 浙江华睿科技股份有限公司 Pallet position identification method and device, electronic equipment and storage medium
CN113487683B (en) * 2021-07-15 2023-02-10 中国人民解放军火箭军工程大学 A Target Tracking System Based on Trinocular Vision
CN114384568A (en) * 2021-12-29 2022-04-22 达闼机器人有限公司 Position measuring method and device based on mobile camera, processing equipment and medium
CN115713565A (en) * 2022-12-16 2023-02-24 盐城睿算电子科技有限公司 Target positioning method for binocular servo camera

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6215519B1 (en) * 1998-03-04 2001-04-10 The Trustees Of Columbia University In The City Of New York Combined wide angle and narrow angle imaging system and method for surveillance and monitoring
US6724421B1 (en) * 1994-11-22 2004-04-20 Sensormatic Electronics Corporation Video surveillance system with pilot and slave cameras
CN101699862A (en) * 2009-11-16 2010-04-28 上海交通大学 High-resolution region-of-interest image acquisition method of PTZ camera
CN101794448A (en) * 2010-04-07 2010-08-04 上海交通大学 Full automatic calibration method of master-slave camera chain
CN101969548A (en) * 2010-10-15 2011-02-09 中国人民解放军国防科学技术大学 Active video acquiring method and device based on binocular camera shooting
CN102006461A (en) * 2010-11-18 2011-04-06 无锡中星微电子有限公司 Joint tracking detection system for cameras
CN102148965A (en) * 2011-05-09 2011-08-10 上海芯启电子科技有限公司 Multi-target tracking close-up shooting video surveillance system

Also Published As

Publication number Publication date
CN103024350A (en) 2013-04-03

Similar Documents

Publication Publication Date Title
CN103024350B (en) Master-slave tracking method of a binocular PTZ vision system and system applying the method
Mitrokhin et al. EV-IMO: Motion segmentation dataset and learning pipeline for event cameras
WO2019214568A1 (en) Depth-based light field splicing method
CN109272530B (en) Target tracking method and device for space-based monitoring scene
CN110458897B (en) Multi-camera automatic calibration method and system and monitoring method and system
US7583815B2 (en) Wide-area site-based video surveillance system
CN101883261B (en) Method and system for abnormal target detection and relay tracking under large-range monitoring scene
US9898829B2 (en) Monitoring apparatus and system using 3D information of images and monitoring method using the same
CN101616310B (en) Target image stabilizing method of binocular vision system with variable visual angle and resolution ratio
CN103473542B (en) Multi-clue fused target tracking method
US10078899B2 (en) Camera system and image registration method thereof
CN102592146B (en) Face detection and camera tripod control method applied to video monitoring
CN102156863A (en) Cross-camera tracking method for multiple moving targets
CN107038714B (en) Multi-type visual sensing cooperative target tracking method
CN107341815B (en) Vigorous motion detection method based on multi-eye stereo vision scene stream
CN104331901A (en) TLD-based multi-view target tracking device and method
CN108156419A (en) More focal length lens linkage imaging camera machine system based on multiple features combining and Camshift algorithms
CN116194951A (en) Method and apparatus for stereoscopic based 3D object detection and segmentation
CN111783675A (en) Intelligent city video self-adaptive HDR control method based on vehicle semantic perception
CN111696044A (en) Large-scene dynamic visual observation method and device
Callemein et al. Anyone here? Smart embedded low-resolution omnidirectional video sensor to measure room occupancy
Zhao et al. Distributed NeRF Learning for Collaborative Multi-Robot Perception
CN111291609A (en) Method for detecting dynamic background target of airport enclosure inspection robot
CN115965745A (en) Three-level video perception-based dynamic three-dimensional reconstruction system and method for unmanned aerial vehicle
Yue et al. An intelligent identification and acquisition system for UAVs based on edge computing using in the transmission line inspection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20200113

Address after: 100080, 1, Zhongguancun Avenue, Beijing, Haidian District, 3, 318

Patentee after: BEIJING HORIZON ROBOTICS TECHNOLOGY RESEARCH AND DEVELOPMENT CO., LTD.

Address before: 100084 Haidian District 100084-82 mailbox Beijing

Patentee before: Tsinghua University