
CN101794448B - Full automatic calibration method of master-slave camera chain - Google Patents


Info

Publication number
CN101794448B
Authority
CN
China
Prior art keywords
image
sub
images
camera
width
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010101399769A
Other languages
Chinese (zh)
Other versions
CN101794448A (en)
Inventor
宋利
王嘉
徐奕
李铀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiao Tong University
Original Assignee
Shanghai Jiao Tong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiao Tong University filed Critical Shanghai Jiao Tong University
Priority to CN2010101399769A priority Critical patent/CN101794448B/en
Publication of CN101794448A publication Critical patent/CN101794448A/en
Application granted granted Critical
Publication of CN101794448B publication Critical patent/CN101794448B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)

Abstract

A fully automatic calibration method for a master-slave camera system in the field of image processing technology. The present invention stitches the images acquired automatically while the dynamic (PTZ) camera rotates into a mosaic image, then automatically extracts feature points from the mosaic and matches them against the master-camera image, yielding a calibration between static-camera pixel coordinates and dynamic-camera control parameters and thereby calibrating the master-slave camera system fully automatically. The mosaic-stitched image represents the dynamic camera's field of view, and SURF feature points are used to estimate the relationship between the two different imaging planes. Once the dynamic camera's motion trajectory has been specified in advance, the master-slave camera system can be calibrated automatically while maintaining good calibration accuracy (an error rate of 3%-5%), saving labor and time and reducing the complexity of the calibration process.

Figure 201010139976


Description

Fully automatic calibration method for a master-slave camera system

Technical Field

The present invention relates to a method in the field of image processing technology, specifically a fully automatic calibration method for a master-slave camera system.

Background Art

At present, public places are widely equipped with video surveillance systems, with large numbers of cameras installed to cover large areas. As surveillance systems develop, demand for high-resolution surveillance images keeps growing. The master-slave camera surveillance system is one solution to this problem. In such a system, one (or more) fixed camera acts as a leader, directing one (or more) dynamic (PTZ) cameras to focus on targets of interest. In this way the system obtains both a wide-area overview image and a high-resolution image of the target. Calibrating the master-slave camera system is what makes this possible: with a calibrated master-slave surveillance system, when the master camera captures a target of interest, the dynamic camera can automatically aim at that target and acquire a detailed image.

A search of the literature on the prior art turned up the following related documents:

1. In "A master-slave system to acquire biometric imagery of humans at distance", published by X. Zhou et al. at The First ACM International Workshop on Video Surveillance, sample points are selected from the master-camera image, the dynamic camera is moved manually so that its image center corresponds to each selected sample point, and the sample-point coordinates are recorded together with the control parameters. The dynamic-camera rotation parameters for other points of the static-camera image can then be obtained by interpolating the recorded parameters. However, this technique requires manual adjustment of the dynamic-camera parameters, making it time-consuming and impractical for large-scale application.

2. In "Acquiring Multi-Scale Images by Pan-Tilt-Zoom Control and Automatic Multi-Camera Calibration", published by A. W. Senior et al. in 2005 at The Seventh IEEE Workshops on Application of Computer Vision, the relationship between the master camera and the dynamic camera is estimated by computing the homography matrix between the two cameras' images. This technique also requires manual adjustment of the dynamic-camera parameters and is likewise time-consuming and impractical for large-scale application.

Summary of the Invention

To address the above shortcomings of the prior art, the present invention proposes a fully automatic calibration method for a master-slave camera system. The images acquired automatically while the dynamic camera rotates are stitched into a mosaic image; feature points are then automatically extracted from the mosaic and matched against the master-camera image, yielding a calibration between static-camera pixel coordinates and dynamic-camera position parameters and thereby calibrating the master-slave camera system fully automatically.

The present invention is realized through the following technical solution, which comprises the following steps:

Step 1: A motion trajectory is set for the dynamic camera in advance. The dynamic camera rotates automatically along this trajectory, sampling one sub-image every interval t. Each sub-image is labeled in order of acquisition; the rotation parameters of the dynamic camera when each sub-image is captured — the pan (horizontal rotation) angle and the tilt (vertical rotation) angle — are recorded; and each sub-image is stored together with its label and the corresponding rotation parameters and position information.

Step 2: After the dynamic camera finishes rotating, the obtained N sub-images are grouped and ordered by shooting position, giving K groups of sub-images. Feature points are extracted from and matched between two adjacent sub-images in each group, giving matched feature-point pairs and, from them, the transformation matrix M between two adjacent sub-images of the same group.

The grouping places sub-images whose dynamic-camera pan angles differ by less than a threshold into the same group of sub-images, thereby obtaining K groups, and arranges the sub-images within each group by the tilt angle of the corresponding dynamic camera.
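The grouping and ordering can be sketched as follows; the record layout (label, pan, tilt) and the 2-degree threshold are illustrative assumptions, not values from the patent.

```python
def group_sub_images(records, pan_threshold=2.0):
    """Group sub-image records by pan angle, then sort each group by tilt.

    `records` is a list of (label, pan_deg, tilt_deg) tuples; a new group
    starts whenever the pan angle jumps past `pan_threshold`.
    """
    ordered = sorted(records, key=lambda r: (r[1], r[2]))
    groups = []
    for rec in ordered:
        # Same group while the pan difference stays below the threshold.
        if groups and abs(rec[1] - groups[-1][0][1]) < pan_threshold:
            groups[-1].append(rec)
        else:
            groups.append([rec])
    return groups
```

With the "之"-shaped (zigzag) trajectory of the embodiment, each group corresponds to one pan stop, and sorting by tilt inside each group recovers one vertical strip of the mosaic.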

The feature-point extraction and matching are performed with the SURF method: a fast-Hessian detector extracts the feature points of two adjacent sub-images in each group, and the SURF feature vector of each feature point is then computed.

The transformation matrix M between two adjacent sub-images of the same group is obtained with the RANSAC method, specifically:

$$
\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix}
=
\begin{bmatrix} m_0 & m_1 & m_2 \\ m_3 & m_4 & m_5 \\ m_6 & m_7 & m_8 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
\quad \text{or} \quad u' = M u,
$$

where (x′, y′, z′) and (x, y, z) are the coordinates of the same feature point in the coordinate systems of the two adjacent sub-images; each sub-image coordinate system has its origin at the top-left corner of the image, with the x axis positive horizontally to the right and the y axis positive vertically downward.
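A minimal sketch of estimating M from matched point pairs with RANSAC, under the homography model above: candidate matrices are fit to random 4-point samples by the direct linear transform, and the candidate with the most inliers is kept. The iteration count and pixel tolerance are illustrative assumptions; a production system would typically call a library routine such as OpenCV's `findHomography` instead.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct Linear Transform: fit u' = M u from >= 4 point pairs."""
    A = []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, xp * x, xp * y, xp])
        A.append([0, 0, 0, -x, -y, -1, yp * x, yp * y, yp])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    M = Vt[-1].reshape(3, 3)          # null vector of A, up to scale
    return M / M[2, 2]

def ransac_homography(src, dst, iters=200, tol=2.0, seed=0):
    """Keep the 4-point model that explains the most correspondences."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    best_M, best_inliers = None, 0
    ones = np.ones((len(src), 1))
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        M = dlt_homography(src[idx], dst[idx])
        proj = (M @ np.hstack([src, ones]).T).T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = int(np.sum(np.linalg.norm(proj - dst, axis=1) < tol))
        if inliers > best_inliers:
            best_M, best_inliers = M, inliers
    return best_M
```

RANSAC is what makes the estimate robust here: a handful of wrong SURF matches would otherwise corrupt a least-squares fit of M.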

Step 3: Based on the obtained transformation matrices, image stitching is applied to each group of sub-images, so that each group is stitched into one column-stitched image, giving K column-stitched images in total.

The image stitching is specifically: the middle sub-image of each group serves as the group's reference image; the other sub-images of the group are warped into the reference image's plane coordinate system with the obtained transformation matrices; pixels in the overlapping regions are blended with weighted grayscale stitching, while pixels in non-overlapping regions keep their gray values, so that each group of sub-images is stitched into one column-stitched image.

The weighted grayscale stitching is specifically:

$$
f_{\mathrm{res}}(P) = \frac{\sum_{i=1}^{W} f_i(P)\, d_i^{\,n}}{\sum_{i=1}^{W} d_i^{\,n}},
$$

where f_res(P) is the pixel value at point P after stitching, f_i(P) is the pixel value at P in the i-th image, W is the number of images taking part in the stitching, n is a constant, and d_i is the shortest distance from P to the boundary of the i-th participating image.
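The weighted blend at a single overlap pixel follows directly from the formula; a sketch, assuming the per-image boundary distances d_i have already been computed (over a whole image they would come from a distance transform, which this sketch omits):

```python
import numpy as np

def blend_pixel(values, dists, n=3):
    """Weighted grayscale blend at one overlap pixel P.

    `values[i]` is the gray value of P in warped image i and `dists[i]`
    its shortest distance to that image's boundary; weights d_i**n favour
    images in which P lies far from the seam (the embodiment uses n = 3).
    """
    w = np.asarray(dists, float) ** n
    return float(np.dot(values, w) / w.sum())
```

Because the weight of an image decays to zero as P approaches its boundary, the blend transitions smoothly across seams instead of producing visible edges.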

Step 4: The K column-stitched images are taken as one image set, and feature-point extraction, matching, and image stitching are applied to them in turn, giving one mosaic-stitched image.

Step 5: A static image is captured by the static camera; feature points are extracted from and matched between the static image and the mosaic-stitched image, and additional feature points are found using the epipolar-geometry principle, giving matched feature-point pairs between the static image and the mosaic-stitched image, with the matched feature points distributed as evenly as possible over both images.

The epipolar-geometry principle is: the point in one image corresponding to a point of another image lies on the corresponding epipolar line.

The further search for feature points requires that the point on the mosaic-stitched image corresponding to a feature point of the static image lie on the corresponding epipolar line, thereby increasing the number of feature points.
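The epipolar test used to accept extra correspondences can be sketched as follows, assuming a fundamental matrix F has already been estimated from the initial SURF matches; the 1.5-pixel tolerance is an illustrative assumption.

```python
import numpy as np

def epipolar_line(F, p):
    """Epipolar line l' = F p in the second image for point p = (x, y)."""
    return F @ np.array([p[0], p[1], 1.0])

def on_epipolar_line(F, p, q, tol=1.5):
    """True if candidate q in image 2 lies within `tol` pixels of p's epipolar line."""
    a, b, c = epipolar_line(F, p)
    dist = abs(a * q[0] + b * q[1] + c) / np.hypot(a, b)
    return dist < tol
```

Restricting candidate matches to a narrow band around the epipolar line prunes the search to one dimension, which is what allows extra, weaker matches to be accepted without flooding the result with false pairs.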

Step 6: Global calibration is applied to the static image and the mosaic-stitched image, giving the mapping between them, i.e. the calibration relation between the master and slave cameras.

The global calibration comprises the following steps:

1) In the neighborhood of any point P_s(x_s, y_s) of the static image, search for static-image feature points; N_R(P_s) is the set of all static-image feature points at distance less than or equal to R from P_s, that is:

$$N_R(P_s) = \{(P_s^1, r_1),\ (P_s^2, r_2),\ \ldots\},$$ where P_s^i is the feature point at distance r_i from P_s;

2) For each static-image feature point P_s^i in N_R(P_s), find the corresponding mosaic-image feature point P_d^i and the sub-images that contain P_d^i, then select from them the sub-image I_r^i in which P_d^i lies closest to the sub-image center;

3) Look up the label of sub-image I_r^i among all sub-image groups to obtain the position parameter S_i of the dynamic camera when sub-image I_r^i was captured;

4) Interpolate the position parameters S_i of the dynamic camera to obtain the correspondence between any point P_s(x_s, y_s) of the static image and the dynamic camera's position parameters, specifically:

$$S = S_1 f_1(r_1, r_2, \ldots, r_n) + S_2 f_2(r_1, r_2, \ldots, r_n) + \cdots + S_n f_n(r_1, r_2, \ldots, r_n),$$

where f_i is an interpolation function.
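Step 4 above can be sketched as follows, interpolating the recorded PTZ settings S_i by the distances r_i. Normalized inverse-distance weights are an illustrative choice of the functions f_i; the description leaves them unspecified, and the embodiment later gives its own form.

```python
import numpy as np

def interpolate_ptz(params, dists, eps=1e-9):
    """Interpolate a PTZ setting S for a query pixel.

    `params` is an (n, 2) array of (pan, tilt) settings S_i recorded for
    the n nearby feature points, and `dists[i]` is the distance r_i from
    the query point to feature point i. Closer points get larger weight;
    `eps` guards against division by zero when r_i == 0.
    """
    r = np.asarray(dists, float) + eps
    w = (1.0 / r) / np.sum(1.0 / r)          # normalized inverse-distance weights
    return np.asarray(params, float).T @ w   # weighted sum of the S_i
```

The result is the weighted combination S = Σ S_i f_i(r_1, ..., r_n) of the formula above, with Σ f_i = 1 so that S stays inside the range of the recorded settings.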

Compared with the prior art, the beneficial effects of the present invention are: the mosaic-stitched image represents the dynamic camera's field of view, and SURF feature points are used to estimate the relationship between two different imaging planes; once the dynamic camera's motion trajectory has been specified in advance, the master-slave camera system can be calibrated automatically. The method achieves automatic calibration of the master-slave camera system while maintaining good calibration accuracy (an error rate of 3%-5%), saving labor and time and reducing the complexity of the calibration process.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the dynamic camera's rotation trajectory in the embodiment;

wherein: (a) shows the positions of the dynamic camera when capturing the 64 sub-images; (b) shows several sub-images captured by the dynamic camera.

Fig. 2 is a schematic diagram of the weighted grayscale stitching.

Fig. 3 is the mosaic-stitched image obtained in the embodiment.

Fig. 4 shows the matched feature points obtained in the embodiment;

wherein: (a) shows the matched feature points obtained with the SURF method alone; (b) shows the matched feature points obtained by additionally applying the epipolar-geometry principle.

Fig. 5 is a schematic diagram of the global calibration.

Fig. 6 shows the calibration results of the embodiment;

wherein: (a) is an image captured by the static camera; (b) is the dynamic-camera image corresponding to (a); (c) is another image captured by the static camera; (d) is the dynamic-camera image corresponding to (c).

Fig. 7 is a schematic diagram of the calibration error rates of the embodiment;

wherein: (a) shows the error rate of the embodiment in the X direction; (b) shows the error rate of the embodiment in the Y direction.

Detailed Description of the Embodiments

An embodiment of the present invention is described in detail below with reference to the drawings. The embodiment is implemented on the premise of the technical solution of the present invention, and detailed implementation modes and procedures are given, but the scope of protection of the present invention is not limited to the following embodiment.

Embodiment

In this embodiment, a static camera and a dynamic camera installed at different indoor locations are calibrated. Both the static camera and the dynamic camera have a resolution of 320*240.

This embodiment comprises the following steps:

Step 1: The motion trajectory of the dynamic camera is preset to a zigzag pattern. The dynamic camera rotates automatically along the given trajectory, sampling one sub-image every interval t. Each sub-image is labeled in order of acquisition; the rotation parameters of the dynamic camera at each capture — the pan (horizontal rotation) angle and the tilt (vertical rotation) angle — are recorded; and each sub-image is stored together with its label and the corresponding rotation parameters and position information.

The motion trajectory of the dynamic camera in this embodiment follows the arrows in Fig. 1(a); one sub-image is sampled at each dot, giving 64 sub-images in total, several of which are shown in Fig. 1(b).

Step 2: After the dynamic camera finishes rotating, the obtained N sub-images are grouped and ordered by shooting position, giving K groups of sub-images. Feature points are extracted from and matched between two adjacent sub-images in each group, giving matched feature-point pairs and, from them, the transformation matrix M between two adjacent sub-images of the same group.

The grouping divides the 64 sub-images into 8 groups of 8 sub-images each, by camera pan angle from small to large, and sorts the sub-images within each group by the corresponding dynamic-camera tilt angle, from small to large.

The feature-point extraction and matching are performed with the SURF method: a fast-Hessian detector extracts the feature points of two adjacent sub-images in each group, and the SURF feature vector of each feature point is then computed.

The transformation matrix M between two adjacent sub-images of the same group is obtained with the RANSAC method, specifically:

$$
\begin{bmatrix} x' \\ y' \\ z' \end{bmatrix}
=
\begin{bmatrix} m_0 & m_1 & m_2 \\ m_3 & m_4 & m_5 \\ m_6 & m_7 & m_8 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix}
\quad \text{or} \quad u' = M u,
$$

where (x′, y′, z′) and (x, y, z) are the coordinates of the same feature point in the coordinate systems of the two adjacent sub-images; each sub-image coordinate system has its origin at the top-left corner of the image, with the x axis positive horizontally to the right and the y axis positive vertically downward.

Step 3: Based on the obtained transformation matrices, image stitching is applied to each group of sub-images, so that each group is stitched into one column-stitched image, giving 8 column-stitched images in total.

The image stitching is specifically: the middle sub-image of each group (the fifth sub-image in this embodiment) serves as the group's reference image; the other sub-images of the group are warped into the reference image's plane coordinate system with the obtained transformation matrices; pixels in the overlapping regions are blended with weighted grayscale stitching, while pixels in non-overlapping regions keep their gray values, so that each group of sub-images is stitched into one column-stitched image.

As shown in Fig. 2, the weighted grayscale stitching is specifically:

$$
f_{\mathrm{res}}(P) = \frac{\sum_{i=1}^{8} f_i(P)\, d_i^{\,n}}{\sum_{i=1}^{8} d_i^{\,n}},
$$

where f_res(P) is the pixel value at point P after stitching, f_i(P) is the pixel value at P in the i-th image, n is a constant, and d_i is the shortest distance from P to the boundary of the i-th participating image.

In this embodiment, n is 3.

Step 4: The 8 column-stitched images are taken as one image set, and feature-point extraction, matching, and image stitching are applied to them in turn, following the methods of step 2 and step 3, giving one mosaic-stitched image.

The mosaic-stitched image obtained in this embodiment is shown in Fig. 3; the white lines are the boundaries of the individual sub-images.

Step 5: A static image is captured by the static camera; feature points are extracted from and matched between the static image and the mosaic-stitched image, and additional feature points are found using the epipolar-geometry principle, giving matched feature-point pairs between the static image and the mosaic-stitched image, with the matched feature points distributed as evenly as possible over the two images.

The epipolar-geometry principle is: the point in one image corresponding to a point of another image lies on the corresponding epipolar line.

The matched feature points obtained in this embodiment with the SURF method alone are shown in Fig. 4(a); the matched feature points obtained by additionally applying the epipolar-geometry principle are shown in Fig. 4(b).

Step 6: Global calibration is applied to the static image and the mosaic-stitched image, giving the mapping between them, i.e. the calibration relation between the master and slave cameras.

As shown in Fig. 5, the global calibration comprises the following steps:

1) In the neighborhood of any point P_s(x_s, y_s) of the static image, search for static-image SURF feature points; N_R(P_s) is the set of all static-image SURF feature points at distance less than or equal to R from P_s:

$$N_R(P_s) = \{(P_s^1, r_1),\ (P_s^2, r_2),\ \ldots\},$$ where P_s^i is the feature point at distance r_i from P_s;

2) For each static-image SURF feature point P_s^i in N_R(P_s), find the corresponding mosaic-image feature point P_d^i and the sub-images that contain P_d^i, then select from them the sub-image I_r^i in which P_d^i lies closest to the sub-image center;

3) Look up the label of sub-image I_r^i among all sub-image groups to obtain the position S_i(α_i, β_i, Z_i), i = 1, 2, 3, ..., of the dynamic camera when sub-image I_r^i was captured;

4) Interpolate S_i(α_i, β_i, Z_i) to obtain the correspondence between any point P_s(x_s, y_s) of the static image and the dynamic camera's position parameters, specifically:

$$S = S_1 f_1(r_1, r_2, \ldots, r_n) + S_2 f_2(r_1, r_2, \ldots, r_n) + \cdots + S_n f_n(r_1, r_2, \ldots, r_n),$$

where f_i is an interpolation function.

In this embodiment the interpolation function is specifically $f_i(r_1, r_2, \ldots, r_n) = r_i \big/ \sum_{j=1}^{n} r_j^2$.

Fig. 6(a) shows an image captured by the static camera; using the method of this embodiment, the dynamic camera is calibrated to the target of interest (the person) in Fig. 6(a), and the image obtained after rotating the dynamic camera is shown in Fig. 6(b). Similarly, the static camera captures another image, Fig. 6(c), and the corresponding dynamic-camera image is shown in Fig. 6(d).

In this embodiment, 8 arbitrary points are selected in the static image, and the corresponding rotation angles of the dynamic camera are computed from the obtained master-slave calibration relation. After rotating the dynamic camera, the distances in the X and Y directions between the dynamic-camera image center and the point corresponding to each selected point are measured; the ratio of these distances to the dynamic-camera resolution gives the error rate of the master-slave calibration. The X-direction error rates obtained in this embodiment are shown in Fig. 7(a) and the Y-direction error rates in Fig. 7(b); Fig. 7 shows that the method of this embodiment is simple and the calibration accuracy is high.
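The evaluation described above — the offset of the slewed target from the dynamic-camera image center, divided by the image resolution — can be sketched as:

```python
def error_rate(center, target, resolution=(320, 240)):
    """Per-axis calibration error rate.

    `center` is the dynamic-camera image center, `target` the pixel where
    the selected point actually landed after slewing; each offset is
    normalized by the corresponding image dimension (320*240 in the
    embodiment).
    """
    ex = abs(target[0] - center[0]) / resolution[0]
    ey = abs(target[1] - center[1]) / resolution[1]
    return ex, ey
```

An offset of 16 pixels horizontally and 12 pixels vertically, for example, corresponds to the 5% error-rate level, consistent with the 3%-5% range reported for the method.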

Claims (7)

1. the full automatic calibration method of a master-slave camera chain is characterized in that, may further comprise the steps:
Step 1; Preestablish the movement locus of dynamic camera; Dynamic camera rotates along this movement locus automatically; And per interval t width of cloth subimage of sampling; Simultaneously by the time sequencing that collects; To each width of cloth subimage label, the parameter that dynamic camera rotates during the every width of cloth subimage of record acquisition, and every width of cloth subimage and corresponding label thereof stored with parameter and positional information that corresponding dynamic camera rotates;
Step 2; After the dynamic camera rotation finishes; The N width of cloth subimage that obtains is carried out arranged in groups by the camera site to be handled; Obtain K group set of sub-images; Two width of cloth subimages to adjacent in every group of set of sub-images carry out feature point extraction and matching treatment; It is right to obtain matched feature points, and then obtains the transform matrix M between two adjacent in same group of set of sub-images width of cloth subimages;
Step 3 based on the transformation matrix that obtains, is carried out image mosaic to every group of set of sub-images and is handled, thereby every group of set of sub-images is spliced into a width of cloth row stitching image, obtains K width of cloth row stitching image altogether;
Step 4 as an image collection, is carried out K width of cloth row stitching image successively feature point extraction, matching treatment and image mosaic to it and is handled, thereby obtain a width of cloth mosaic stitching image;
Step 5; Gather a width of cloth still image by still camera; This still image and mosaic stitching image are carried out feature point extraction and matching treatment; And utilize the further search characteristics point of polar curve geometrical principle; The matched feature points that obtains between still image and mosaic stitching image is right, and the characteristic point of coupling is evenly distributed on still image and the mosaic stitching image as far as possible;
The described further search characteristics point of polar curve geometrical principle that utilizes is that the characteristic point of still image is positioned on the corresponding polar curve in the corresponding points on the mosaic stitching image, thereby increases the quantity of characteristic point.
Step 6 is carried out the global calibration processing to still image and mosaic stitching image, thereby is obtained the mapping relations between still image and mosaic stitching image, be i.e. demarcation between principal and subordinate's video camera relation;
Described global calibration is handled, and comprises that step is:
1) For any point P_s(x_s, y_s) in the still image, search for still-image SURF feature points in its neighborhood. N_R(P_s) is the set of all still-image SURF feature points whose distance from P_s is at most R, that is:

N_R(P_s) = { P_i^s | ‖P_i^s − P_s‖ = r_i ≤ R, i = 1, …, n },

where P_i^s is the feature point at distance r_i from P_s.
2) For each still-image feature point P_i^s in the set N_R(P_s), find its corresponding mosaic-image feature point P_i^m, determine the sub-images that contain P_i^m, and from among them select the sub-image I_i whose center is nearest to P_i^m.
3) Look up the label of sub-image I_i among all the sub-image groups to obtain the position parameter S_i at which the dynamic camera was located when it captured sub-image I_i.
4) Interpolate over the position parameters S_i of the dynamic camera; the correspondence between any point P_s(x_s, y_s) in the still image and the position parameter of the dynamic camera is:

S = S_1·f_1(r_1, r_2, …, r_n) + S_2·f_2(r_1, r_2, …, r_n) + … + S_n·f_n(r_1, r_2, …, r_n),

where the f_i are interpolating functions.
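The patent leaves the interpolating functions f_i unspecified; inverse-distance weighting is one common choice for combining position parameters by the distances r_i. A minimal sketch under that assumption, with all names hypothetical:

```python
import numpy as np

def interpolate_position(S, r, power=2, eps=1e-9):
    """Combine the n recorded pan/tilt parameters S_i using weights f_i that
    decay with the distance r_i of each feature point from P_s (IDW scheme)."""
    S = np.asarray(S, dtype=float)   # shape (n, 2): pan/tilt per sub-image
    r = np.asarray(r, dtype=float)   # shape (n,): distances r_1 .. r_n
    w = 1.0 / (r ** power + eps)     # closer feature points weigh more
    w = w / w.sum()                  # weights f_i sum to 1
    return (w[:, None] * S).sum(axis=0)
```

With equal distances the result is the plain average of the S_i; when one feature point lies almost exactly at P_s, its position parameter dominates, which is the behavior one wants near a known calibration sample.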
2. The full automatic calibration method of a master-slave camera chain according to claim 1, characterized in that the rotation parameters of the dynamic camera described in step 1 are the horizontal rotation (pan) angle and the vertical rotation (tilt) angle.
3. The full automatic calibration method of a master-slave camera chain according to claim 1, characterized in that the grouping described in step 2 places sub-images whose corresponding dynamic-camera pan angles differ by less than a threshold into the same group, thereby obtaining K sub-image groups, and the sub-images within each group are ordered by the tilt angle of the corresponding dynamic camera position.
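A sketch of the grouping rule of claim 3 (pure Python; the tuple layout, function name, and angle units are illustrative assumptions): sub-images whose pan angles differ by less than a threshold fall into one group, and each group is then ordered by tilt.

```python
def group_by_pan(subimages, threshold):
    """subimages: list of (image_id, pan_deg, tilt_deg) tuples.
    Returns K groups; within each group, items are sorted by tilt angle."""
    groups = []
    for item in sorted(subimages, key=lambda s: s[1]):    # sweep by pan angle
        if groups and abs(item[1] - groups[-1][0][1]) < threshold:
            groups[-1].append(item)                       # same pan cluster
        else:
            groups.append([item])                         # start a new group
    for g in groups:
        g.sort(key=lambda s: s[2])                        # order by tilt
    return groups
```

For example, three shots at pan angles 0.0, 0.5, and 30.0 degrees with a 5-degree threshold form two groups, the first of which is reordered by tilt.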
4. The full automatic calibration method of a master-slave camera chain according to claim 1, characterized in that the feature point extraction and matching described in step 2 extracts the feature points of two adjacent sub-images in each group with a fast-Hessian detector and then computes the SURF feature vector of each feature point.
5. The full automatic calibration method of a master-slave camera chain according to claim 1, characterized in that the transformation matrix M between two adjacent sub-images of the same group described in step 2 satisfies:

(x', y', z')^T = M (x, y, z)^T,

where (x', y', z')^T and (x, y, z)^T are the homogeneous coordinates of the same feature point in the two adjacent sub-image coordinate systems; each sub-image coordinate system has its origin at the top-left corner of the image, with the x axis positive horizontally to the right and the y axis positive vertically downward.
6. The full automatic calibration method of a master-slave camera chain according to claim 1, characterized in that the image stitching described in step 3 takes the middle sub-image of each group as the reference image of that group, transforms the other sub-images of the group into the coordinate system of the reference image plane according to the transformation matrices obtained, applies weighted gray-level blending to the pixels in the overlapping regions, and leaves the gray levels of pixels in the non-overlapping regions unchanged, thereby stitching each group into one strip stitched image.
7. The full automatic calibration method of a master-slave camera chain according to claim 6, characterized in that the weighted gray-level blending is:

f_res(P) = Σ_{i=1}^{W} (d_i^n / Σ_{j=1}^{W} d_j^n) · f_i(P),

where f_res(P) is the pixel value at point P after stitching, f_i(P) is the pixel value at point P in the i-th image, W is the number of images participating in the stitching, n is a constant, and d_i is the shortest distance from point P to the boundary of the i-th participating image.
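The distance-weighted blend of claim 7 can be written directly per pixel. A sketch (numpy; the function name is an illustrative assumption):

```python
import numpy as np

def blend_pixel(values, dists, n=1):
    """f_res(P) = sum_i (d_i^n / sum_j d_j^n) * f_i(P), where
    values - pixel value of P in each of the W overlapping images,
    dists  - shortest distance from P to each image's boundary."""
    values = np.asarray(values, dtype=float)
    w = np.asarray(dists, dtype=float) ** n   # boundary-distance weights
    return float((w * values).sum() / w.sum())
```

Pixels that lie deep inside an image (large d_i) dominate the sum, so the contribution of each image fades to zero at its own boundary, which is what makes the seams between sub-images invisible.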
CN2010101399769A 2010-04-07 2010-04-07 Full automatic calibration method of master-slave camera chain Expired - Fee Related CN101794448B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101399769A CN101794448B (en) 2010-04-07 2010-04-07 Full automatic calibration method of master-slave camera chain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101399769A CN101794448B (en) 2010-04-07 2010-04-07 Full automatic calibration method of master-slave camera chain

Publications (2)

Publication Number Publication Date
CN101794448A CN101794448A (en) 2010-08-04
CN101794448B true CN101794448B (en) 2012-07-04

Family

ID=42587120

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101399769A Expired - Fee Related CN101794448B (en) 2010-04-07 2010-04-07 Full automatic calibration method of master-slave camera chain

Country Status (1)

Country Link
CN (1) CN101794448B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102842121A (en) * 2011-06-24 2012-12-26 鸿富锦精密工业(深圳)有限公司 Picture splicing system and picture splicing method
CN102693543B (en) * 2012-05-21 2014-08-20 南开大学 Method for automatically calibrating Pan-Tilt-Zoom in outdoor environments
CN103024350B (en) * 2012-11-13 2015-07-29 清华大学 A kind of principal and subordinate's tracking of binocular PTZ vision system and the system of application the method
CN103105858A (en) * 2012-12-29 2013-05-15 上海安维尔信息科技有限公司 Method capable of amplifying and tracking goal in master-slave mode between fixed camera and pan tilt zoom camera
US9400943B2 (en) * 2013-08-02 2016-07-26 Qualcomm Incorporated Identifying IoT devices/objects/people using out-of-band signaling/metadata in conjunction with optical images
CN103438798B (en) * 2013-08-27 2016-01-20 北京航空航天大学 Initiative binocular vision system overall calibration
CN103841333B (en) * 2014-03-27 2017-04-05 成都动力视讯科技股份有限公司 A kind of presetting bit method and control system
CN104301674A (en) * 2014-09-28 2015-01-21 北京正安融翰技术有限公司 Panoramic monitoring and PTZ camera linkage method based on video feature matching
CN104537659B (en) * 2014-12-23 2017-10-27 金鹏电子信息机器有限公司 The automatic calibration method and system of twin camera
CN104574425B (en) * 2015-02-03 2016-05-11 中国人民解放军国防科学技术大学 A kind of demarcation of the master-slave camera chain based on rotating model and interlock method
CN105096324B (en) * 2015-07-31 2017-11-28 深圳市大疆创新科技有限公司 A kind of camera device scaling method and camera device
CN105430333B (en) * 2015-11-18 2018-03-23 苏州科达科技股份有限公司 A kind of method and device for being back-calculated gunlock distortion factor in real time
CN105516661B (en) * 2015-12-10 2019-03-29 吴健辉 Principal and subordinate's target monitoring method that fisheye camera is combined with ptz camera
CN106652026A (en) * 2016-12-23 2017-05-10 安徽工程大学机电学院 Three-dimensional space automatic calibration method based on multi-sensor fusion
CN108469254A (en) * 2018-03-21 2018-08-31 南昌航空大学 A kind of more visual measuring system overall calibration methods of big visual field being suitable for looking up and overlooking pose
CN109613462A (en) * 2018-11-21 2019-04-12 河海大学 A Calibration Method for CT Imaging
CN112308924B (en) * 2019-07-29 2024-02-13 浙江宇视科技有限公司 Method, device, equipment and storage medium for calibrating camera in augmented reality
CN113393529B (en) * 2020-03-12 2024-05-10 浙江宇视科技有限公司 Method, device, equipment and medium for calibrating camera
CN113781548B (en) * 2020-06-10 2024-06-14 华为技术有限公司 Multi-equipment pose measurement method, electronic equipment and system
CN114115629B (en) 2020-08-26 2025-01-10 华为技术有限公司 Interface display method and device
CN111243035B (en) * 2020-04-29 2020-08-14 成都纵横自动化技术股份有限公司 Camera calibration method and device, electronic equipment and computer-readable storage medium
CN114764298B (en) 2020-07-29 2023-03-03 华为技术有限公司 Cross-device object dragging method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100788643B1 (en) * 2001-01-09 2007-12-26 삼성전자주식회사 Image retrieval method based on combination of color and texture
CN100583151C (en) * 2006-09-22 2010-01-20 东南大学 Double-camera calibrating method in three-dimensional scanning system

Also Published As

Publication number Publication date
CN101794448A (en) 2010-08-04

Similar Documents

Publication Publication Date Title
CN101794448B (en) Full automatic calibration method of master-slave camera chain
Aghaei et al. PV power plant inspection by image mosaicing techniques for IR real-time images
CN103517041B (en) Based on real time panoramic method for supervising and the device of polyphaser rotation sweep
CN105809640B (en) Low-light video image enhancement method based on multi-sensor fusion
US8848035B2 (en) Device for generating three dimensional surface models of moving objects
CN112348775B (en) Vehicle-mounted looking-around-based pavement pit detection system and method
CN114973028B (en) Aerial video image real-time change detection method and system
CN110246082B (en) A remote sensing panorama image stitching method
CN112470189B (en) Occlusion cancellation for light field systems
CN110599522A (en) Method for detecting and removing dynamic target in video sequence
CN109919007A (en) A method of generating infrared image markup information
CN113313659A (en) High-precision image splicing method under multi-machine cooperative constraint
CN110210292A (en) A kind of target identification method based on deep learning
CN110060304A (en) A kind of organism three-dimensional information acquisition method
CN111800576B (en) Method and device for rapidly positioning picture shot by pan-tilt camera
Wu et al. MM-Gaussian: 3D Gaussian-based multi-modal fusion for localization and reconstruction in unbounded scenes
Huang et al. Image registration among UAV image sequence and Google satellite image under quality mismatch
CN105184736B (en) A kind of method of the image registration of narrow overlapping double-view field hyperspectral imager
CN110853145A (en) High spatial resolution portable anti-shake hyperspectral imaging method and device
CN110969135A (en) Vehicle logo recognition method in natural scene
CN104809688B (en) Sheep body body measurement method and system based on affine transformation registration Algorithm
CN103870847A (en) Detecting method for moving object of over-the-ground monitoring under low-luminance environment
CN103996187B (en) To-ground moving target photoelectric detection system, and data processing method and image processing method thereof
CN113989428B (en) A global three-dimensional reconstruction method and device for metallurgical storage area based on depth vision
CN116579921A (en) Infrared thermal image scanning on-line heat loss measurement technology based on image stitching

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20120704

Termination date: 20150407

EXPY Termination of patent right or utility model