
CN102682445B - Coordinate extracting algorithm of lacertilian-imitating suborder chamaeleonidae biological vision - Google Patents


Info

Publication number
CN102682445B
CN102682445B (application CN201110460701.XA)
Authority
CN
China
Prior art keywords
camera
angle
target
algorithm
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110460701.XA
Other languages
Chinese (zh)
Other versions
CN102682445A (en)
Inventor
于乃功
许锋
阮晓钢
李均
王彬
Current Assignee
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201110460701.XA
Publication of CN102682445A
Application granted
Publication of CN102682445B
Status: Expired - Fee Related
Anticipated expiration


Landscapes

  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of computer vision, and in particular provides a computer vision algorithm that performs target tracking, positioning, and camera calibration simultaneously. The invention proposes a camera calibration method and a target tracking and positioning method for an active stereo vision system modeled on the visual system of chameleons (family Chamaeleonidae, suborder Lacertilia). Its main feature is that camera calibration and the tracking and positioning of the target of interest proceed concurrently: the calibration process neither requires the target of interest to remain continuously still nor needs to be completed before tracking and positioning begin. The calibration method converts the conventional continuous calibration procedure into a discrete one, using the target's intermittent stationary states during operation of the vision algorithm to calibrate the cameras. This lowers the calibration requirements and lets the vision algorithm run in step with calibration. The self-calibrating design eliminates manual calibration work and greatly simplifies preparation for running the algorithm.

Description

A Coordinate Extraction Algorithm Modeled on Chameleon (Suborder Lacertilia, Family Chamaeleonidae) Biological Vision

Technical Field

The invention belongs to the field of computer vision, and in particular concerns a computer vision algorithm capable of performing target tracking, positioning, and camera calibration simultaneously. The biological system it imitates is the visual system of chameleons (family Chamaeleonidae, suborder Lacertilia).

Background

Computer vision is an important research direction in artificial intelligence. Compared with other information-sensing methods, detection devices built on computer vision offer a large amount of information, low cost, no interference with the environment, and easy control. Vision systems can be structured in many ways. By number of cameras they divide into monocular, binocular, and multi-camera vision; by mechanical constraints, into active and passive vision; and by the degree of correlation between visual signals, into ordinary vision, stereo vision, and so on.

Monocular vision, the simplest case, has the advantages of simple structure, simple vision and control algorithms, and low cost. However, because ordinary monocular vision maps the high-dimensional real world into a low-dimensional image space, much information is inevitably lost in the process; most importantly, conventional monocular vision cannot recover the depth of a target.

Stereo vision based on binocular cameras compensates for this inability of monocular vision to obtain depth. By ensuring that the two fixed cameras in the vision system share a suitable common field of view, and combining precise camera calibration with a stereo matching algorithm, a binocular stereo vision system can recover the target's depth. Such an architecture, however, is still essentially an open-loop system: while running its vision algorithms it cannot change its own parameters to adapt to changes in the environment.

The way to remove this limitation and turn the vision system into a stable closed-loop system is to build it as an active vision system: give the cameras a certain capability of motion, so that the system uses the image information it acquires as feedback to drive camera rotation, placing the entire vision system under closed-loop control.

Because an active vision system differs mechanically from a traditional passive one, the applicable methods for depth recovery, target-of-interest localization, and camera calibration also differ from those used in passive vision. For a vision system built on the active vision concept, any subsequent visual processing first requires the cameras to be calibrated. Conventional calibration, however, must be carried out before the vision algorithm runs, and requires the target of interest to remain stationary throughout. These two requirements make active vision systems cumbersome to use, especially when the system's cameras are replaced frequently or when zoom cameras are used.

Summary of the Invention

The invention proposes a camera calibration method together with a target tracking and positioning method for an active stereo vision system modeled on the chameleon visual system. Its main feature is that camera calibration and the tracking and positioning of the target of interest proceed simultaneously: calibration neither requires the target to remain continuously still nor needs to run before tracking and positioning. The vision system uses two wide-angle fixed-focus cameras that can move independently of each other.

The overall structure of the chameleon-inspired biological vision system used by the invention can be summarized as follows:

The entire vision system is modeled on the visual system of chameleons: its main components are two wide-angle fixed-focus cameras, each equipped with two stepper motors that give it independent two-degree-of-freedom motion in the horizontal and vertical directions. Unlike conventional vision systems, the horizontal motions of the two cameras are mutually independent, as are their pitch motions. The imitated biological system is the visual system of chameleons (family Chamaeleonidae, suborder Lacertilia), whose two eyes can move independently of each other rather than in the coupled fashion of primates. Note in particular that the geometric centers of each wide-angle fixed-focus camera and its two stepper motors lie on a single line perpendicular to the horizontal plane; this property is what allows the depth-information and world-coordinate extraction algorithm described below to work.

The main content of the invention is to control the two wide-angle fixed-focus cameras cooperatively, invoking the angle-information calibration learning algorithm and the depth-information and world-coordinate extraction algorithm detailed below, with the CamShift tracking algorithm as an auxiliary, to perform motion control and parameter computation for the chameleon-inspired vision system. The system ultimately obtains, in real time, the world coordinates of the tracked target relative to the midpoint of the line connecting the geometric centers of the two wide-angle fixed-focus cameras.

The technical scheme of the invention is as follows:

1. Let the two wide-angle fixed-focus cameras search for the target independently. When camera A finds the target, start a monocular tracking algorithm based on CamShift to keep tracking it, and return in real time the horizontal and vertical angles between camera A's current image-plane normal vector and its image-plane normal vector at the initial position. Camera A denotes whichever camera finds the target first; camera B is the other camera. The initial position is the pose in which each wide-angle fixed-focus camera's image-plane normal vector is parallel to the horizontal plane and perpendicular to the line segment connecting the geometric centers of the two cameras.

2. Camera B follows camera A in searching for the target. Once camera B finds the target, it likewise starts a CamShift-based monocular tracking algorithm to keep tracking it.

3. Once both cameras A and B are tracking the target, invoke the angle-information calibration learning algorithm to compute, in real time for each camera, the horizontal and vertical angles between its current image-plane normal vector and its image-plane normal vector at the initial position.

4. Using the results of step 3, apply the depth-information and world-coordinate extraction algorithm to compute and output, in real time, the target's depth and its world coordinates. The world coordinate system is a right-handed frame whose origin is the midpoint of the line connecting the geometric centers of the two wide-angle fixed-focus cameras, whose positive x axis is parallel to the horizontal plane, perpendicular to that connecting line, and directed along the tracking direction, and whose positive z axis points vertically upward.

5. If the target is not lost, return to step 3; if the target is lost, return to step 1.
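The five steps above form a simple state machine: search with camera A, bring camera B onto the target, then track and localize until the target is lost. A minimal Python sketch of this control flow, with the camera and algorithm details injected as placeholder callables (all names here are illustrative, not from the patent):

```python
from enum import Enum, auto

class Phase(Enum):
    SEARCH_A = auto()   # step 1: waiting for camera A to find the target
    SEARCH_B = auto()   # step 2: camera B follows camera A onto the target
    TRACKING = auto()   # steps 3-4: both cameras locked on, coordinates flow

def vision_loop(a_sees, b_sees, target_lost, locate, n_frames):
    """Driver for steps 1-5. `a_sees`/`b_sees` report whether each camera has
    found the target, `target_lost` reports loss of track, and `locate`
    stands in for the angle-calibration plus triangulation pipeline."""
    phase = Phase.SEARCH_A
    coords = []
    for _ in range(n_frames):
        if phase is Phase.SEARCH_A:
            if a_sees():
                phase = Phase.SEARCH_B
        elif phase is Phase.SEARCH_B:
            if b_sees():
                phase = Phase.TRACKING
        else:
            if target_lost():
                phase = Phase.SEARCH_A   # step 5: target lost, restart search
            else:
                coords.append(locate())  # steps 3-4: angles -> world coords
    return coords
```

Losing the target drops the loop back to the search phases, matching step 5's branch to step 1.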

The angle-information calibration learning algorithm consists of the following steps:

2.1) Initialize the learning entry angles (Δθ1, Δη1).

The learning entry angles are the horizontal and vertical angles through which the camera rotates, corresponding to the target's displacement in the camera image from the origin to its new position when the target moves during calibration.

2.2) Compute f(P) and g(Q): the horizontal distance P and vertical distance Q between the center of the camera image and the target, each expressed as a percentage of the length of the image's main diagonal.

2.3) Determine whether the target is at the center of the camera image:

If (|f(P)| < ε) ∧ (|g(Q)| < ε) for a threshold ε > 0, the target is at the center; jump to step 2.7).

If (|f(P)| > ε) ∨ (|g(Q)| > ε) for a threshold ε > 0, the target is not at the center; jump to step 2.4).
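Steps 2.2 and 2.3 can be sketched as follows; the function names and the example threshold value are assumptions for illustration:

```python
import math

def offset_percentages(cx, cy, tx, ty, width, height):
    """f(P) and g(Q) from step 2.2: horizontal and vertical offsets of the
    target (tx, ty) from the image centre (cx, cy), as fractions of the
    image's main diagonal."""
    diag = math.hypot(width, height)
    f_p = (tx - cx) / diag
    g_q = (ty - cy) / diag
    return f_p, g_q

def target_centred(f_p, g_q, eps=0.01):
    """Centre test from step 2.3: both offset magnitudes below threshold eps."""
    return abs(f_p) < eps and abs(g_q) < eps
```

For a 640x480 image the diagonal is 800 pixels, so a target 80 pixels right of centre gives f(P) = 0.1.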

2.4) Look up the calibration information table:

If the table contains angle information corresponding to |f(P)| and |g(Q)|, jump to step 2.8).

If it does not, jump to step 2.5).

2.5) Rotate the camera step by step so that the target coincides with the center of the camera image.

2.6) When the target coincides with the center of the camera image, jump to step 2.7); otherwise return to step 2.5).

2.7) When the target coincides with the image center, read out and output the horizontal angle θ and vertical angle η between the camera's current image-plane normal vector and its image-plane normal vector at the initial position, then go to step 2.10).

2.8) If the table contains angle information for |f(P)| and |g(Q)|, rotate the camera by that angle so that the target's geometric center coincides with the image center in the camera image; the direction of rotation is determined by the calibration-information conversion table.

2.9) Output the horizontal and vertical angles of the target, in the world coordinate system, relative to the camera's initial-position normal vector; wait for a preset delay T, then return to step 2.2).

2.10) Use a preset learning flag to determine whether learning is complete:

If learning is complete, go to step 2.2).

If not, the algorithm continues and calibration begins.

2.11) Determine whether this is the first calibration:

If the learning entry angles equal their initial values, this is the first calibration and the algorithm continues.

If they do not, this is not the first calibration; jump to step 2.13).

2.12) Rotate the camera by the threshold angles (θε, ηε) in the horizontal and vertical directions respectively, then jump to step 2.14).

2.13) Rotate the camera through the learning entry angles recorded when the previous calibration was interrupted, then jump to step 2.14).

2.14) If this is the initial calibration, write the threshold angles (θε, ηε) and the corresponding |f(P)| and |g(Q)| of the target's geometric center in the camera image into the calibration information table. If it is not the initial calibration, first determine whether the target moved before entering this step: if it moved, update the learning entry angles and jump to step 2.2); if it did not move, update the calibration information table.

2.15) Keeping the camera's vertical angle fixed, perform row calibration: each time the camera rotates through one unit horizontal angle, check whether the target's geometric center has left the camera image. If it has, go to step 2.17); if not, continue.

2.16) After each unit horizontal rotation, check whether the target has moved. If it has not, update the calibration information table and jump to step 2.15); if it has, update the learning entry angles and jump to step 2.2).

2.17) Change the camera's vertical angle and calibrate another row horizontally. If the target's geometric center has left the camera image, the calibration information table is complete; go to step 2.18). If not, go to step 2.14).

2.18) Once the calibration information table is complete, clear the learning flag and go to step 2.2).

The calibration information table is a two-dimensional lookup table recording the camera's imaging calibration information. Each entry records a target position together with the horizontal and vertical angles through which the camera's image-plane normal vector rotates as the target moves, in the camera image, from the origin to that position. The target position is expressed by f(P) and g(Q): the horizontal distance P and vertical distance Q between the image center and the target, as percentages of the image's main diagonal length.
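A minimal sketch of such a lookup table. The quantisation step for the (|f(P)|, |g(Q)|) keys is a hypothetical choice for illustration; the patent does not specify how entries are discretised:

```python
def quantise(value, step=0.05):
    """Map a continuous offset percentage onto a discrete table index
    (the 0.05 bin width is an assumed value)."""
    return round(value / step)

class CalibrationTable:
    """Maps quantised (|f(P)|, |g(Q)|) pairs to the rotation magnitudes
    (|dtheta|, |deta|) that recentre the target (steps 2.4 and 2.14)."""

    def __init__(self):
        self._table = {}

    def store(self, f_p, g_q, d_theta, d_eta):
        # Only quadrant-I magnitudes are stored, as the patent specifies.
        key = (quantise(abs(f_p)), quantise(abs(g_q)))
        self._table[key] = (abs(d_theta), abs(d_eta))

    def lookup(self, f_p, g_q):
        # Returns (|dtheta|, |deta|), or None when the entry is missing,
        # which corresponds to the "not in table" branch of step 2.4.
        return self._table.get((quantise(abs(f_p)), quantise(abs(g_q))))
```

Because only magnitudes are keyed, offsets in any quadrant hit the same entry; the conversion table below supplies the signs.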

The calibration-information conversion table can be described as follows:

First the camera image is divided into four regions by quadrant:

where imax is the number of rows of the camera image, jmax the number of columns, i the row index, j the column index, and Iij the pixel at row i and column j of the camera image.

The calibration information table records calibration information only for quadrant I of the camera image; the calibration information for the other quadrants is derived by applying the rules of the conversion table, shown below:

Quadrant I: if (f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|), rotate by (|Δθ|, |Δη|)
Quadrant II: if (−f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|), rotate by (−|Δθ|, |Δη|)
Quadrant III: if (−f(P) = |f(P)|) ∧ (−g(Q) = |g(Q)|), rotate by (−|Δθ|, −|Δη|)
Quadrant IV: if (f(P) = |f(P)|) ∧ (−g(Q) = |g(Q)|), rotate by (|Δθ|, −|Δη|)
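The quadrant rules amount to copying the sign of f(P) onto the horizontal rotation and the sign of g(Q) onto the vertical rotation. A one-function sketch (function name is illustrative):

```python
def signed_rotation(f_p, g_q, d_theta_abs, d_eta_abs):
    """Apply the quadrant conversion rules: the stored magnitudes
    (|dtheta|, |deta|) take the signs of f(P) and g(Q) respectively."""
    d_theta = d_theta_abs if f_p >= 0 else -d_theta_abs
    d_eta = d_eta_abs if g_q >= 0 else -d_eta_abs
    return d_theta, d_eta
```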

The depth-information and world-coordinate extraction algorithm consists of the following steps:

From the distance between the geometric centers of the two wide-angle fixed-focus cameras, and from the horizontal and vertical angles, obtained by the angle-information calibration learning algorithm, between each camera's current image-plane normal vector and its initial-position normal vector, compute the target's world coordinates using trigonometric functions.
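A sketch of this triangulation under one plausible set of conventions, which are assumptions here (the patent's Figures 4 and 5 fix the actual ones): the cameras sit on the y axis at (0, ±b/2, 0), horizontal angles are measured from the initial normal (the +x axis) positive toward +y, and the x coordinate serves as depth.

```python
import math

def world_coordinates(baseline, theta_a, theta_b, eta_a):
    """Triangulate the target from the two horizontal angles and camera A's
    vertical angle. Camera A is at (0, +baseline/2, 0), camera B at
    (0, -baseline/2, 0); both initially look along +x."""
    denom = math.tan(theta_b) - math.tan(theta_a)
    if abs(denom) < 1e-9:
        raise ValueError("parallel rays: target at infinity or angle error")
    # Intersect the two horizontal rays: x*tan(theta) gives each ray's
    # lateral offset from its own camera.
    x = baseline / denom
    y = x * math.tan(theta_a) + baseline / 2.0
    # Camera A's vertical angle gives z over its horizontal range to the target.
    r_a = math.hypot(x, y - baseline / 2.0)
    z = r_a * math.tan(eta_a)
    return x, y, z
```

With a 1 m baseline and a target 2 m straight ahead at baseline height, both horizontal angles are atan(0.25) in magnitude and the function returns approximately (2, 0, 0).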

Scope of application of the invention:

The camera calibration part of this technical scheme can be used as a calibration algorithm for monocular vision, and can also reduce multi-camera calibration to independent monocular calibrations, lowering the calibration requirements on the cameras. The accompanying target tracking and positioning method can be applied in active vision systems.

The invention has the following advantages:

1) Conventional active-vision camera calibration requires the target to remain stationary during calibration, and the calibration must precede image processing. The calibration method of the invention converts the continuous calibration process into a discrete one, using the target's intermittent stationary states during operation of the vision algorithm to calibrate the cameras. This lowers the calibration requirements and allows the vision algorithm and the calibration process to run simultaneously. The self-calibrating design eliminates manual calibration work and greatly simplifies preparation. The self-calibration also supports automatic interruption recovery: even if the process is disturbed externally, it saves a calibration breakpoint and resumes once the calibration conditions are met again.

2) Because the vision system adopts a stereo architecture based on active vision, it can accurately obtain the target's depth and compute the corresponding world coordinates. Compared with a conventional stereo vision system, the accompanying depth-information and world-coordinate extraction algorithm is highly tolerant of lens distortion, so the system can use wide-angle fixed-focus cameras to enlarge its field of view without the distortion of wide-angle lenses affecting the computation of depth and world coordinates.

3) The vision system used by the invention has at least four degrees of freedom, more than conventional vision systems. Its active-vision character makes the system a closed-loop control system based on image feedback, giving the accompanying vision algorithms stronger adaptability to the environment.

Description of the Drawings

Figure 1: Abstract schematic of the basic structure of the chameleon-inspired biological vision system

Figure 2: Overall structure of the extended chameleon-inspired biological vision system

Figure 3: Image-plane schematic during operation of the angle-information calibration learning algorithm

Figure 4: Schematic of X and Y coordinate extraction

Figure 5: Schematic of Z coordinate extraction

Figure 6: Flowchart of the overall procedure

Figure 7: Flowchart of the angle-information calibration learning algorithm

Figure 8: Schematic of the learning process of the angle-information calibration learning algorithm

Figure 9: Flowchart of the learning process of the angle-information calibration learning algorithm

Figure 10: Schematic of the calibration information table

Figure 11: Flowchart of the depth-information and world-coordinate extraction algorithm

In the figures: 1 - wide-angle fixed-focus camera; 2 - vertical-direction stepper motor; 3 - horizontal-direction stepper motor; 4 - image acquisition processor; 5 - motor controller.

Detailed Description

This embodiment is described in detail below with reference to Figures 1 to 11.

1. Hardware platform and basic working principle for implementing the chameleon-inspired visual coordinate extraction algorithm.

In this embodiment, the hardware platform implementing the chameleon-inspired visual coordinate extraction algorithm is described as follows:

As shown in Figure 1, the hardware platform comprises two wide-angle fixed-focus cameras (1), vertically moving stepper motors (2), horizontally moving stepper motors (3), an image acquisition processor (4), and a motor controller (5). Each wide-angle fixed-focus camera (1) is fitted with one vertically moving stepper motor (2) and one horizontally moving stepper motor (3). The purpose is to imitate the visual system of chameleons, in particular the eye-movement mechanism, with degrees of freedom in both the vertical and horizontal directions. Compared with a conventional vision system, this structure has distinctly active-vision characteristics, which helps reduce the complexity of the algorithms. Stepper motors were chosen because they return more accurate angle information than servos.

After the wide-angle fixed-focus camera (1) acquires an image of the outside world, it sends it to the attached image acquisition processor (4). After processing with the corresponding preset algorithms, the image acquisition processor (4) sends control quantities to the motor controller (5), which drives the vertically moving stepper motor (2) and the horizontally moving stepper motor (3), thereby rotating the wide-angle fixed-focus camera (1).

Note in particular that the geometric center of the wide-angle fixed-focus camera (1) and the geometric centers of the vertically moving stepper motor (2) and horizontally moving stepper motor (3) lie on a single line perpendicular to the horizontal plane; this property is what allows the depth-information and world-coordinate extraction algorithm described below to work.

As shown in Figure 2, a telephoto zoom camera with two degrees of freedom of motion in the horizontal and vertical directions can be added to this basic vision system, mounted above the midpoint of the line connecting the two wide-angle fixed-focus cameras and higher than them, improving the recognition capability of the extended system. The added telephoto zoom camera must be able to move through 180 degrees in both the horizontal and vertical directions to support the subsequent recognition steps. Its long focal length and zoom range guarantee both the sharpness of the image of the target to be recognized and the fraction of the telephoto image the target occupies, making full use of the camera's image acquisition capability. It mainly imitates the fovea of the retina in the human visual system.

To address the problem that the target easily leaves the common field of view of the vision system, one vertically moving and one horizontally moving stepper motor can be added at the base of the platform. These added motors must be more powerful than the aforementioned stepper motors. When the target is about to leave the edge of the system's images, the rotation angles of this added structure are adjusted to keep the target within the common field of view. This structure mainly imitates the neck of the chameleon and assists the whole vision system. The added telephoto zoom camera and two motors give the system more pronounced active-vision characteristics, which helps prevent losing the target while the algorithms run, and provide clearer target image detail, which benefits recognition and navigation algorithms.

2. Implementation of the chameleon-imitating biological-vision coordinate extraction algorithm, as shown in Figure 6:

1) Initialize the positions of the wide-angle fixed-focus cameras

First, rotate the two wide-angle fixed-focus cameras to the preset initial position: the position in which each camera's image-plane normal vector is parallel to the horizontal plane and perpendicular to the line connecting the geometric centers of the two cameras;

2) Imitating the visual system of chameleons, the two wide-angle fixed-focus cameras search for the target in different preset directions. Once one of them finds the target, it starts a monocular visual tracking algorithm based on the CamShift algorithm to keep tracking the target, and returns in real time the horizontal and vertical angles between its current image-plane normal vector and the image-plane normal vector at its initial position;

The CamShift algorithm provides the position of the target's center point in the current image and the size the target occupies in the current image.

This bionic search method, combined with wide-angle fixed-focus cameras, covers a large visual search range. At this stage the images captured by the two cameras may not share a common field of view, so they can be processed as two independent monocular video sequences.

For the purposes of this description, assume the camera that finds the target first is wide-angle fixed-focus camera A.

Wide-angle fixed-focus camera B then follows camera A in searching for the target. Once camera B finds the target, it likewise starts the CamShift-based monocular visual tracking algorithm to keep tracking it;

In this embodiment, the process by which camera B follows camera A in searching for the target is decomposed into a vertical search and a horizontal search. First, camera B's vertical angle is set equal to camera A's; then camera B searches for the target horizontally. The horizontal search proceeds as follows:

In this embodiment, the horizontally moving stepper motor (3) gives the camera a 180-degree horizontal rotation range. In the world coordinate system, let α be the angle between camera A's image-plane normal vector and the positive Y axis, and β the corresponding angle for camera B. When 0° < α ≤ 90°, camera B starts at β = 0° and scans in the direction of increasing angle; when 90° < α < 180°, camera B starts at β = 180° and scans in the direction of decreasing angle;
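The rule above for choosing camera B's scan start and direction can be sketched as follows (illustrative Python; the function name and return convention are ours, not from the patent):

```python
def camera_b_scan_plan(alpha_deg):
    """Choose camera B's horizontal scan start angle beta and scan
    direction from camera A's horizontal angle alpha (degrees).
    Returns (start_beta_deg, step_sign): step_sign = +1 scans toward
    increasing angle, -1 toward decreasing angle."""
    if 0 < alpha_deg <= 90:
        return 0.0, +1    # start at beta = 0 deg, increase the angle
    elif 90 < alpha_deg < 180:
        return 180.0, -1  # start at beta = 180 deg, decrease the angle
    raise ValueError("alpha outside the 180-degree travel of the pan motor")
```

Starting the scan on the side of the baseline nearer to camera A's heading minimizes the sweep needed before camera B also acquires the target.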

The world coordinate system is defined as follows: a right-handed coordinate system whose origin is the midpoint of the line connecting the geometric centers of the two wide-angle fixed-focus cameras, whose positive X axis is parallel to the horizontal plane, perpendicular to that connecting line, and points along the visual tracking direction, and whose positive Z axis points vertically upward;

Once camera B finds the target, it likewise starts the CamShift-based monocular visual tracking algorithm to keep tracking the target.

Once both wide-angle cameras are tracking, a continuously moving target may cause some tracking residual and tracking lag. To eliminate them, the algorithm can be extended with a PID controller. Specifically: in each camera's current image, the target's horizontal and vertical distances from the image center are used as the feedback quantities of a PID control algorithm, making the wide-angle cameras track the target more accurately.
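The PID extension described above can be sketched as follows; the gains are illustrative placeholders (the patent does not specify values), and one controller instance would be run per axis with the pixel offset from the image center as the error:

```python
class PanTiltPID:
    """Minimal PID sketch: turns the target's pixel offset from the
    image center into a motor command, reducing tracking residual
    (integral term) and tracking lag (derivative term)."""

    def __init__(self, kp=0.5, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err, dt=1.0):
        # err: signed pixel offset of the target from the image center
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```

Each camera would use two instances, one fed the horizontal offset and one the vertical offset, with the outputs mapped to the stepper-motor commands.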

3) Call the angle-information calibration learning algorithm to compute in real time, once cameras A and B are both tracking the target, the horizontal and vertical angles between each camera's current image-plane normal vector and its initial-position image-plane normal vector;

4) Using the results of the angle-information calibration learning algorithm, call the depth-information and world-coordinate extraction algorithm to compute in real time the target's depth and its world coordinates. The world coordinate system is as defined above: a right-handed coordinate system whose origin is the midpoint of the line connecting the geometric centers of the two wide-angle cameras, whose positive X axis is parallel to the horizontal plane, perpendicular to that connecting line, and points along the visual tracking direction, and whose positive Z axis points vertically upward;

5) Output the target's world coordinates. If the target is not lost, return to step 3); if the target is lost, return to step 2).
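The depth recovery in step 4) is not spelled out in this excerpt; once both cameras have the target centered, it reduces to intersecting two horizontal rays over the known baseline. The sketch below assumes the world frame defined above, with the cameras on the Y axis and angles measured from the X axis; these sign conventions are our assumptions for illustration, not taken verbatim from the patent:

```python
import math

def target_world_coords(theta_a, theta_b, eta_a, baseline):
    """Triangulation sketch: cameras A and B sit at (0, +baseline/2, 0)
    and (0, -baseline/2, 0); theta_a, theta_b are the signed horizontal
    angles of each camera's optical axis (radians, from the world X axis,
    positive toward +Y) once the target is centered; eta_a is camera A's
    vertical angle. Returns the target's (x, y, z) world coordinates."""
    d = baseline / 2.0
    denom = math.tan(theta_b) - math.tan(theta_a)
    if abs(denom) < 1e-9:
        raise ValueError("rays are parallel; no depth solution")
    x = 2.0 * d / denom            # depth along the tracking direction
    y = x * math.tan(theta_a) + d  # lateral position
    z = x * math.tan(eta_a)        # height from camera A's vertical angle
    return x, y, z
```

For a target straight ahead on the X axis, the two horizontal angles are equal and opposite and the recovered lateral position is zero.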

If the vision system to which the chameleon-imitating coordinate extraction algorithm is applied includes the stepper motors described above at its base, then after both wide-angle cameras have started CamShift-based monocular tracking, whenever the target is about to move out of their common field of view, the horizontal and vertical rotation angles of the added motors can be adjusted to keep the target within the common field of view.

If the vision system is extended with the telephoto zoom camera described above, then starting from the target's world coordinates obtained in step 4), a coordinate transformation moves the origin of the world coordinate system to the geometric center of the telephoto zoom camera and computes the target's coordinates in the new system. Rotating the camera according to these coordinates brings the target's center point approximately into coincidence with the center of the telephoto image; CamShift-based monocular tracking then keeps the telephoto camera on the target, and its focal length is adjusted until the target occupies a preset percentage of the telephoto image, at which point recognition can begin.

2. Angle-information calibration learning algorithm

The angle-information calibration learning algorithm used in the chameleon-imitating coordinate extraction algorithm can be described as follows:

Figure 3 shows the initial image captured by either wide-angle fixed-focus camera at the start of the angle-information calibration learning algorithm. The intersection of the two dashed lines is the camera's imaging center; with this center as origin, the horizontal dashed line as the LX axis, and the vertical dashed line as the LY axis, a camera image coordinate system is established. The sphere at the top is the target of interest, with coordinates (P, Q); the horizontal length of the whole image is denoted X and the vertical length Y.

Flag definition: let Marker_learning be the learning-mode flag. When Marker_learning is nonzero, the algorithm runs in learning mode; when it is 0, the algorithm does not run the learning mode.

Purpose of the algorithm: while the camera tracks the target, output in real time the horizontal and vertical angles between the camera's current image-plane normal vector and its initial-position image-plane normal vector.

The flowchart of the algorithm is shown in Figure 7.

The angle-information calibration learning algorithm handles the self-calibration of a single camera. The description of its steps assumes wide-angle fixed-focus camera A and its two stepper motors, but the algorithm applies equally to camera B and its two stepper motors.

The algorithm steps are as follows:

1) Initialize the learning entry angles (Δθ1, Δη1): set Δθ1 = θε, Δη1 = ηε, where (θε, ηε) are threshold angles whose values must be tuned according to the camera's actual distortion, e.g. (3°, 3°). The learning entry angles are the horizontal and vertical camera rotation angles corresponding to the target moving from the image origin to its new position when the target moves during calibration; they record the progress of self-calibration within the algorithm. When (Δθ1 ≠ θε) ∨ (Δη1 ≠ ηε), the algorithm has stored a self-calibration breakpoint; when Δθ1 = θε and Δη1 = ηε, it has not.

2) In the wide-angle camera image (Figure 3), compute the horizontal distance P and vertical distance Q between the image center and the target as percentages of the image's main diagonal length, i.e. f(P) = P/√(X² + Y²) and g(Q) = Q/√(X² + Y²), where X and Y are the length and width of the camera image. Positive P means the target lies to the right of the image center, negative P to the left of it; positive Q means the target lies above the image center, negative Q below it.
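The normalization by the main diagonal length can be sketched directly (the function name is ours; the formula follows the definition above, which is elided in the source text and reconstructed from the surrounding prose):

```python
import math

def normalized_offsets(P, Q, X, Y):
    """f(P) and g(Q): the target's signed horizontal and vertical
    offsets from the image center, divided by the image's main
    diagonal length sqrt(X^2 + Y^2)."""
    diag = math.hypot(X, Y)
    return P / diag, Q / diag
```

Normalizing by the diagonal makes the offsets resolution-independent, so the same calibration table and thresholds work at any image size.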

3) Determine whether the target is at the center of the camera image, i.e. whether |f(P)| and |g(Q)| are both smaller than a preset threshold ε > 0.

If (|f(P)| < ε) ∧ (|g(Q)| < ε), the target approximately coincides with the image center: go to step 7);

If (|f(P)| > ε) ∨ (|g(Q)| > ε), the target does not coincide with the image center, and the algorithm continues.

The threshold ε decides whether the target has approximately coincided with the image center. A larger ε makes the algorithm run faster but reduces its accuracy; a smaller ε gives higher accuracy but slows the algorithm down. Its value must be tuned to the required angular accuracy and the real-time constraints of the vision system; a reference value is 1/50.

4) If the target does not coincide with the image center, look up the calibration information table to check whether |Δθ| and |Δη| values corresponding to |f(P)| and |g(Q)| have already been recorded. If corresponding |Δθ| and |Δη| values exist, jump to step 8); if not, the algorithm continues.

Here Δθ is, in the world coordinate system, the horizontal angle between the projection onto the horizontal plane of the vector from the camera's geometric center to the target's geometric center and the projection onto the horizontal plane of the camera's current image-plane normal vector. Δη is the vertical angle between the projection of the vector from the camera's center to the target of interest's center onto the plane perpendicular to both the horizontal plane and the line connecting the two camera centers, and the projection of the camera's normal vector onto that same plane.

Note that this step tests whether the specific |Δθ| and |Δη| values exist, rather than whether the calibration information table exists, because the self-calibration process can be interrupted, much like interrupt handling in a computer. If the target moves, the learning entry angles of the calibration breakpoint are saved; the algorithm leaves the calibration learning process and returns to the main algorithm; when the target's center again coincides with the image center and the target is stationary, the saved learning entry angles are used to resume the learning process. This means that at the time of this test the calibration table may be incomplete, rather than fully present or fully absent; therefore the existence of the calibration entry for each particular |f(P)| and |g(Q)| must be checked individually.

The benefits of adding a breakpoint-return mechanism to self-calibration are evident: 1. It improves the algorithm's adaptability by turning calibration from a continuous process into a discrete one: the target is no longer required to stay stationary at the start of the algorithm; instead, the discrete stationary intervals of the target during normal operation are used for calibration. 2. It lets the algorithm run normally without waiting for self-calibration to finish: the output rate of angle information becomes progressively more real-time as self-calibration proceeds. 3. It makes the algorithm more intelligent.

5) Rotate the camera step by step until the target coincides with the image center in the camera image

Implementation: gradually change the duty cycle of the PWM waves sent to stepper motors (2) and (3), so that the wide-angle fixed-focus camera (1) rotates in the direction that reduces the distances |P| and |Q| between the image center and the target center. The change in duty cycle is split into a horizontal step μ1 and a vertical step μ2. The step is not a fixed value but is computed from f(P) or g(Q) as μ1 = k·μ0·f(P) and μ2 = k·μ0·g(Q), where k is a preset gain constant and μ0 is the unit step. Since f(P) and g(Q) can be positive or negative, the step μ can also be positive or negative.
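The proportional step-size rule above can be sketched as follows; the values of k and μ0 are illustrative, since the patent says they are tuned experimentally:

```python
def pwm_step(fP, gQ, k=2.0, mu0=0.001):
    """Duty-cycle step sizes mu1 (pan) and mu2 (tilt), proportional
    to the normalized offsets f(P) and g(Q): mu = k * mu0 * offset.
    The sign of each step follows the sign of its offset, so the
    camera always turns toward the target."""
    mu1 = k * mu0 * fP  # horizontal duty-cycle step
    mu2 = k * mu0 * gQ  # vertical duty-cycle step
    return mu1, mu2
```

Because the step shrinks as the target approaches the image center, the camera converges without overshooting the way a fixed step would.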

While incrementing or decrementing the PWM duty cycle, if the hardware includes the stepper motors described above for moving the whole system horizontally and vertically, compute the horizontal angle θ and vertical angle η between the camera's current image-plane normal vector and its initial-position normal vector, and check whether |θ| approaches the critical motion range of stepper motor (3) and whether |η| approaches that of stepper motor (2). If |θ| > |θmax| − |θε2|, where |θmax| is the maximum motion angle of stepper motor (3) and θε2 is a small threshold angle, e.g. the motor rotation corresponding to 3 to 5 unit PWM steps μ0, then before the algorithm continues, the output angle of the added horizontal-motion stepper motor is adjusted so that the whole vision system moves horizontally, reducing |θ| and extending the horizontal motion range of the vision system to which the algorithm is applied. If |η| > |ηmax| − |ηε2|, where |ηmax| is the maximum motion angle of stepper motor (2) and ηε2 is a small threshold angle, e.g. the motor rotation corresponding to 3 to 5 unit PWM steps μ0, then before the algorithm continues, the output angle of the added vertical-motion stepper motor is adjusted so that the whole vision system moves vertically, reducing |η| and extending the vertical motion range. When |θ| < |θmax| − |θε2| and |η| < |ηmax| − |ηε2|, the algorithm continues.

6) When |f(P)| and |g(Q)| are both smaller than the preset threshold ε, i.e. (|f(P)| < ε) ∧ (|g(Q)| < ε), the target is judged to approximately coincide with the image center: jump to step 7); otherwise return to step 5).

7) When the target approximately coincides with the image center, read out and output the horizontal angle θ and vertical angle η between the camera's current image-plane normal vector and its initial-position normal vector. Go to step 10).

8) If the calibration information table contains the corresponding entry, look up the |Δθ| and |Δη| values for the current |f(P)| and |g(Q)|. Using the calibration-information conversion table, attach signs to |Δθ| and |Δη| according to the image quadrant in which the target lies, giving the horizontal rotation angle Δθ and vertical rotation angle Δη of the camera.

The calibration information table is described in detail after the calibration-principle section below, and the calibration-information conversion table after the calibration-information-table section.

If the hardware includes the stepper motors described above for moving the whole system horizontally and vertically, compute the horizontal angle θ and vertical angle η between the camera's current image-plane normal vector and its initial-position normal vector. According to the quadrant of the wide-angle camera image in which the target currently lies, determine the signs of Δθ and Δη from the conversion table, then check whether |θ ± Δθ| approaches the critical motion range of stepper motor (3) and whether |η ± Δη| approaches that of stepper motor (2).

If |θ ± Δθ| > |θmax| − |θε2|, where |θmax| is the maximum motion angle of stepper motor (3) and θε2 is a small threshold angle, e.g. the motor rotation corresponding to 3 to 5 unit PWM steps μ0, then before the algorithm continues, the output angle of the added horizontal-motion stepper motor is adjusted so that the whole vision system moves horizontally, reducing |θ ± Δθ| and extending the horizontal motion range of the vision system. Because this way of extending the motion range changes the target's position in the image, the calibration table entries cannot be used directly; the algorithm must return to step 4) to look up the calibration information again.

If |η ± Δη| > |ηmax| − |ηε2|, where |ηmax| is the maximum motion angle of stepper motor (2) and ηε2 is a small threshold angle, e.g. the motor rotation corresponding to 3 to 5 unit PWM steps μ0, then before the algorithm continues, the output angle of the added vertical-motion stepper motor is adjusted so that the whole vision system moves vertically, reducing |η ± Δη| and extending the vertical motion range of the vision system. Because this changes the target's position in the image, the calibration table entries cannot be used directly; the algorithm must return to step 4) to look up the calibration information again.

If |θ ± Δθ| < |θmax| − |θε2| and |η ± Δη| < |ηmax| − |ηε2|, the algorithm continues.

After consulting the calibration information table and the conversion table, PWM waves whose duty cycles correspond to the camera's horizontal rotation angle Δθ and vertical rotation angle Δη are sent to the stepper motors, driving the wide-angle fixed-focus camera so that |f(P)| and |g(Q)| decrease.
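The table lookup with quadrant-dependent signs in step 8) can be sketched as follows. The key quantization and the sign convention (positive angle = turn toward the positive image axis) are our assumptions for illustration; the patent fixes the signs in its conversion table, which lies outside this excerpt:

```python
def signed_calibration_angles(fP, gQ, table):
    """Look up |dtheta| and |deta| in a calibration table keyed by
    (|f(P)|, |g(Q)|) and attach signs from the image quadrant the
    target occupies. Returns None when the entry has not been
    learned yet, mirroring the fall-through to step 5)."""
    key = (round(abs(fP), 3), round(abs(gQ), 3))
    entry = table.get(key)
    if entry is None:
        return None  # not calibrated yet: continue stepwise rotation
    abs_dtheta, abs_deta = entry
    dtheta = abs_dtheta if fP > 0 else -abs_dtheta
    deta = abs_deta if gQ > 0 else -abs_deta
    return dtheta, deta
```

Storing only magnitudes and recovering signs from the quadrant quarters the size of the table, since the camera optics are symmetric about the image center.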

9) Output, in the world coordinate system, the target's current low-precision actual horizontal angle θ + Δθ and low-precision actual vertical angle η + Δη, where θ and η are the horizontal and vertical angles between the camera's current image-plane normal vector and its initial-position normal vector. Delay for a time T, where T is a preset delay, and return to step 2).

Because the target may move in real time while the algorithm runs, the stepper-motor control quantities must also be updated in real time. Therefore, after the PWM wave computed here has been sent, the algorithm waits a fixed time T before proceeding to update the motor control quantities. The delay T must be tuned to actual needs: a longer T reduces the computational load but makes the system sluggish, while a shorter T makes the system more responsive at the cost of more computation.

10) Check whether the Marker_learning flag, whose value is preset, is 0. If it is 0, the calibration information table has already been built: go to step 2). If it is nonzero, continue the algorithm and start or resume building the calibration information table.

Calibration principle

Figure 8 shows the operation of the self-calibration process of the angle-information calibration learning algorithm. Calibration proceeds in the first quadrant of the camera image coordinate system: the camera's vertical angle is fixed and a horizontal calibration pass is performed; the vertical angle is then changed and a horizontal pass is performed for the new vertical angle; this loop repeats until the target leaves the camera image in the vertical direction. While calibrating pixels in the same row of the first quadrant the camera moves horizontally, which means the target's g(Q) value should fluctuate by less than a threshold; that threshold must be determined during actual tuning.

Calibration requires two conditions: first, the target lies at the center of the camera image; second, the target is stationary. The first condition has already been guaranteed by the preceding steps, so the following steps only need to determine whether the target is stationary. The determination is made in two steps:

Step one: while the camera performs horizontal calibration at a fixed vertical angle, determine whether the target is stationary. Method: before the horizontal pass starts, compute g(Q) and assign it to g(Q0), i.e. g(Q0) = g(Q); then start the horizontal pass, compute g(Q) in real time, and test whether |g(Q) − g(Q0)| < ε2 holds. If it holds, the target has not moved; if not, the target has moved. ε2 is a small positive reference threshold, e.g. 1/50 of the image's main diagonal length; its setting depends on the camera's performance and the experimental requirements.

Step two: after the camera finishes the horizontal calibration of one row and rotates continuously to the starting calibration position of the next row, determine whether the target has moved. Method: assign the previous row's g(Q0) to g(Q1); before the next row's horizontal calibration starts, compute g(Q) and assign it to g(Q0), i.e. g(Q0) = g(Q); then test |g(Q1) - g(Q0)| < ε3. If the inequality holds, the target has not moved; otherwise it has. ε3 is a reference threshold, a positive value slightly larger than ε2, which can be taken as 1/30 of the main diagonal length, depending on the required accuracy.

If the target moves during the calibration process, calibration stops immediately and the learning entry angle corresponding to the moment of motion is recorded. When the condition for triggering calibration is met again, the unfinished calibration resumes from the recorded learning entry angle.
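The interrupt-and-resume bookkeeping can be sketched as below; the class and method names are illustrative, and the "learning entry angle equals its initialization value" sentinel follows the first-calibration test described in step 11):

```python
class CalibrationBreakpoint:
    """Minimal sketch of the breakpoint bookkeeping described above."""

    def __init__(self, theta_eps, eta_eps):
        # The learning entry angle is initialised to the threshold angles;
        # that initial value doubles as the "never interrupted" sentinel.
        self.entry = (theta_eps, eta_eps)
        self._init = (theta_eps, eta_eps)

    def interrupt(self, d_theta, d_eta):
        # Remember where calibration stopped when the target moved.
        self.entry = (d_theta, d_eta)

    def is_first_calibration(self):
        return self.entry == self._init

bp = CalibrationBreakpoint(5.0, 5.0)
print(bp.is_first_calibration())   # True: no interruption recorded yet
bp.interrupt(15.0, 24.3)
print(bp.is_first_calibration())   # False: resume from (15.0, 24.3)
```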

The detailed program flow chart for steps 11 through 18 is shown in Figure 9.

11) Determine whether this is the first calibration

Define variables and assign values:

Standard horizontal angle θ0: the horizontal angle of the image-plane normal vector, relative to the camera normal vector at the camera's initial position, at the moment the camera starts calibration;

Standard vertical angle η0: the vertical angle of the image-plane normal vector, relative to the camera normal vector at the initial position, at the moment the camera starts calibration;

Δθ, Δη, θ, η are defined as before;

The relationship among θ, Δθ and θ0 can be expressed as: θ = θ0 + Δθ

The relationship among η, Δη and η0 can be expressed as: η = η0 + Δη

Judgment method:

Judge whether the learning entry angle (Δθ1, Δη1) currently equals (θε, ηε), i.e. whether (Δθ1 = θε) ∧ (Δη1 = ηε) holds.

Judgment result:

If it holds, the algorithm continues to the next step. A positive result means the initial value of the learning entry angle was never reassigned, i.e. the self-calibration process was never interrupted and no learning entry angle from an interruption is stored; the algorithm next begins building the calibration information table.

If it does not hold, jump to step 13). A negative result means the calibration process was interrupted before, and the learning entry angle corresponding to the last interruption is stored.

12) In the camera coordinate system, rotate the camera by (θε, ηε) in the horizontal and vertical directions respectively, θε and ηε being the threshold angles, and compute g(Q) after the rotation. Define the target stationary reference value g(Q0) and the alternative target stationary reference value g(Q1), assign g(Q0) = g(Q) and g(Q1) = 0, then jump to step 14). g(Q0) is used to judge whether the target remains stationary during the horizontal calibration of any row; the alternative value g(Q1) is used to judge whether the target remains stationary while the camera, having finished one row of calibration, rotates into position for the next row.

Specific method: gradually change the duty cycle of the PWM wave sent to the vertical-motion stepper motor so that the camera rotates continuously downward. When the difference between the vertical angle η of the camera's current image-plane normal vector relative to its initial-position camera normal vector and the standard vertical angle η0 equals the preset vertical learning threshold angle |ηε|, the camera stops. Compute g(Q) at this point, set the target stationary reference value g(Q0) = g(Q) and the alternative target stationary reference value g(Q1) = 0. Then gradually change the duty cycle of the PWM wave sent to the horizontal-motion stepper motor so that the camera rotates continuously to the left. When the difference between the horizontal angle θ of the current image-plane normal vector relative to the initial-position camera normal vector and the standard horizontal angle θ0 equals the preset horizontal learning threshold angle |θε|, start building the calibration information table for the current g(Q) value and jump to step 14).
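The "ramp the PWM duty cycle until a preset angle is reached" motion can be sketched as follows; `angle_read` and `adjust_duty` stand in for encoder and motor-driver access, and the duty-to-angle gain of the toy axis is an arbitrary illustration:

```python
def rotate_until(angle_read, adjust_duty, target_delta, duty_step=0.01):
    """Nudge the PWM duty cycle until the camera has turned through
    |target_delta| degrees from where it started."""
    start = angle_read()
    step = duty_step if target_delta > 0 else -duty_step
    while abs(angle_read() - start) < abs(target_delta):
        adjust_duty(step)

class FakeAxis:
    """Toy stand-in for one pan/tilt axis: each 0.01 duty change is
    assumed to turn the camera 0.1 degrees."""
    def __init__(self):
        self.angle = 0.0
    def read(self):
        return self.angle
    def adjust(self, duty_delta):
        self.angle += duty_delta * 10.0

axis = FakeAxis()
rotate_until(axis.read, axis.adjust, -5.0)   # rotate "downward" by 5 degrees
print(axis.angle)                            # stops within one step of -5.0
```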

The threshold angle is rotated through before the calibration information table is built for the following reason: the angle information output through the calibration information table is coarse-precision, whereas the angle the algorithm must output when the target approximately coincides with the image center point is high-precision. Therefore, once the target is about to coincide with the image center point, the calibration information table is no longer used to guide the motor rotation; high-precision angle information is obtained instead. Consequently the algorithm does not need camera calibration information within the threshold angle range, i.e. it does not calibrate the shaded area in Figure 8.

13) The algorithm now skips the self-calibration work preceding the learning entry angle and resumes directly from the breakpoint of the previous self-calibration run;

Gradually change the duty cycle of the PWM wave sent to the vertical-motion stepper motor. In the camera coordinate system, first rotate the camera continuously downward until the angle it has turned through equals the vertical learning entry angle Δη1, then stop. Compute g(Q) at this point, set the alternative target stationary reference value g(Q1) = g(Q0), and set the target stationary reference value g(Q0) = g(Q).

Gradually change the duty cycle of the PWM wave sent to the horizontal-motion stepper motor; in the camera coordinate system, rotate the camera continuously to the left until the angle it has turned through equals the horizontal learning entry angle Δθ1. Continue to the next step.

14) If g(Q1) equals zero, compute the current f(P) value, record the values Δθ = θε and Δη = ηε into the calibration information table at the position corresponding to the current f(P) and g(Q) values, and continue to the next step.

If g(Q1) is not zero, judge whether the absolute difference between the alternative target stationary reference value g(Q1) and the target stationary reference value g(Q0) is below the threshold, i.e. whether |g(Q1) - g(Q0)| < ε3. If the inequality holds, compute the current f(P) value, record the current Δθ and Δη into the calibration information table at the position corresponding to the current f(P) and g(Q), and continue to the next step. If it does not hold, record the current Δθ and Δη, set the learning entry angle Δθ1 = Δθ, Δη1 = Δη, and jump to step 2).

15) In the camera coordinate system, rotate the camera leftward through a horizontal angle corresponding to one unit change μ0 of the PWM duty cycle, and judge whether the target's geometric center point has left the camera image through its right edge. If it has, go to step 17); if not, continue the algorithm;

16) Compute the current g(Q) value and test whether |g(Q) - g(Q0)| < ε2 holds. If it does, compute the current f(P) value, record the Δθ and Δη values corresponding to this f(P), g(Q) pair into the calibration information table, and return to step 15). Otherwise, record the current Δθ and Δη, set the learning entry angle Δθ1 = Δθ, Δη1 = Δη, and jump to step 2);
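Steps 15) and 16) amount to a scan loop over one row. A sketch, with all camera and motor interfaces abstracted behind callables (every name here is illustrative):

```python
def calibrate_row(read_fp, read_gq, read_angles, rotate_left_unit,
                  target_off_image, table, g_q0, eps2):
    """One horizontal calibration row (steps 15 and 16).
    Returns ('row_done', None) when the target leaves the image to the
    right, or ('interrupted', (d_theta, d_eta)) when the target moved
    and a breakpoint must be saved."""
    while True:
        rotate_left_unit()                    # step 15: one unit angle left
        if target_off_image():
            return ('row_done', None)         # proceed to step 17
        g_q = read_gq()
        if abs(g_q - g_q0) >= eps2:           # step 16: target moved
            return ('interrupted', read_angles())
        d_theta, d_eta = read_angles()        # record one calibration entry
        table[(abs(read_fp()), abs(g_q))] = (abs(d_theta), abs(d_eta))
```

Step 17) would then restore Δθ = |θε|, step the vertical angle, and call the same row loop again.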

In step 16), newly obtained calibration information is recorded into the calibration information table, and the algorithm returns to collect more, only after confirming that the new information shows no excessive jump in the vertical direction, i.e. that |g(Q) - g(Q0)| < ε2. Otherwise the target is judged to have moved, and the self-calibration process must terminate after saving a breakpoint.

17) In the camera coordinate system, rotate the camera rightward until Δθ = |θε|, then rotate it downward through a vertical angle corresponding to one unit change μ0 of the PWM duty cycle. Judge whether the target's geometric center point has left the camera image through its top edge. If it has, the calibration information table is complete; go to step 18). If not, compute the current g(Q), update the alternative target stationary reference value g(Q1) = g(Q0), then update the target stationary reference value g(Q0) = g(Q), and go to step 14);

The Camshift-based tracking algorithm that assists the angle-information calibration learning algorithm supplies it with the position of the target's center point in the camera image and the size the target occupies in the image. With these two pieces of information, whether the target's geometric center point has left the camera image can be determined. Specifically: when f(P) > ε4 or g(Q) > ε5, the target is judged to have left the camera image. What has actually happened at this point is that the target's center point has approached the edge of the image. ε4 and ε5 are thresholds; in this embodiment each is set slightly below the value that f(P) or g(Q), respectively, takes when the target center reaches the image boundary. The threshold settings depend on camera performance and experimental requirements.

18) After the calibration information table has been established, clear the Marker_learning flag to 0 and go to step 2);

Only after confirming that the self-calibration process of the angle-information calibration learning algorithm is complete does the algorithm go to step 18 and clear the learning flag Marker_learning to 0. Once Marker_learning has been cleared, the algorithm no longer enters the self-calibration state, whether or not a self-calibration breakpoint is stored.

Calibration Information Table

As shown in Figure 10, the calibration information table mentioned in the angle-information calibration learning algorithm is a two-dimensional data table recording the camera's imaging calibration information. The table consists of |f(P)| and |g(Q)| for any point in the camera image coordinate system, together with the horizontal and vertical angles the camera must turn through in the camera coordinate system to move the target from the origin to that point. That is, each position in the table records one pair of |Δθ|, |Δη| values corresponding to one pair of |f(P)|, |g(Q)| values. The data in each row correspond to the same |f(P)| value, with |g(Q)| increasing from the leftmost to the rightmost column; the data in each column correspond to the same |g(Q)| value, with |f(P)| increasing from top to bottom. For example, in Figure 10 the position corresponding to |f(P)| = 0.25, |g(Q)| = 0.33 records the pair Δθ = 15°, Δη = 24.3°.
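A minimal data-structure sketch of such a table; the nearest-entry lookup with a tolerance is an assumption for illustration, while the patent indexes the table by its |f(P)|, |g(Q)| grid positions:

```python
class CalibrationTable:
    """Sketch of the 2-D calibration table: keys are (|f(P)|, |g(Q)|)
    pairs, values are (|d_theta|, |d_eta|) in degrees."""

    def __init__(self):
        self.entries = {}

    def record(self, fp, gq, d_theta, d_eta):
        self.entries[(abs(fp), abs(gq))] = (abs(d_theta), abs(d_eta))

    def lookup(self, fp, gq, tol=0.05):
        # Nearest recorded entry; None means the algorithm must fall back
        # to stepwise rotation (step 3.5 of the claims).
        key = min(self.entries,
                  key=lambda k: (k[0] - abs(fp))**2 + (k[1] - abs(gq))**2)
        if abs(key[0] - abs(fp)) <= tol and abs(key[1] - abs(gq)) <= tol:
            return self.entries[key]
        return None

t = CalibrationTable()
t.record(0.25, 0.33, 15.0, 24.3)        # the example pair from Figure 10
print(t.lookup(0.26, 0.32))             # -> (15.0, 24.3)
print(t.lookup(0.90, 0.90))             # -> None
```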

Because the lens distortion of the camera used is approximately symmetric, the calibration information table records only the calibration information for the quadrant-I portion of the camera image. The calibration information for the other quadrants can be obtained through the operations shown in the calibration information conversion table.

Calibration Information Conversion Table

Calibration information conversion table: this table assists the operation of the calibration information table.

First, the camera image is divided into four regions according to quadrant:

Here imax is the number of rows of the camera image and jmax the number of columns; i denotes the i-th row, j the j-th column, and Iij the pixel in row i, column j of the camera image.

The calibration information table records only the calibration information for the quadrant-I camera image; the calibration information for the other quadrants is computed by following the rules of the calibration information conversion table. The second column of the table gives the test that identifies each quadrant; the third and fourth columns give the angle and direction through which the camera should rotate, in the camera coordinate system, when the target lies in that quadrant.

Quadrant I    IF: (f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|)      |Δθ|     |Δη|
Quadrant II   IF: (-f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|)     -|Δθ|    |Δη|
Quadrant III  IF: (-f(P) = |f(P)|) ∧ (-g(Q) = |g(Q)|)    -|Δθ|    -|Δη|
Quadrant IV   IF: (f(P) = |f(P)|) ∧ (-g(Q) = |g(Q)|)     |Δθ|     -|Δη|
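The conversion reduces to attaching signs to the stored magnitudes; a sketch, assuming the quadrant tests follow the symmetric sign pattern of the table (function name is illustrative):

```python
def signed_rotation(f_p, g_q, d_theta_abs, d_eta_abs):
    """Apply the quadrant rules above: the magnitudes |Δθ|, |Δη| stored
    for quadrant I take their signs from the signs of f(P) and g(Q)."""
    d_theta = d_theta_abs if f_p >= 0 else -d_theta_abs
    d_eta = d_eta_abs if g_q >= 0 else -d_eta_abs
    return d_theta, d_eta

print(signed_rotation(0.25, 0.33, 15.0, 24.3))    # quadrant I:   (15.0, 24.3)
print(signed_rotation(-0.25, -0.33, 15.0, 24.3))  # quadrant III: (-15.0, -24.3)
```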

The angle-information calibration learning algorithm must self-calibrate for two reasons. 1. If it did not self-calibrate during operation, it could output angle information only when the target approximately coincides with the image center point; the angle output would then not be real-time, which hinders the algorithms that consume it. 2. Without self-calibration during operation, its ability to track the target would degrade, making the target easier to lose.

3. Depth information and world coordinate extraction algorithm

The steps of the depth information and world coordinate extraction algorithm are as follows:

1) Calibrate the vision system used by the imitation lizard-suborder Chamaeleonidae biological vision coordinate extraction algorithm. After manual calibration, the distance d between the geometric center points of the two wide-angle fixed-focus cameras is known. As shown in Figure 4, to describe the algorithm's computation in detail, assume that for each camera the angle between the ray along its image-plane normal vector and the line connecting the two cameras' geometric center points is acute. Let d1 be the distance between the geometric center point of the left wide-angle fixed-focus camera and the projection of the target onto the line connecting the two cameras' geometric center points, and d2 the corresponding distance for the right camera; then d1 + d2 = d.

2) Receive from the angle-information calibration learning algorithm the current horizontal angles θ1, θ2 and vertical angles η1, η2 of the two wide-angle fixed-focus cameras' image-plane normal vectors relative to their respective initial-position camera normal vectors. The angle information from that algorithm includes angles obtained by querying the calibration information table and angles obtained by reverse computation from the control quantities. The former is low-precision but highly real-time; the latter is high-precision but less real-time. Depending on the specific requirements, the depth information and world coordinate extraction algorithm may mask one class of angle information or accept both.

3) From the calibrated quantity d and the angle information θ1, θ2, compute the depth of the target relative to the midpoint of the line connecting the two cameras' geometric center points, i.e. the target's X coordinate.

With respect to Figure 4, the depth information of the target can be expressed as:

X = d1 / tan θ1 = d2 / tan θ2

from which it follows that:

d1 = (tan θ1 / tan θ2) · d2

Substituting d1 + d2 = d gives:

(tan θ1 / tan θ2 + 1) · d2 = d

Rearranging:

d2 = d / (tan θ1 / tan θ2 + 1),   d1 = d - d2 = d - d / (tan θ1 / tan θ2 + 1)

Correspondingly:

X = d2 / tan θ2 = d1 / tan θ1 = d / [(tan θ1 / tan θ2 + 1) · tan θ2] = d / (tan θ1 + tan θ2)

That is, the depth information, the X coordinate, is X = d / (tan θ1 + tan θ2).
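The closed form above can be checked numerically; a minimal sketch:

```python
import math

def depth_from_angles(d, theta1_deg, theta2_deg):
    """Depth of the target from the midpoint of the camera baseline,
    X = d / (tan(theta1) + tan(theta2)), per the derivation above."""
    t1 = math.tan(math.radians(theta1_deg))
    t2 = math.tan(math.radians(theta2_deg))
    return d / (t1 + t2)

# Symmetric check: with a 0.2 m baseline and both cameras turned
# 45 degrees inward, the target sits 0.1 m in front of the midpoint.
print(depth_from_angles(0.2, 45.0, 45.0))   # approximately 0.1
```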

4) Further computation then yields the Y coordinate of the system world coordinate system: the target's projection lies at distance d1 from the left camera along the baseline, so its offset from the baseline midpoint is d/2 - d1 (equivalently d2 - d/2), with the sign fixed by the chosen direction of the Y axis. In step 1), only the case shown in Figure 4 was considered, in which the angle between the ray along each camera's image-plane normal vector and the line connecting the two cameras' geometric center points is acute. If an angle is obtuse, the computation is similar.

The Z coordinate of the world coordinate system can be obtained by a method similar to that used for the Y coordinate. As shown in Figure 5, once the depth information X has been obtained, it is combined with the vertical angle η1 of the camera's image-plane normal vector relative to its initial-position camera normal vector, obtained from the angle-information calibration learning algorithm, using the formula Z = tan(η1) × X to give the Z coordinate. At this point the target's world coordinates relative to the vision system coordinate system are fully established.
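The full world-coordinate computation can be sketched as follows. X and Z follow the formulas above; the sign convention chosen here for Y (measured from the baseline midpoint toward the left camera) is an assumption:

```python
import math

def world_coordinates(d, theta1_deg, theta2_deg, eta1_deg):
    """Target world coordinates relative to the baseline midpoint:
    X from triangulation, Y from the baseline offset, Z = tan(eta1)*X."""
    t1 = math.tan(math.radians(theta1_deg))
    t2 = math.tan(math.radians(theta2_deg))
    x = d / (t1 + t2)
    d1 = d * t1 / (t1 + t2)          # offset of the target from camera 1
    y = d / 2.0 - d1                 # offset from the baseline midpoint
    z = math.tan(math.radians(eta1_deg)) * x
    return x, y, z

x, y, z = world_coordinates(0.2, 45.0, 45.0, 0.0)
print(x, y, z)   # approximately (0.1, 0.0, 0.0) in this symmetric case
```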

5) Output the world coordinates of the target relative to the midpoint of the line connecting the two wide-angle fixed-focus cameras' geometric center points, and exit the algorithm.

The flow chart of this algorithm is shown in Figure 11.

The depth information and world coordinate extraction algorithm is kept separate from the angle-information calibration learning algorithm for two reasons. 1. The angle-information calibration learning algorithm must continuously update the target's angle information, so its real-time requirements are high and keeping the algorithm short matters. 2. The angle information output by the angle-information calibration learning algorithm is supplied not only to the depth information and world coordinate extraction algorithm but to other algorithms as well.

4. Monocular tracking algorithm based on the Camshift algorithm

The angle-information calibration learning algorithm uses a monocular tracking algorithm based on the Camshift algorithm, whose main purpose is to supply the angle-information calibration learning algorithm, in real time, with the size of the target in the camera image and the position of the target's geometric center point in the image.

Conventional multi-camera vision tracking algorithms must account for the coordination of several cameras during tracking; they are computationally heavy and complex, and, lacking active vision, they have a narrow tracking range and lose targets easily. The vision system used by the active-vision-based imitation lizard-suborder Chamaeleonidae biological vision coordinate extraction algorithm keeps each camera's tracking algorithm independent, which effectively reduces the complexity and the computational load of the algorithm.

The monocular visual tracking algorithm based on the Camshift algorithm is prior art and can be summarized as follows:

Convert the image returned by the camera to the HSV color space, extract the hue component, and build a color histogram and a color probability distribution map. Using the color probability distribution map, track the target with the Camshift algorithm. Establish an appropriate image search window and extract the coordinates of its center point. Compute control quantities and send them to the stepper motors, steering the camera so that the tracked target always stays at the center point of the image; the specific control method is the one described in the angle-information calibration learning algorithm.

Claims (2)

1. An imitation lizard-suborder Chamaeleonidae biological vision coordinate extraction algorithm, based on a physical platform formed by two wide-angle fixed-focus cameras and stepper motors, characterized by comprising the following steps:
1) making the two wide-angle fixed-focus cameras each search for the target; when camera A finds the target, enabling the monocular vision tracking algorithm based on the Camshift algorithm to maintain tracking of the target, and returning in real time the horizontal angle and vertical angle of camera A's current image-plane normal vector relative to the camera normal vector at camera A's initial position; camera A denotes the camera that finds the target first, and camera B denotes the other camera; the initial position is the position in which the image-plane normal vectors of the wide-angle fixed-focus cameras are parallel to each other and to the horizontal plane, and perpendicular to the line segment formed by connecting the geometric center points of the two wide-angle fixed-focus cameras;
2) camera B follows camera A in searching for the target; after camera B finds the target, it also enables the Camshift-based monocular vision tracking algorithm to maintain tracking of the target;
3) invoking the angle information calibration learning algorithm to compute in real time, after cameras A and B have both acquired the target, the horizontal angle and vertical angle of each camera's image-plane normal vector relative to its respective initial-position camera normal vector;
wherein the angle information calibration learning algorithm comprises the following steps:
3.1) initializing the learning entry angle Δθ1, Δη1;
the learning entry angle refers to the horizontal angle and vertical angle through which the camera has correspondingly rotated, for the position to which the target has moved from the origin in the camera image, at the moment the target moves during the calibration process;
3.2) computing the horizontal distance P and the vertical distance Q between the camera image center point and the target, each as a percentage, f(P) and g(Q), of the camera image's main diagonal length;
3.3) judging whether the target is at the camera image center point:
if (|f(P)| < ε) ∧ (|g(Q)| < ε), with threshold ε > 0, the target is at the center point; jump to step 3.7);
if (|f(P)| > ε) ∨ (|g(Q)| > ε), with threshold ε > 0, the target is not at the center point; jump to step 3.4);
3.4) looking up the calibration information table:
if the calibration information table contains angle information corresponding to |f(P)| and |g(Q)|, the algorithm jumps to step 3.8);
if the calibration information table does not contain angle information corresponding to |f(P)| and |g(Q)|, jump to step 3.5);
3.5) rotating the camera step by step so that the target approaches coincidence with the image center point in the camera image;
3.6) when the target coincides with the camera image center point, jumping to step 3.7); otherwise returning to step 3.5);
3.7) while the target coincides with the image center point, reading the camera's current image-plane normal vector and outputting its horizontal angle θ and vertical angle η relative to the camera normal vector at this camera's initial position; going to step 3.10);
3.8) if the calibration information table contains angle information corresponding to |f(P)| and |g(Q)|, rotating the camera through that angle so that the target's geometric center point coincides with the image center point in the camera image, the rotation direction being determined according to the calibration information conversion table;
3.9) outputting, in the world coordinate system, the horizontal angle and vertical angle of the target relative to the initial-position camera image-plane normal vector; delaying for time T, where T is a preset delay amount, and returning to step 3.2);
3.10) by default study zone bit, judge whether to learn complete,
If learn completely, go to step 3.2);
If do not learn completely, algorithm continues, and starts to demarcate;
3.11) judge whether it is to demarcate for the first time,
If study inlet angle equals initial value, for demarcating for the first time, now algorithm continues;
If study inlet angle is not equal to initial value,, for demarcating for the first time, now do not jump to step 3.13);
3.12) make camera rotate respectively θ with vertical direction in the horizontal direction ε, η ε, θ ε, η εfor threshold angle, jump to step 3.14),
3.13) camera turns over the study inlet angle of demarcating record while interrupting for last time, jumps to step 3.14);
3.14) if demarcate for the first time, the threshold angle θ that camera is rotated respectively with vertical direction in the horizontal direction ε, η εand the camera image internal object geometric center point corresponding with it | f (P) | and | g (Q) | write demarcation information table;
If not demarcate for the first time, first judge whether target motion has occurred before entering this step; If target travel, renewal learning inlet angle, and jump to step 2.2); If target is motion no, upgrade and demarcate information table;
3.15) Keep the vertical angle of the camera constant and perform row calibration: each time the camera turns through one unit horizontal angle, determine whether the target geometric center point has left the camera image; if it has left, go to step 3.17); if it has not, continue the algorithm;
3.16) After the camera turns through each unit horizontal angle, determine whether the target has moved; if it has not moved, update the calibration information table and jump to step 3.15); if it has moved, update the learning entry angle and jump to step 3.2);
3.17) Change the vertical angle of the camera and perform horizontal calibration of another row; if the target geometric center point has left the camera image, the calibration information table is fully established, go to step 3.18); if it has not, go to step 3.14);
3.18) After the calibration information table is established, clear the learning flag bit and go to step 3.2);
The calibration information table is a two-dimensional vector table recording camera imaging calibration information. The table records target position information together with the horizontal angle and vertical angle through which the camera's image-plane normal vector correspondingly turns as the target moves from the origin of the camera image to the target position. The target position information is expressed by the horizontal distance P and vertical distance Q between the camera image center point and the target, as percentages f(P) and g(Q) of the camera image's principal diagonal length;
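As an illustration only (the patent gives no code), the calibration information table described above can be sketched as a small lookup structure. The class name, the list representation, and the nearest-entry lookup are assumptions of this sketch, not part of the claimed method.

```python
import math

class CalibrationTable:
    """Two-dimensional calibration information table: maps a target
    position in the image, given as percentages (f_p, g_q) of the
    principal-diagonal length, to the horizontal/vertical angles
    (d_theta, d_eta) the camera turned to bring its image-plane
    normal onto that position."""

    def __init__(self):
        self.entries = []  # [((f_p, g_q), (d_theta, d_eta)), ...]

    def record(self, f_p, g_q, d_theta, d_eta):
        # One row written during calibration (steps 3.14 / 3.16).
        self.entries.append(((f_p, g_q), (d_theta, d_eta)))

    def lookup(self, f_p, g_q):
        # Angles stored for the recorded entry nearest to (f_p, g_q).
        dist = lambda e: math.hypot(e[0][0] - f_p, e[0][1] - g_q)
        return min(self.entries, key=dist)[1]
```

For example, after recording two rows at f(P) = 0.10 and f(P) = 0.20, `lookup(0.19, 0.05)` returns the angles of the nearer entry.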
Wherein the calibration information conversion table described in step 3.8) can be described as follows:
First the camera image is divided into 4 regions according to quadrant:
where imax is the number of rows of the camera image; jmax is the number of columns of the camera image; i refers to row i of the camera image; j refers to column j of the camera image; Iij refers to the pixel in row i, column j of the camera image;
The calibration information table records calibration information only for quadrant I of the camera image; the calibration information for the other quadrants can be calculated by following the rules of the calibration information conversion table, which is as follows:
Quadrant I: IF (f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|) THEN Δθ = |Δθ|, Δη = |Δη|
Quadrant II: IF (−f(P) = |f(P)|) ∧ (g(Q) = |g(Q)|) THEN Δθ = −|Δθ|, Δη = |Δη|
Quadrant III: IF (−f(P) = |f(P)|) ∧ (−g(Q) = |g(Q)|) THEN Δθ = −|Δθ|, Δη = −|Δη|
Quadrant IV: IF (f(P) = |f(P)|) ∧ (−g(Q) = |g(Q)|) THEN Δθ = |Δθ|, Δη = −|Δη|
In the table, the second column gives the test that determines the quadrant, and the third and fourth columns give the angle and direction through which the camera rotates in the camera coordinate system when the target lies in that quadrant;
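The sign rules of the conversion table reduce to a simple function. A minimal sketch, assuming the quadrant is determined by the signs of f(P) and g(Q); the function name is illustrative, and the quadrant-IV vertical sign is taken as negative, following the symmetry of quadrants I–III.

```python
def convert_angles(f_p, g_q, abs_d_theta, abs_d_eta):
    """Restore the signed camera rotation (d_theta, d_eta) from the
    quadrant-I magnitudes, using the quadrant of the signed target
    position (f_p, g_q): I (+,+), II (-,+), III (-,-), IV (+,-)."""
    d_theta = abs_d_theta if f_p >= 0 else -abs_d_theta  # horizontal sign
    d_eta = abs_d_eta if g_q >= 0 else -abs_d_eta        # vertical sign
    return d_theta, d_eta
```

Only quadrant-I magnitudes need to be stored in the table; the signs are recovered from the target's image position at lookup time.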
4) Based on the result obtained in step 3), use the depth information and world coordinate extraction algorithm to calculate and output in real time the target's depth information and its world coordinates in the world coordinate system. The world coordinate system is a right-handed coordinate system whose origin is the midpoint of the line joining the geometric center points of the two fixed-focus wide-angle cameras, whose positive x axis is parallel to the horizontal plane, perpendicular to the line joining the two camera geometric center points, and directed along the direction of visual pursuit, and whose positive z axis is directed upward, perpendicular to the horizontal plane;
5) If the target is not lost, return to step 3); if the target is lost, return to step 1).
2. The coordinate extraction algorithm of lacertilian-imitating suborder Chamaeleonidae biological vision according to claim 1, wherein the depth information and world coordinate extraction algorithm comprises the following step:
According to the distance between the geometric center points of the two fixed-focus wide-angle cameras, and according to the horizontal angle and vertical angle of each camera's current image-plane normal vector relative to its initial position, as obtained by the angle information calibration learning algorithm, calculate the world coordinates of the target using trigonometric functions.
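To illustrate the trigonometric step of claim 2 (a sketch, not the patent's own formulas): assume the two cameras sit at (0, ±b/2, 0) on the y axis, pan angles θ are measured in the horizontal plane from the +x (visual pursuit) axis toward +y, and the left camera's tilt angle η gives the height. These conventions are assumptions of this example.

```python
import math

def triangulate(b, theta_l, theta_r, eta_l):
    """World coordinates (x, y, z) of the target from the baseline b,
    the pan angles of the left/right cameras, and the left camera's
    tilt angle (all in radians).
    Left camera at (0, +b/2, 0), right camera at (0, -b/2, 0)."""
    # Horizontal rays: y = +b/2 + x*tan(theta_l) and y = -b/2 + x*tan(theta_r);
    # intersecting them gives the depth x along the pursuit direction.
    x = b / (math.tan(theta_r) - math.tan(theta_l))
    y = b / 2 + x * math.tan(theta_l)
    # Height from the left camera's tilt and its horizontal range to the target.
    r_l = math.hypot(x, y - b / 2)
    z = r_l * math.tan(eta_l)
    return x, y, z
```

With b = 1, θL = 0, θR = atan(0.5) and ηL = atan(0.5), the target resolves to (2, 0.5, 1); x is the target's depth along the pursuit direction.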
CN201110460701.XA 2011-12-31 2011-12-31 Coordinate extracting algorithm of lacertilian-imitating suborder chamaeleonidae biological vision Expired - Fee Related CN102682445B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110460701.XA CN102682445B (en) 2011-12-31 2011-12-31 Coordinate extracting algorithm of lacertilian-imitating suborder chamaeleonidae biological vision


Publications (2)

Publication Number Publication Date
CN102682445A CN102682445A (en) 2012-09-19
CN102682445B true CN102682445B (en) 2014-12-03

Family

ID=46814312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110460701.XA Expired - Fee Related CN102682445B (en) 2011-12-31 2011-12-31 Coordinate extracting algorithm of lacertilian-imitating suborder chamaeleonidae biological vision

Country Status (1)

Country Link
CN (1) CN102682445B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104007761B (en) * 2014-04-30 2016-05-11 宁波韦尔德斯凯勒智能科技有限公司 Tracking control method and the device of the Visual Servo Robot based on position and attitude error
CN108377342B (en) * 2018-05-22 2021-04-20 Oppo广东移动通信有限公司 Double-camera shooting method and device, storage medium and terminal
CN109976335A (en) * 2019-02-27 2019-07-05 武汉大学 A kind of traceable Portable stereoscopic live streaming intelligent robot and its control method
CN115550538B (en) * 2021-06-30 2025-05-13 北京小米移动软件有限公司 Tracking shooting method, device and medium
CN118170003B (en) * 2024-05-14 2024-07-19 济南大学 A PID parameter optimization method based on improved horned lizard optimization algorithm

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101334276A (en) * 2007-06-27 2008-12-31 中国科学院自动化研究所 A visual measurement method and device
CN102034092A (en) * 2010-12-03 2011-04-27 北京航空航天大学 Active compound binocular rapid target searching and capturing system based on independent multiple-degree-of-freedom vision modules

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101517568A (en) * 2006-07-31 2009-08-26 生命力有限公司 System and method for performing motion capture and image reconstruction


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Naigong Yu et al. Study on mobile robot mapping based on binocular vision and Voronoi diagram. Electrical and Control Engineering (ICECE), 2011 International Conference on, 2011-09-18, pp. 860-863. *
Yu Hongshan. Research on key technologies of active stereo vision and their applications. Wanfang Dissertation Database, 2005-04-25, full text. *
Yu Naigong et al. High-precision real-time feature extraction in a manipulator visual servo system. Control and Decision, vol. 24, no. 10, Oct. 2009, pp. 1568-1572. *


Similar Documents

Publication Publication Date Title
CN109308693B (en) Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera
CN109102525B (en) A mobile robot following control method based on adaptive pose estimation
CN110163963B (en) Mapping device and mapping method based on SLAM
Kragic et al. Vision for robotic object manipulation in domestic settings
WO2019090833A1 (en) Positioning system and method, and robot using same
CN102682445B (en) Coordinate extracting algorithm of lacertilian-imitating suborder chamaeleonidae biological vision
CN110222581A (en) A kind of quadrotor drone visual target tracking method based on binocular camera
CN110842940A (en) Building surveying robot multi-sensor fusion three-dimensional modeling method and system
CN100554877C (en) A kind of real-time binocular vision guidance method towards underwater research vehicle
CN109191504A (en) A kind of unmanned plane target tracking
CN102693543B (en) Method for automatically calibrating Pan-Tilt-Zoom in outdoor environments
CN103886107A (en) Robot locating and map building system based on ceiling image information
CN114245091B (en) Projection position correction method, projection positioning method, control device and robot
Loper et al. Mobile human-robot teaming with environmental tolerance
Alizadeh Object distance measurement using a single camera for robotic applications
Gratal et al. Visual servoing on unknown objects
Shoukat et al. Cognitive robotics: Deep learning approaches for trajectory and motion control in complex environment
Li et al. Learning view and target invariant visual servoing for navigation
CN112232275A (en) Obstacle detection method, system, equipment and storage medium based on binocular recognition
CN110517284A (en) A Target Tracking Method Based on LiDAR and PTZ Camera
Mateos Apriltags 3d: dynamic fiducial markers for robust pose estimation in highly reflective environments and indirect communication in swarm robotics
Chen et al. VPL-SLAM: a vertical line supported point line monocular SLAM system
CN112000135A (en) Three-axis holder visual servo control method based on human face maximum temperature point characteristic feedback
Zhang et al. Automated extrinsic calibration of multi-cameras and lidar
CN114529800A (en) Obstacle avoidance method, system, device and medium for rotor unmanned aerial vehicle

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20141203