
CN101377812B - Method for recognizing position and attitude of space plane object - Google Patents


Info

Publication number
CN101377812B
CN101377812B (application CN2008101677837A)
Authority
CN
China
Prior art keywords
pose
image
point
matrix
point set
Prior art date
Legal status
Expired - Fee Related
Application number
CN2008101677837A
Other languages
Chinese (zh)
Other versions
CN101377812A (en)
Inventor
张广军 (Zhang Guangjun)
王巍 (Wang Wei)
魏振忠 (Wei Zhenzhong)
赵征 (Zhao Zheng)
孙军华 (Sun Junhua)
Current Assignee
Beihang University
Original Assignee
Beihang University
Priority date
Filing date
Publication date
Application filed by Beihang University
Priority to CN2008101677837A
Publication of CN101377812A
Application granted
Publication of CN101377812B
Status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a spatial plane object pose recognition method that requires no prior matching information between object points and image points. The pose is computed from a planar point set, independent of the geometric shape of the planar object, and the ambiguity inherent in planar pose computation is resolved by the Tsai planar-target calibration method. Because no additional feature information is needed, the method relaxes the conditions under which the pose of a spatial plane object can be computed and widens the applicable scope of pose computation. It remains robust in occluded and cluttered scenes and under different levels of noise, which gives it considerable practical value.

Description

A Method for Recognizing the Pose of a Spatial Plane Object

Technical Field

The invention relates to the field of image processing, and in particular to a method for recognizing the pose of a spatial plane object.

Background Art

Object pose calculation is one of the basic tasks of machine vision, with wide and important applications in visual navigation, robot localization, object recognition, visual surveillance, industrial measurement, and so on. Pose calculation means that, with the camera already calibrated, the rigid-body transformation between the camera coordinate system and the object coordinate system is obtained from the correspondence between three-dimensional spatial primitives (such as points or lines) and two-dimensional image primitives, yielding the position and attitude of the object in the camera coordinate system; that is, the position and attitude of the object in the camera coordinate system are recognized by computing its pose.
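The rigid-body transformation mentioned above can be made concrete with a small sketch. This is not the patent's algorithm, only an illustration of what "pose" means here; the rotation and translation values are made up.

```python
import numpy as np

# Hedged sketch: a pose {R, t} maps a point given in the object frame into the
# camera frame via P_c = R @ P_w + t. Recovering {R, t} from 2D-3D
# correspondences is the "pose calculation" described above.
def to_camera_frame(R, t, P_w):
    """Apply the rigid-body transform P_c = R @ P_w + t."""
    return R @ P_w + t

# A 90-degree rotation about the z-axis plus a translation along the optical axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([0.0, 0.0, 5.0])
P_c = to_camera_frame(R, t, np.array([1.0, 0.0, 0.0]))
```

The object-frame point (1, 0, 0) lands at (0, 1, 5) in the camera frame.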

At present, commonly used pose calculation methods fall into two main categories. The first is point-based methods, which reduce to the classic perspective-n-point problem (PnP). PnP can be solved in many ways: when 3 to 5 matched three-dimensional points are known, the object pose can be obtained by solving a system of polynomial equations; when 6 or more matched points are known, it can be obtained by linear or nonlinear approximation. However, these methods require the matching between object points and image points to be known in advance, so the conditions for computing the object pose are rather strict.

The second category comprises methods based on geometric features. Some of these solve the pose of a spatial plane object from circular features, but they are only suitable for objects whose geometric feature is a circle, which limits their applicable scope, and they require additional features such as points or lines to resolve the ambiguity problem.

In addition, both categories can compute an object's pose only from special points of its geometric features, such as the four corners of a rectangle or the two endpoints of a straight line.

It follows that some existing pose calculations for spatial plane objects can be carried out only under specific conditions, which limits their applicable scope.

Summary of the Invention

In view of this, the main purpose of the present invention is to provide a method for recognizing the pose of a spatial plane object that relaxes the conditions imposed on the pose calculation and thereby extends its applicable scope.

To achieve the above purpose, the technical solution of the present invention is realized as follows:

The invention provides a method for recognizing the pose of a spatial plane object. A transformation relationship between object points and image points of the spatial plane object is established based on a weak perspective model, and the method includes:

extracting the edges of the geometric features of the spatial plane object image to obtain an edge image-point set, and taking points on the geometric features of the spatial plane object to obtain an object-point set; finding the maximum value max q_ij of the i-th row and j-th column of the matrix Q_{g×h} obtained by bidirectionally constrained optimization of the object-point set and the edge image-point set; constructing a matrix M whose rows and columns have the same numbers of elements as those of Q_{g×h}; setting the element of M at the position corresponding to max q_ij to 1, subject to each row and each column containing exactly one 1 with all other elements 0; and taking the matrix M as the matching matrix characterizing the transformation relationship between object points and image points;

giving an initial estimate of the transformation relationship between object points and image points;

obtaining an estimated transformation relationship between object points and image points from the object-point set, the edge image-point set, the initial estimate and the matching matrix, and obtaining from the estimated transformation relationship and the object-point set an image-point set matched to the object-point set;

obtaining, from the object-point set and the image-point set, the coordinates of the normal vector of the spatial plane object and of its centroid in the camera coordinate system;

obtaining the pitch angle and yaw angle of the spatial plane object in the camera coordinate system from its normal vector.
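The last step, extracting pitch and yaw from the plane's normal vector, can be sketched as follows. The patent does not spell out its angle convention, so the decomposition used here (yaw about the camera y-axis, pitch about the x-axis) is an assumption for illustration only.

```python
import math

# Hedged sketch: one common convention for extracting pitch and yaw from a
# unit normal n = (nx, ny, nz) expressed in the camera frame. The exact
# convention in the patent is not stated; this one is an assumption.
def pitch_yaw_from_normal(nx, ny, nz):
    yaw = math.atan2(nx, nz)                      # rotation about the y-axis
    pitch = math.atan2(-ny, math.hypot(nx, nz))   # rotation about the x-axis
    return pitch, yaw

# A plane facing the camera straight on has its normal along the optical axis.
pitch, yaw = pitch_yaw_from_normal(0.0, 0.0, 1.0)
```

With the normal along the optical axis both angles are zero; a normal along the x-axis gives a yaw of 90 degrees under this convention.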

Here, extracting the edges of the geometric features in step a means: extracting the edges of the geometric features of the spatial plane object image with the Canny operator.
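To illustrate how an edge image-point set is obtained from an image, here is a bare gradient-magnitude edge detector. It is a simplified stand-in, not the Canny operator itself: real Canny adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding.

```python
import numpy as np

# Hedged stand-in for the Canny step: Sobel gradients plus a single threshold.
# It only illustrates how an edge image-point set {p_j} comes out of an image.
def edge_points(img, thresh):
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # Sobel x
    ky = kx.T                                                    # Sobel y
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    mag = np.hypot(gx, gy)
    return [(i, j) for i in range(h) for j in range(w) if mag[i, j] > thresh]

# A tiny image with a vertical step edge down the middle.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
pts = edge_points(img, 1.0)
```

The detected points all sit on the two columns straddling the step, as expected.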

Giving the initial estimate of the transformation relationship between object points and image points in step b means: producing the initial estimate with a random number generator.

Obtaining the estimated transformation relationship between object points and image points in step c means: obtaining it with a deterministic annealing algorithm.

Obtaining the estimated transformation relationship between object points and image points with the deterministic annealing algorithm specifically comprises:

c1. setting the initial parameters of the deterministic annealing algorithm according to the initial value of the matching matrix;

c2. updating the matching matrix with the Sinkhorn algorithm;

c3. computing the estimated transformation relationship between object points and image points from the updated matching matrix with the Gauss-Seidel iterative method;

c4. obtaining, from the estimated transformation relationship and the object-point set, the image-point set matched to the object-point set through the expression for the transformation between object points and image points.

Updating the matching matrix with the Sinkhorn algorithm in step c2 comprises:

initializing the matching matrix;

normalizing each row and each column of the matching matrix;

repeating the normalization of the matching matrix in a loop.
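The three Sinkhorn sub-steps above can be sketched directly: alternately normalize the rows and columns of a positive matrix until it is (nearly) doubly stochastic. The slack row/column used later for occluded points is omitted here for brevity.

```python
import numpy as np

# Hedged sketch of the Sinkhorn step (c2): alternating row and column
# normalization drives a strictly positive matrix toward a doubly
# stochastic one. Iteration count and the example matrix are illustrative.
def sinkhorn(M, n_iter=100):
    M = M.copy()
    for _ in range(n_iter):
        M /= M.sum(axis=1, keepdims=True)  # row normalization
        M /= M.sum(axis=0, keepdims=True)  # column normalization
    return M

M0 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
M = sinkhorn(M0)
```

After convergence every row sum and column sum of M is 1.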

Obtaining the coordinates of the normal vector and of the centroid of the spatial plane object in the camera coordinate system in step d means: obtaining them with an iterative pose algorithm.

Obtaining the coordinates of the normal vector of the spatial plane object and of its centroid in the camera coordinate system with the iterative pose algorithm specifically comprises:

d1. computing two sets of pose vectors from the object-point set and the image-point set with the iterative pose algorithm;

d2. excluding the wrong one of the two sets of pose vectors by the Tsai planar-target calibration method, leaving the correct set of pose vectors;

d3. obtaining, from the correct pose vectors, the coordinates of the normal vector of the spatial plane object and of its centroid in the camera coordinate system.

The method for recognizing the pose of a spatial plane object provided by the present invention needs no prior matching information between object points and image points; the pose calculation is based on a planar point set and is independent of the geometric shape of the planar object, and the ambiguity problem in planar pose calculation is resolved by the Tsai planar-target calibration method.

Since no additional feature information is required, the method of the present invention relaxes the conditions imposed on the pose calculation of spatial plane objects and extends its applicable scope. The method also shows good robustness in occluded, cluttered environments and under different levels of noise, and thus has good application value.

Brief Description of the Drawings

Fig. 1 is a schematic flow chart of the method of the present invention for recognizing the pose of a spatial plane object;

Fig. 2 is a schematic diagram of the planar weak perspective model;

Fig. 3 is a schematic diagram of the principle of the iterative pose algorithm.

Detailed Description of the Embodiments

The technical solution of the present invention is further elaborated below with reference to the accompanying drawings and specific embodiments.

The scheme of the present invention is carried out under the condition that the matching between object points and image points is unknown, and is applicable to any spatial plane object satisfying weak perspective. As shown in Fig. 1, the method mainly includes the following steps:

Step 101: establish the transformation relationship between object points and image points of the spatial plane object based on the weak perspective model.

Here, the weak-perspective transformation between planar object points and image points is established as the transformation {A, B}, where A is a 2×2 affine transformation matrix and B is a 2×1 vector representing the image-point coordinates of the object's centroid.

From the transformation relationship between object points and image points, an expression for that transformation is obtained. Thus, when the object points are known, the image points matched one-to-one with them can be obtained from the expression.

The weak perspective model is established as follows:

The present invention applies to any spatial plane object satisfying weak perspective, so it is generally required that the distance from the object to the camera be at least ten times the variation in the object's surface depth. If the camera's field of view is small and the surface depth variation is small relative to the distance to the camera, that is, the distance is at least ten times the depth variation, then the depth of every point on the object can be approximated by a fixed value z0, where z0 is the depth coordinate of the object's centroid along the optical axis.

Assume the focal length of the camera is f, the coordinates of an arbitrary point P on the spatial plane object in the camera coordinate system are (x_c, y_c, z_c)^T, and (x, y)^T denotes the coordinates of the image point of P on the camera image plane. The weak perspective model can then be expressed as formula (1):

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \dfrac{f}{z_0} & 0 \\ 0 & \dfrac{f}{z_0} \end{pmatrix} \begin{pmatrix} x_c \\ y_c \end{pmatrix} \qquad (1)$$

where the factor $f/z_0$ is the scaling constant.
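Formula (1) says that under weak perspective the projection reduces to a uniform scaling of the (x_c, y_c) coordinates. A minimal sketch, with made-up focal length and depths close to z0 as the model requires:

```python
import numpy as np

# Hedged sketch of formula (1): every point shares the centroid depth z0,
# so projection is a uniform scaling s = f / z0 of the (xc, yc) coordinates.
def weak_perspective(points_c, f, z0):
    s = f / z0
    return s * points_c[:, :2]   # (x, y) = s * (xc, yc)

f, z0 = 2.0, 10.0
pts_c = np.array([[1.0,  2.0, 10.1],   # depths vary little around z0
                  [3.0, -1.0,  9.9]])
img_pts = weak_perspective(pts_c, f, z0)
```

With s = 0.2 the two points project to (0.2, 0.4) and (0.6, -0.2).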

After the weak perspective model is established, the expression of the transformation between object points and image points is derived from formula (1) as follows:

Suppose the geometric feature of the spatial plane object is a circle, and let the coordinates of the center of the spatial circle in the camera coordinate system be o_w(x_c0, y_c0, z_c0). As shown in the planar weak perspective model of Fig. 2, a three-dimensional coordinate system o_w x_w y_w z_w of the spatial circle is established with o_w as the origin. Let o'_w be the centroid of the spatial circle, coinciding with its center o_w; taking the plane through o'_w parallel to the image plane xoy as the x'_w o'_w y'_w coordinate plane and o'_w as the origin, a three-dimensional coordinate system o'_w x'_w y'_w z'_w is established, where the axes satisfy o'_w x'_w ∥ ox and o'_w y'_w ∥ oy with the same directions. For an arbitrary point p'_w on the plane x'_w o'_w y'_w, let its coordinates in the camera coordinate system o_c x_c y_c z_c be (x_c, y_c, z_c0) and its coordinates in o'_w x'_w y'_w z'_w be (x'_w, y'_w, 0). Substituting this fixed depth into formula (1) simplifies it to formula (2):

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} s & 0 \\ 0 & s \end{pmatrix} \begin{pmatrix} x_c \\ y_c \end{pmatrix} \qquad (2)$$

where $s = f/z_{c0}$ is the scaling constant.

From the transformation between the coordinate systems o'_w x'_w y'_w z'_w and o_c x_c y_c z_c, we have (x_c, y_c)^T = (x'_w, y'_w)^T + (x_c0, y_c0)^T; substituting into formula (2) gives formula (3):

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} s & 0 \\ 0 & s \end{pmatrix} \begin{pmatrix} x'_w \\ y'_w \end{pmatrix} + \begin{pmatrix} b_1 \\ b_2 \end{pmatrix} \qquad (3)$$

where $\begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = \begin{pmatrix} s\,x_{c0} \\ s\,y_{c0} \end{pmatrix}$.

The rotation between the coordinate systems o_w x_w y_w z_w and o'_w x'_w y'_w z'_w is described by formula (4):

$$\begin{pmatrix} x'_w \\ y'_w \\ z'_w \end{pmatrix} = R \begin{pmatrix} x_w \\ y_w \\ z_w \end{pmatrix} \qquad (4)$$

where $R = \begin{pmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{pmatrix}$ is an orthogonal matrix.

Points on the plane x_w o_w y_w of the spatial circle have z_w = 0, so formula (4) yields formula (5):

$$\begin{pmatrix} x'_w \\ y'_w \end{pmatrix} = \begin{pmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{pmatrix} \begin{pmatrix} x_w \\ y_w \end{pmatrix} \qquad (5)$$

Substituting formula (5) into formula (3) gives the transformation between object points and image points, the transformation {A, B}:

$$\begin{pmatrix} x \\ y \end{pmatrix} = A \begin{pmatrix} x_w \\ y_w \end{pmatrix} + B \qquad (6)$$

where A is the 2×2 affine transformation matrix $A = s \begin{pmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{pmatrix}$ and B is a 2×1 vector representing the image-point coordinates of the object's centroid; from formula (6) the image-point coordinates of the centroid of the spatial circle are $B = s(x_{c0}, y_{c0})^T$.
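The transformation {A, B} of formula (6) can be sketched numerically. The scale, rotation and centroid below are illustrative values, not from the patent:

```python
import numpy as np

# Hedged sketch of formula (6): A = s * [[r11, r12], [r21, r22]] and
# B = s * (xc0, yc0)^T map planar object-frame coordinates straight to
# image coordinates. s, theta and the centroid are made-up values.
s = 0.2
theta = np.pi / 6
R2 = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]])   # upper-left 2x2 block of R
A = s * R2
B = s * np.array([4.0, 2.0])                       # s * (xc0, yc0)

P_w = np.array([1.0, 0.0])                         # object point (xw, yw)
p = A @ P_w + B                                    # its image point (x, y)
```

One affine multiply-add per point turns the planar object coordinates into image coordinates.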

Step 102: extract the edges of the geometric features of the spatial plane object image to obtain the edge image-point set, take points on the geometric features of the spatial plane object to obtain the object-point set, and obtain from these two sets the matching matrix characterizing the transformation between object points and image points.

Here, any existing algorithm can be used to extract the edges of the geometric features of the image; extraction with the Canny operator is described below. The Canny edge operator extracts the edges of the geometric features of the spatial plane object image, giving the edge image-point set {p_j}, 1 ≤ j ≤ g, where g is the number of edge image points and is determined by the Canny operator. Object points are taken at intervals on the edges of the planar geometric features of the object, giving the object-point set {P_i}, 1 ≤ i ≤ h, where h is the number of object points and h ≥ g. The object points are taken by existing techniques and at random; the intervals between them may be equal or unequal, but the number of object points must be at least the number of edge image points.

Given the planar object-point set {P_i}, P_i = (x_wi, y_wi)^T, 1 ≤ i ≤ h, and the edge image-point set {p_j}, p_j = (x_j, y_j)^T, 1 ≤ j ≤ g, h ≥ g: formula (6) expresses the transformation between object points and image points, so the constraint between {P_i} and {p_j} can be represented by formula (6), i.e. by the transformation {A, B}. Each point in {P_i} can match at most one point in {p_j}, and each point in {p_j} can match at most one point in {P_i}; this is a bidirectionally constrained optimization problem, expressed by formula (7):

$$Q_{g\times h} = -\begin{pmatrix} (x_1-x'_1)^2 & \cdots & (x_1-x'_h)^2 \\ \vdots & \ddots & \vdots \\ (x_g-x'_1)^2 & \cdots & (x_g-x'_h)^2 \end{pmatrix} - \begin{pmatrix} (y_1-y'_1)^2 & \cdots & (y_1-y'_h)^2 \\ \vdots & \ddots & \vdots \\ (y_g-y'_1)^2 & \cdots & (y_g-y'_h)^2 \end{pmatrix} \qquad (7)$$

where $\begin{pmatrix} x'_i \\ y'_i \end{pmatrix} = A \begin{pmatrix} x_{wi} \\ y_{wi} \end{pmatrix} + B$, $1 \le i \le h$.

Formula (7) can be written in terms of the matrix Q as:

$$Q_{g\times h} = \begin{pmatrix} q_{11} & q_{12} & \cdots & q_{1h} \\ q_{21} & q_{22} & \cdots & q_{2h} \\ \vdots & \vdots & \ddots & \vdots \\ q_{g1} & q_{g2} & \cdots & q_{gh} \end{pmatrix} \qquad (8)$$

The matrix Q_{g×h} is obtained from the bidirectionally constrained optimization of the object-point set and the edge image-point set; the matching problem between object points and image points is then turned into seeking max q_ij, the maximum of the i-th row and j-th column of Q_{g×h}. A g×h matching matrix M characterizing the object-point/image-point matching can thus be constructed: M consists of elements m_ij, and each row and column of M has the same number of elements as the corresponding row and column of Q_{g×h}. For the maximum q_ij in each row or column of Q_{g×h}, the element m_ij of M at the corresponding position is set to 1, meaning that the i-th image point matches the j-th object point. The final matching matrix M satisfies the condition that each row and each column contain exactly one 1, with all other values 0.
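Turning Q into the binary matrix M can be sketched with a greedy stand-in for the full bidirectional optimization: repeatedly take the largest remaining q_ij and claim that row and column, so each row and column of M ends up with at most one 1.

```python
import numpy as np

# Hedged sketch: a greedy stand-in, not the patent's exact procedure, for
# turning Q into a one-to-one matching matrix M.
def match_matrix(Q):
    g, h = Q.shape
    M = np.zeros((g, h), dtype=int)
    Q = Q.astype(float).copy()
    for _ in range(min(g, h)):
        i, j = np.unravel_index(np.argmax(Q), Q.shape)
        M[i, j] = 1
        Q[i, :] = -np.inf     # row i is claimed
        Q[:, j] = -np.inf     # column j is claimed
    return M

# Negative squared distances: larger (closer to 0) means a better match.
Q = np.array([[-1.0, -9.0],
              [-4.0, -2.0]])
M = match_matrix(Q)
```

For this Q the best pairing is the diagonal: image point 1 with object point 1, image point 2 with object point 2.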

Here the phenomenon of missing points must be considered. For example, a cube has 8 corner points; if it is photographed from the front, the four back corners are hidden behind the four front corners, so in the camera coordinate system the four back corners coincide with the front ones: the back corners are missing points. To account for the inaccuracy that missing points introduce into the pose calculation, one row and one column are added to the matching matrix M, making M a (g+1)×(h+1) matrix: m_{i,h+1} = 1, 1 ≤ i ≤ g, means that the i-th image point matches no object point, and likewise m_{g+1,j} = 1, 1 ≤ j ≤ h, means that the j-th object point matches no image point. This discrete problem can be turned into a continuous one by introducing a control variable β (β > 0) and initializing the matching matrix M as

$$m^0_{ij} = \gamma \cdot e^{\beta q_{ij} - \alpha}, \quad 1 \le i \le g,\ 1 \le j \le h \qquad (9)$$

which guarantees $m^0_{ij} > 0$. The subsequent deterministic annealing is carried out according to formula (9), where α, γ and β are the parameter values of the deterministic annealing algorithm: γ is a constant scaling coefficient, β simulates the temperature in the deterministic annealing algorithm and starts from a very small initial value, and α is a very small constant expressing how close q_ij is to 0.
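The initialization of formula (9), with a slack row and column appended for occluded points, can be sketched as follows. The parameter values and the choice of slack entries are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of formula (9): m0_ij = gamma * exp(beta * q_ij - alpha)
# is strictly positive; a slack row/column is appended for unmatchable
# (occluded) points. Parameter values and slack entries are assumptions.
gamma, beta, alpha = 1.0, 1e-5, 1e-5
Q = np.array([[-100.0, -400.0],
              [-250.0, -900.0]])
M0 = gamma * np.exp(beta * Q - alpha)

g, h = M0.shape
M_slack = np.full((g + 1, h + 1), np.exp(-alpha))  # one choice of slack value
M_slack[:g, :h] = M0
```

Every entry is strictly positive, so the Sinkhorn normalization that follows is well defined.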

Step 103: give the initial estimate of the transformation relationship between object points and image points.

Here, a random number generator can be used to give the initial estimate of the transformation, {A_0, B_0}, where A_0 is a 2×2 matrix and B_0 is a 2×1 vector.

From the initial estimate {A_0, B_0} and the known object-point set, an estimated image-point set is obtained; from the edge image-point set {p_j} and the estimated image-point set, the value of the matrix Q_{g×h} can then be computed. The value of Q_{g×h} serves as the basis for setting the parameter β when the deterministic annealing algorithm is run.

Preferably, the magnitude of A_0 is set to about 10; then, when object points are matched to image points, the global objective function to be optimized converges easily to the global optimum, and the value of the transformation {A, B} corresponding to the global optimum is the estimated transformation relationship {A', B'} ultimately required, as explained in the later steps.

From the initial transformation {A_0, B_0}, the estimated image-point set {p'_i}, 1 ≤ i ≤ h, can be computed through the transformation {A, B}, i.e. formula (6), where h is the number of object points. Using an existing matrix construction method, the matrices U and V are constructed from the edge image-point set {p_j}:

$$U = \begin{pmatrix} x_1 & x_1 & \cdots & x_1 \\ x_2 & x_2 & \cdots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ x_g & x_g & \cdots & x_g \end{pmatrix}_{g\times h} \quad V = \begin{pmatrix} y_1 & y_1 & \cdots & y_1 \\ y_2 & y_2 & \cdots & y_2 \\ \vdots & \vdots & \ddots & \vdots \\ y_g & y_g & \cdots & y_g \end{pmatrix}_{g\times h} \qquad (10)$$

and the matrices U' and V' are constructed from the estimated image-point set {p'_i}:

$$U' = \begin{pmatrix} x'_1 & x'_2 & \cdots & x'_h \\ x'_1 & x'_2 & \cdots & x'_h \\ \vdots & \vdots & \ddots & \vdots \\ x'_1 & x'_2 & \cdots & x'_h \end{pmatrix}_{g\times h} \quad V' = \begin{pmatrix} y'_1 & y'_2 & \cdots & y'_h \\ y'_1 & y'_2 & \cdots & y'_h \\ \vdots & \vdots & \ddots & \vdots \\ y'_1 & y'_2 & \cdots & y'_h \end{pmatrix}_{g\times h} \qquad (11)$$

Substituting formulas (10) and (11) into formula (7) yields the matrix Q_{g×h}. The value of Q_{g×h} is used to determine the value of β, as explained in the steps below.
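The stacked matrices of formulas (10) and (11), subtracted element-wise as in formula (7), amount to a pairwise negative-squared-distance computation, which can be sketched with broadcasting. The point sets below are made up:

```python
import numpy as np

# Hedged sketch of formulas (7)/(10)/(11): stacking edge points into U, V and
# estimated points into U', V' and subtracting equals a broadcasting
# computation of negative squared distances between the two point sets.
edge = np.array([[0.0, 0.0], [1.0, 0.0]])              # {p_j},  g = 2
est = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # {p'_i}, h = 3

diff = edge[:, None, :] - est[None, :, :]              # shape (g, h, 2)
Q = -(diff ** 2).sum(axis=2)                           # q_ji = -||p_j - p'_i||^2
```

Each entry is zero when an edge point coincides with an estimated point and grows more negative with distance, exactly the structure formula (7) describes.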

Step 104: obtain the estimated transformation relationship between object points and image points from the object-point set, the edge image-point set, the initial estimate of the transformation, and the matching matrix, and obtain from the estimated transformation relationship and the object-point set the image-point set matched to the object-point set.

The key of this step is to compute the estimated transformation relationship between object points and image points, which is their actual transformation relationship; from it the actual image points matched one-to-one with the object points, and hence the actual image-point set, are obtained. The estimated transformation relationship is found by the deterministic annealing algorithm, as follows:

The deterministic annealing algorithm is carried out according to formula (9), with 1 ≤ i ≤ g and 1 ≤ j ≤ h. The parameter the algorithm solves for is the transformation {A, B} between object points and image points; the computation yields the estimated transformation {A′, B′}, from which the image-point set {p_i} matched to the object-point set {P_i} is obtained. Each pass of the deterministic annealing algorithm computes a value of the matching matrix M, and from that value a new object-point/image-point transformation {A_{i+1}, B_{i+1}} is obtained. When the transformation {A_{i+1}, B_{i+1}} brings the global optimization function to its global optimum, i.e., when the iteration termination condition of deterministic annealing is satisfied, the annealing loop ends, and this transformation {A_{i+1}, B_{i+1}} is the estimated transformation {A′, B′} between object points and image points. The specific method is:

First, the parameter values of the deterministic annealing algorithm must be set, i.e., each parameter in formula (9) is assigned a value.

The parameters of the deterministic annealing algorithm are set as follows. α is a very small value and can generally be set to α = 10⁻⁵. γ has little influence on the iterations during annealing and can be set to 1. β has a larger influence: with T denoting the temperature parameter of the deterministic annealing algorithm, β can be used to simulate that temperature. The initial value β₀ of β should be set with reference to the order of magnitude of the elements of the matrix Q_{g×h}, whose value was computed in step 103 from the estimated image-point set {p′_i} and the image-point set {p_j}; if most elements of Q_{g×h} are of order 10⁵, then β₀ should be set to β₀ = 10⁻⁵. If β₀ is set too high or too low, the iteration easily falls into a local minimum. The termination value β_final of β is generally set to 0.5, and the update factor β_update to 1.05; with β_i denoting the value of β in each annealing pass, β_{i+1} = β_i × β_update. delta denotes the mean Euclidean distance between the estimated image points and the extracted real image points. tol₁ is a very small value related to the noise level: with noiseStd denoting the standard deviation of the noise, which is unknown in real image processing, tol₁ is generally set to 0.5.

After the parameters of the deterministic annealing algorithm have been set, the matching matrix M is updated with the Sinkhorn algorithm, as follows:

1. Initialize the matching matrix M for 1 ≤ i ≤ g, 1 ≤ j ≤ h, and assign a small constant, e.g. 10⁻³, to the slack entries m_{i,h+1} (1 ≤ i ≤ g+1) and m_{g+1,j} (1 ≤ j ≤ h+1);

2. Normalize each row and each column of the matrix M. Each row is normalized using formula (12):

$$m_{ij}^{\,l+1}=\frac{m_{ij}^{\,l}}{\sum_{j=1}^{h+1}m_{ij}^{\,l}},\qquad 1\le i\le g,\ 1\le j\le h+1\tag{12}$$

where l denotes the number of normalization passes performed.

Each column is normalized using formula (13):

$$m_{ij}^{\,l+1}=\frac{m_{ij}^{\,l+1}}{\sum_{i=1}^{g+1}m_{ij}^{\,l+1}},\qquad 1\le i\le g+1,\ 1\le j\le h\tag{13}$$

where l denotes the number of normalization passes performed.

3. Repeat the normalization of the matching matrix M in a loop.

Here, the termination condition of the normalization loop is that tol₂ falls below a given value, e.g. 0.005. Each time the matching matrix M has been normalized once, a new matching matrix M is obtained; substituting it into the expression for tol₂ yields a value of tol₂, and if tol₂ < 0.005 the normalization of M terminates. The normalization of M also terminates when the number of normalization passes exceeds a set maximum, e.g. 80.

Each time the matching matrix M has been cyclically normalized by the Sinkhorn algorithm, a new matching matrix M is obtained. From this matching matrix M the new object-point/image-point transformation {A_{i+1}, B_{i+1}} is computed by setting

$$\frac{\partial E}{\partial a_{11}}=0,\quad \frac{\partial E}{\partial a_{12}}=0,\quad \frac{\partial E}{\partial a_{21}}=0,\quad \frac{\partial E}{\partial a_{22}}=0,\quad \frac{\partial E}{\partial b_{1}}=0,\quad \frac{\partial E}{\partial b_{2}}=0\tag{14}$$

where a₁₁, a₁₂, a₂₁, a₂₂ are the elements of matrix A and b₁, b₂ are the elements of matrix B. Substituting the global optimization function into formula (14) and expanding yields the linear system CX = N, where C is a 6×6 matrix:

$$C=\sum_{i=1}^{g}\sum_{j=1}^{h}m_{ij}\begin{bmatrix}x_{wj}^2&x_{wj}y_{wj}&0&0&x_{wj}&0\\x_{wj}y_{wj}&y_{wj}^2&0&0&y_{wj}&0\\0&0&x_{wj}^2&x_{wj}y_{wj}&0&x_{wj}\\0&0&x_{wj}y_{wj}&y_{wj}^2&0&y_{wj}\\x_{wj}&y_{wj}&0&0&1&0\\0&0&x_{wj}&y_{wj}&0&1\end{bmatrix},\qquad X=(a_{11},a_{12},a_{21},a_{22},b_{1},b_{2})^T,$$

and N is a 6×1 matrix: $N=\sum_{i=1}^{g}\sum_{j=1}^{h}m_{ij}\,(u_i x_{wj},\ u_i y_{wj},\ v_i x_{wj},\ v_i y_{wj},\ u_i,\ v_i)^T.$

It should be pointed out that, when constructing the linear system CX = N, a technical simplification can be made for specific problems. When the image-point coordinates of the center of the object's geometric feature can be computed, they can approximately replace the image-point coordinates of the object's centroid; by formula (6), B represents the image-point coordinates of the object's centroid. For example, when the geometric feature of the spatial planar object is a circle, the image of that spatial circle is an ellipse; the Canny operator can then be used to extract the edge image-point set of the ellipse, and ellipse fitting yields the image-point coordinates of the ellipse center. Since the distortion error of the distance between the ellipse-center image point and the image point of the spatial circle's centroid is only a few micrometers to a few tens of micrometers, the image-point coordinates of the ellipse center can approximately replace those of the spatial circle's centroid. B in formula (6) then becomes a known quantity, so the only unknown to be solved for in the deterministic annealing process is the 2×2 matrix A.

When B is approximately replaced by the image-point coordinates of the center of the object's geometric feature, formula (6) can be written as:

$$\begin{pmatrix}x'\\y'\end{pmatrix}=\begin{pmatrix}x-b_1\\y-b_2\end{pmatrix}=A\begin{pmatrix}x_w\\y_w\end{pmatrix}$$

In this case, the global function to be optimized in the deterministic annealing process involves p′_i = (x′, y′)^T, and the linear system CX = N is constructed with:

$$C=\sum_{i=1}^{g}\sum_{j=1}^{h}m_{ij}\begin{bmatrix}x_{wj}^2&x_{wj}y_{wj}&0&0\\x_{wj}y_{wj}&y_{wj}^2&0&0\\0&0&x_{wj}^2&x_{wj}y_{wj}\\0&0&x_{wj}y_{wj}&y_{wj}^2\end{bmatrix},\qquad X=(a_{11},a_{12},a_{21},a_{22})^T,$$

and N is then a 4×1 matrix:

$$N=\sum_{i=1}^{g}\sum_{j=1}^{h}m_{ij}\,(u_i x_{wj},\ u_i y_{wj},\ v_i x_{wj},\ v_i y_{wj})^T$$

Then, solving the linear system CX = N by the Gauss-Seidel iterative method yields the new transformation {A_{i+1}, B_{i+1}}.
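A generic Gauss-Seidel solver, sketched below, suffices for systems of this size (the 4×4 matrix here is a made-up symmetric positive-definite stand-in for the C built above, not values from the patent):

```python
import numpy as np

def gauss_seidel(C, N, iters=200, tol=1e-10):
    """Solve C x = N by Gauss-Seidel iteration: sweep through the unknowns,
    updating each one from the most recent values of the others."""
    n = len(N)
    x = np.zeros(n)
    for _ in range(iters):
        x_old = x.copy()
        for i in range(n):
            s = C[i, :i] @ x[:i] + C[i, i + 1:] @ x[i + 1:]
            x[i] = (N[i] - s) / C[i, i]
        if np.abs(x - x_old).max() < tol:
            break
    return x

C = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 4.0]])
N = np.array([1.0, 2.0, 2.0, 1.0])
X = gauss_seidel(C, N)
```

Gauss-Seidel converges for symmetric positive-definite matrices, which is the case for the normal-equation matrices C assembled in this step.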

After one pass of the annealing algorithm, β is updated by β_{i+1} = β_i × β_update and the deterministic annealing algorithm is executed again, giving a new matching matrix M and transformation {A_{i+1}, B_{i+1}}. This loop continues until the iteration termination condition of deterministic annealing is satisfied, i.e., β reaches β_final = 0.5, at which point the deterministic annealing algorithm exits.

It should be pointed out that each pass of the deterministic annealing algorithm ends with a new transformation {A_{i+1}, B_{i+1}}. When the transformation {A_{i+1}, B_{i+1}} brings the global optimization function to its global optimum, i.e., when the iteration termination condition of deterministic annealing is satisfied, the annealing loop ends, and this transformation {A_{i+1}, B_{i+1}} is the estimated transformation relation {A′, B′} between object points and image points.

Substituting the estimated transformation {A′, B′} and the object-point set {P_i} into formula (6) yields, by computation, the image-point set {p_i} matched to the object-point set.
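Once {A′, B′} is known, producing the matched image points from formula (6) is a plain affine map. A sketch (the numeric values of A and B below are illustrative, not values from the patent):

```python
import numpy as np

def project_points(A, B, object_pts):
    """Apply the weak-perspective transform p_i = A P_i + B (formula (6))
    to planar object points given as an (n, 2) array."""
    return object_pts @ A.T + B

A = np.array([[1.2, -0.3],
              [0.4,  1.1]])          # illustrative 2x2 affine part
B = np.array([50.0, 60.0])           # illustrative image of the centroid
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
p = project_points(A, B, P)
```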

Step 105: From the object-point set and the image-point set, obtain the coordinates of the normal vector of the spatial planar object in the camera coordinate system and the coordinates of the centroid of the spatial planar object in the camera coordinate system.

Here, from the object-point set {P_i} and the image-point set {p_i}, two sets of pose vectors I and J are computed with the iterative pose algorithm. From I and J one obtains the coordinates of the normal vector of the spatial planar object in the camera coordinate system, k^T = i^T × j^T, as well as the coordinates T of the centroid of the spatial planar object in the camera coordinate system. The specific algorithm is as follows:

As shown in Fig. 3, o_c x_c y_c z_c is the camera coordinate system. An object point M₀ is chosen arbitrarily and taken as the origin o_w of the object coordinate system o_w x_w y_w z_w. The plane K passes through M₀ and is parallel to the image plane π; the point P_i is the orthographic projection of the object point M_i onto the plane K; p_i is the image point of P_i, m_i the image point of M_i, and m₀ the image point of M₀. The vectors i, j, k are the direction vectors of the x_c, y_c, z_c axes of the camera coordinate system expressed in the object coordinate system. From this it follows that:

$$\begin{pmatrix}x_{ci}\\y_{ci}\\z_{ci}\end{pmatrix}=\begin{pmatrix}\mathbf{i}^T\\\mathbf{j}^T\\\mathbf{k}^T\end{pmatrix}\begin{pmatrix}x_{wi}\\y_{wi}\\z_{wi}\end{pmatrix}+\begin{pmatrix}x_{c0}\\y_{c0}\\z_{c0}\end{pmatrix}\tag{15}$$

where (x_ci, y_ci, z_ci)^T are the coordinates of the object point M_i in the camera coordinate system, (x_wi, y_wi, z_wi)^T are its coordinates in the object coordinate system, and M₀ = (x_c0, y_c0, z_c0)^T.

Since

$$\frac{x_{c0}}{z_{c0}}=\frac{x_0}{f},\qquad \frac{y_{c0}}{z_{c0}}=\frac{y_0}{f}\tag{16}$$

k=i×j    (17)k=i×j (17)

where m₀ = (x₀, y₀)^T is the image point of the reference point M₀, it can be seen that the pose of the spatial object is completely determined by i, j, z_c0 and m₀.

Because

$$x_i=f\,\frac{\mathbf{M_0M_i}\cdot\mathbf{i}+x_{c0}}{\mathbf{M_0M_i}\cdot\mathbf{k}+z_{c0}},\qquad y_i=f\,\frac{\mathbf{M_0M_i}\cdot\mathbf{j}+y_{c0}}{\mathbf{M_0M_i}\cdot\mathbf{k}+z_{c0}}\tag{18}$$

therefore

$$(x_{wi},y_{wi},z_{wi})\,(I_x,I_y,I_z)^T=x_i(1+\varepsilon_i)-x_{c0},\qquad (x_{wi},y_{wi},z_{wi})\,(J_x,J_y,J_z)^T=y_i(1+\varepsilon_i)-y_{c0}\tag{19}$$

where $I=\dfrac{f}{z_{c0}}\,\mathbf{i}$, $J=\dfrac{f}{z_{c0}}\,\mathbf{j}$, $\varepsilon_i=\dfrac{1}{z_{c0}}\,\mathbf{M_0M_i}\cdot\mathbf{k}$.

Formula (19) is a linear system; the unknowns are the coordinates of the vectors I and J, and the known quantities are the object coordinates of M₀ and M_i and the real image coordinates m₀ and m_i.

For n object points M₁, M₂, ..., M_n:

$$A\,I=x',\qquad A\,J=y'\tag{20}$$

where
$$A=\begin{bmatrix}x_{w1}&y_{w1}&z_{w1}\\x_{w2}&y_{w2}&z_{w2}\\\vdots&\vdots&\vdots\\x_{wn}&y_{wn}&z_{wn}\end{bmatrix},\quad I=\begin{pmatrix}I_u\\I_v\\I_w\end{pmatrix},\quad J=\begin{pmatrix}J_u\\J_v\\J_w\end{pmatrix},\quad x'=\begin{pmatrix}x_1(1+\varepsilon_1)-x_{c0}\\x_2(1+\varepsilon_2)-x_{c0}\\\vdots\\x_n(1+\varepsilon_n)-x_{c0}\end{pmatrix},\quad y'=\begin{pmatrix}y_1(1+\varepsilon_1)-y_{c0}\\y_2(1+\varepsilon_2)-y_{c0}\\\vdots\\y_n(1+\varepsilon_n)-y_{c0}\end{pmatrix}$$

If there are at least 4 non-coplanar points (M₀, M₁, M₂, M₃), the rank of the matrix A is 3, so the least-squares solution of the matrix equations (20) is:

$$I_0=B\,x',\qquad J_0=B\,y'\tag{21}$$

where B = (AᵀA)⁻¹Aᵀ. Setting up the equations according to formulas (19) and (20), initializing ε = 0, and computing according to formula (21) yields I₀ and J₀.
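The least-squares step (21) can be sketched directly, with B computed as the pseudoinverse (AᵀA)⁻¹Aᵀ; the point coordinates and right-hand sides below are made-up illustrative values:

```python
import numpy as np

# Rows of A: object-point coordinates relative to M0 (illustrative values).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_prime = np.array([0.5, 0.1, 0.2, 0.8])   # x_i(1 + eps_i) - x_c0, illustrative
y_prime = np.array([0.0, 0.6, 0.1, 0.7])   # y_i(1 + eps_i) - y_c0, illustrative

B = np.linalg.inv(A.T @ A) @ A.T           # B = (A^T A)^-1 A^T
I0 = B @ x_prime                           # eq. (21)
J0 = B @ y_prime
```

In practice `np.linalg.lstsq(A, x_prime)` computes the same solution more stably; the explicit form above is kept to mirror the formula in the text.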

The matrix A formed by coplanar points has rank 2; in this case an additional constraint is needed, and the true solution of the equations (20) is described by formula (22):

$$I=I_0+\lambda u,\qquad J=J_0+\mu u\tag{22}$$

The true pose vectors I, J satisfy two constraints:

I · J = 0    (23)

||I|| = ||J||    (24)

Rearranging, two constraints on λ and μ are obtained:

$$\lambda\mu=-I_0\cdot J_0,\qquad \lambda^2-\mu^2=J_0^2-I_0^2\tag{25}$$

λ and μ are computed according to formulas (23) and (24), which then gives the two sets of pose vectors I and J. Define the complex number C:

C = λ + iμ    (26)

Therefore

$$C^2=J_0^2-I_0^2-2i\,I_0\cdot J_0\tag{27}$$

Writing C² in polar form:

C² = [R, Θ]    (28)

$$R=\sqrt{(J_0^2-I_0^2)^2+4(I_0\cdot J_0)^2}\tag{29}$$

$$\Theta=\arctan\!\left(\frac{-2\,I_0\cdot J_0}{J_0^2-I_0^2}\right),\ \ J_0^2-I_0^2>0;\qquad \Theta=\pi+\arctan\!\left(\frac{-2\,I_0\cdot J_0}{J_0^2-I_0^2}\right),\ \ J_0^2-I_0^2<0\tag{30}$$

If J₀² − I₀² = 0, then Θ = ±π/2 according to the sign of −2I₀·J₀, and R = |2I₀·J₀|.

Then the complex number C has two roots, C = [ρ, θ] and C = [ρ, θ + π], where

$$\rho=\sqrt{R},\qquad \theta=\frac{\Theta}{2}\tag{31}$$

Therefore,

λ = ρ cos θ, μ = ρ sin θ  and  λ = −ρ cos θ, μ = −ρ sin θ    (32)

In this way, two sets of pose vectors are obtained:

$$I_1=I_0+\rho(\cos\theta)\,u,\qquad J_1=J_0+\rho(\sin\theta)\,u\tag{33}$$
$$I_2=I_0-\rho(\cos\theta)\,u,\qquad J_2=J_0-\rho(\sin\theta)\,u\tag{34}$$
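The branch computation in formulas (26)-(34) maps neatly onto complex arithmetic, as sketched below. Here u is assumed to be the unit null-space direction of A (orthogonal to I₀ and J₀), which this chunk of the text uses without defining; all numeric values are illustrative:

```python
import numpy as np

def planar_pose_candidates(I0, J0, u):
    """Return the two candidate pose-vector pairs (I1, J1), (I2, J2) from
    eqs. (26)-(34): C^2 = J0^2 - I0^2 - 2i I0.J0; lambda = Re(C), mu = Im(C)."""
    C2 = (J0 @ J0 - I0 @ I0) - 2j * (I0 @ J0)   # eq. (27)
    C = np.sqrt(C2)          # principal root [rho, theta]; the other root is -C
    lam, mu = C.real, C.imag # eq. (32)
    return ((I0 + lam * u, J0 + mu * u),        # eq. (33)
            (I0 - lam * u, J0 - mu * u))        # eq. (34)

I0 = np.array([0.8, 0.1, 0.0])
J0 = np.array([-0.1, 0.7, 0.0])
u = np.array([0.0, 0.0, 1.0])                   # unit vector orthogonal to I0, J0
(I1, J1), (I2, J2) = planar_pose_candidates(I0, J0, u)
```

Both candidates satisfy the constraints (23) and (24) by construction, which is exactly the two-fold pose ambiguity of a planar object that the Tsai-based check below resolves.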

Substituting I₀ and J₀ into formulas (33) and (34) gives the pose vectors I, J. Since there are two sets of pose vectors I, J, two rotation matrices R₁ and R₂ are obtained, but one of them is infeasible, i.e., a wrong rotation matrix. That is, when the obtained rotation matrix R is used to compute the coordinates T of the object centroid in the camera coordinate system, all object points must satisfy z_ci > 0. If both rotation matrices satisfy z_ci > 0, the wrong rotation matrix R can be eliminated by Tsai's monocular planar-target camera calibration method: the effective focal length computed from the wrong rotation matrix satisfies f < 0, while that computed from the correct rotation matrix satisfies f > 0. The specific method is to solve the linear system:

$$\begin{pmatrix}y_{ci}&-y_i\end{pmatrix}\begin{pmatrix}f\\T_z\end{pmatrix}=w_i\,y_i\tag{35}$$

$$w_i=r_{31}x_{wi}+r_{32}y_{wi}+r_{33}\cdot 0\tag{36}$$

where 2 ≤ i ≤ h, and h denotes the number of object points.

The correct rotation matrix is thus obtained, and its third column is the coordinates of the normal vector NormVect of the spatial planar object in the camera coordinate system. Once the correct normal vector NormVect is available, normalizing the correct I, J to unit length gives i^T, j^T of the rotation matrix R, and then k^T = i^T × j^T, i.e., the coordinates of the normal vector NormVect of the spatial planar object in the camera coordinate system.

After the correct set of pose vectors I, J is obtained, formula (19) gives I = (f/z_c0) i and J = (f/z_c0) j, so the coordinate z_c0 of the centroid of the spatial planar object in the camera coordinate system is:

$$z_{c0}=\frac{f}{\|I\|}\quad\text{or}\quad z_{c0}=\frac{f}{\|J\|}\tag{37}$$

Substituting formula (37) into formula (16) gives the coordinates x_c0, y_c0 of the centroid of the spatial planar object in the camera coordinate system:

$$x_{c0}=\frac{z_{c0}}{f}\,x_0,\qquad y_{c0}=\frac{z_{c0}}{f}\,y_0\tag{38}$$

Formulas (37) and (38) give the coordinates of the centroid of the spatial planar object in the camera coordinate system, which can be expressed by the unique translation vector T = (x_c0, y_c0, z_c0)^T.
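The translation recovery in formulas (37)-(38) is then direct. A sketch (the focal length, pose vector I, and image point m₀ below are illustrative numbers, not values from the patent):

```python
import numpy as np

def translation_from_pose(I, x0, y0, f):
    """Recover T = (x_c0, y_c0, z_c0) from a scaled pose vector I, the image
    point m0 = (x0, y0) of the reference point, and the focal length f,
    using eqs. (37) and (38)."""
    z_c0 = f / np.linalg.norm(I)                            # eq. (37)
    return np.array([z_c0 * x0 / f, z_c0 * y0 / f, z_c0])   # eq. (38)

f = 1000.0
I = np.array([0.003, 0.004, 0.0])    # ||I|| = 0.005, so z_c0 = 200000
T = translation_from_pose(I, x0=2.0, y0=-1.0, f=f)
```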

Step 106: From the normal vector of the spatial planar object, obtain the pitch angle and yaw angle of the spatial planar object in the camera coordinate system.

This step is implemented with existing techniques. Specifically, the normal vector NormVect is normalized to unit length, from which the pitch angle and yaw angle are computed; the pitch angle takes values in [0°, 90°] and the yaw angle in [0°, 360°].
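Step 106 can be sketched directly from the unit normal. The exact angle conventions appear in the original only as images, so the formulas below (pitch from the z-component, yaw from the arctangent of the x/y components wrapped to [0°, 360°)) are an assumption; they are consistent with the stated ranges and reproduce the numbers of the simulation example below:

```python
import math

def pitch_yaw(normal):
    """Pitch and yaw in degrees from a normal vector (nx, ny, nz).
    Assumed conventions: pitch = asin(|nz|) in [0, 90],
    yaw = atan2(ny, nx) wrapped to [0, 360)."""
    nx, ny, nz = normal
    n = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / n, ny / n, nz / n          # unitize, as in step 106
    pitch = math.degrees(math.asin(abs(nz)))
    yaw = math.degrees(math.atan2(ny, nx)) % 360.0
    return pitch, yaw

pitch, yaw = pitch_yaw((-0.2588, 0.4830, 0.8365))
```

With the normal vector of the simulation example this gives pitch ≈ 56.77° and yaw ≈ 118.19°, matching the values stated in the text.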

The scheme of the invention is illustrated below with a semi-physical simulation experiment. The images in this experiment are virtual images generated by a semi-physical platform developed in the laboratory. The image is generated from a virtual circle of radius 2 m whose center is about 300 km from the virtual camera; the camera focal length is 66.885 m, the generated image is 1024×768 pixels, the image of the circle occupies about 80×80 pixels of the image plane, and Gaussian noise with zero mean and a standard deviation of 4 pixels is added to the image. In the virtual camera, the normal vector NormVect of the circle and the coordinates of the circle center under the camera are set to:

$$\text{NormVect}=\begin{pmatrix}-0.2588\\0.4830\\0.8365\end{pmatrix},\qquad T=\begin{pmatrix}-0.0013\\0.0025\\292682.9268\end{pmatrix}\ (\mathrm{m})$$

The pitch angle is 56.774° and the yaw angle is 118.187°.

First, the Canny operator is used to extract the edge of the circle, yielding 100 edge image points {p_j}; 100 object points {P_i} are taken at uniform intervals on the object circle, and the image points {p_i} matched one-to-one to the object points {P_i} are computed. Then, from the obtained {P_i} and {p_i}, the pose is computed with the iterative pose algorithm, giving the normal vector and translation vector of the circle under the virtual camera:

$$\text{NormVect}'=\begin{pmatrix}-0.2541\\0.4742\\0.8429\end{pmatrix},\qquad T'=\begin{pmatrix}-0.0176\\-0.0044\\295744.6746\end{pmatrix}\ (\mathrm{m})$$

The absolute angular error of the computed normal vector of the virtual circle is AngleErr = 0.678°.

The relative error of the computed distance from the center of the virtual circle to the optical center is DistErr = 1.046%.

Accordingly, the obtained pitch angle is 57.452° and the yaw angle 118.188°; the pitch-angle error is 0.678° and the yaw-angle error 0.001°.

The above is only a preferred embodiment of the present invention and is not intended to limit the protection scope of the present invention.

Claims (8)

1. A method for identifying the position and attitude of a spatial planar object, in which a transformation relation between object points and image points of the spatial planar object is established on the basis of a weak-perspective model, the method comprising the following steps:
a. extracting the edges of the geometric features of the image of the spatial planar object to obtain an edge image-point set, and taking points on the geometric features of the spatial planar object to obtain an object-point set; finding the maximum value max q_ij of the i-th row and j-th column of the matrix Q_{g×h} obtained by performing bidirectional constraint optimization on the object-point set and the edge image-point set; constructing a matrix M whose numbers of rows and columns are the same as those of the matrix Q_{g×h}; setting the element of M at the position corresponding to max q_ij to 1, such that only one element of each row and each column is 1 and all remaining elements are 0, and taking the matrix M as a matching matrix representing the transformation relation between object points and image points;
b. giving an initial change estimation of the transformation relation between the object point and the image point;
c. obtaining an estimated transformation relation between the object points and the image points according to the object point set, the edge image point set, the initial change estimation and the matching matrix, and obtaining an image point set matched with the object point set according to the obtained estimated transformation relation and the object point set;
d. obtaining the coordinates of the normal vector of the space plane object in the camera coordinate system and the coordinates of the mass center of the space plane object in the camera coordinate system according to the object point set and the image point set;
e. and obtaining the pitch angle and the yaw angle of the space plane object under the camera coordinate system according to the normal vector of the space plane object.
2. The method for identifying the pose of the spatial planar object according to claim 1, wherein extracting the edges of the geometric features in step a comprises: extracting the edges of the geometric features of the spatial planar object image with a Canny operator.
3. The method for identifying the pose of the spatial planar object according to claim 1, wherein the initial change estimation of the transformation relation between object points and image points in step b comprises: giving the initial change estimate with a random number generator.
4. The method for identifying the pose of the spatial planar object according to claim 1, wherein obtaining the estimated transformation relation between object points and image points in step c comprises: obtaining the estimated transformation relation between object points and image points with a deterministic annealing algorithm.
5. The method for identifying the pose of the spatial planar object according to claim 4, wherein the estimation transformation relationship between the object point and the image point is obtained by using a deterministic annealing algorithm, and specifically comprises the following steps:
c1, setting the initial parameters of the deterministic annealing algorithm according to the initial values of the matching matrix;
c2, updating the matching matrix with the Sinkhorn algorithm;
c3, calculating the estimated transformation relation between the object points and the image points with the Gauss-Seidel iteration method according to the updated matching matrix;
and c4, obtaining an image point set matched with the object point set by using the transformation relational expression of the object points and the image points according to the estimation transformation relation and the object point set.
6. The method according to claim 5, wherein the step c2 of updating the matching matrix with the Sinkhorn algorithm comprises:
initializing the matching matrix;
performing normalization calculation on each row and each column of elements of the matching matrix;
and carrying out normalization calculation on the matching matrix cycle.
7. The method for identifying the pose of the spatial planar object according to claim 1, wherein obtaining the coordinates of the normal vector and the coordinates of the centroid of the spatial planar object in the camera coordinate system in step d comprises: obtaining the coordinates of the normal vector of the spatial planar object under the camera and the coordinates of the centroid of the spatial planar object in the camera coordinate system with an iterative pose algorithm.
8. The method for identifying the pose of the spatial planar object according to claim 7, wherein the iterative pose algorithm is used to obtain the coordinates of the normal vector of the spatial planar object under the camera and the coordinates of the centroid of the spatial planar object under the camera coordinate system, and specifically comprises:
d1, calculating by using an iterative pose algorithm to obtain two groups of pose vectors according to the object point set and the image point set;
d2, eliminating an incorrect pose vector in the two sets of pose vectors according to a Tsai plane target calibration method to obtain a correct pose vector;
d3, obtaining the coordinate of the normal vector of the space plane object in the camera coordinate system and the coordinate of the centroid of the space plane object in the camera coordinate system according to the correct pose vector.
CN2008101677837A 2008-07-11 2008-10-07 Method for recognizing position and attitude of space plane object Expired - Fee Related CN101377812B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101677837A CN101377812B (en) 2008-07-11 2008-10-07 Method for recognizing position and attitude of space plane object

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN200810116561.2 2008-07-11
CN200810116561 2008-07-11
CN2008101677837A CN101377812B (en) 2008-07-11 2008-10-07 Method for recognizing position and attitude of space plane object

Publications (2)

Publication Number Publication Date
CN101377812A CN101377812A (en) 2009-03-04
CN101377812B true CN101377812B (en) 2010-05-12

Family

ID=40421349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101677837A Expired - Fee Related CN101377812B (en) 2008-07-11 2008-10-07 Method for recognizing position and attitude of space plane object

Country Status (1)

Country Link
CN (1) CN101377812B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635054B (en) * 2009-08-27 2012-07-04 北京水晶石数字科技股份有限公司 Method for information point placement
US8724906B2 (en) * 2011-11-18 2014-05-13 Microsoft Corporation Computing pose and/or shape of modifiable entities
US9524555B2 (en) * 2011-12-12 2016-12-20 Beihang University Method and computer program product of the simultaneous pose and points-correspondences determination from a planar model
US9857470B2 (en) 2012-12-28 2018-01-02 Microsoft Technology Licensing, Llc Using photometric stereo for 3D environment modeling
CN103075998B (en) * 2012-12-31 2015-08-26 华中科技大学 A kind of monocular extraterrestrial target range finding angle-measuring method
US9940553B2 (en) 2013-02-22 2018-04-10 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
CN103925895B (en) * 2014-04-24 2016-04-27 无锡新吉凯氏测量技术有限公司 A kind of industrial tag system of feature based spatial coordinated information
CN105184803A (en) * 2015-09-30 2015-12-23 西安电子科技大学 Attitude measurement method and device
CN109145788B (en) * 2018-08-08 2020-07-07 北京云舶在线科技有限公司 Video-based attitude data capturing method and system
CN110864671B (en) * 2018-08-28 2021-05-28 中国科学院沈阳自动化研究所 A method for measuring the repeatability of robot positioning based on line-structured light fitting plane
CN112750167B (en) * 2020-12-30 2022-11-04 燕山大学 Simulation method and simulation device of robot vision positioning based on virtual reality
CN114022541B (en) * 2021-09-17 2024-06-04 中国人民解放军63875部队 Method for determining ambiguity correct solution of optical single-station gesture processing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1570556A (en) * 2004-05-12 2005-01-26 清华大学 Measuring device and method for spatial pose of rigid body
CN101038163A (en) * 2007-02-07 2007-09-19 北京航空航天大学 Single-vision measuring method of space three-dimensional attitude of variable-focus video camera

Also Published As

Publication number Publication date
CN101377812A (en) 2009-03-04

Similar Documents

Publication Publication Date Title
CN101377812B (en) Method for recognizing position and attitude of space plane object
Yu et al. Robust robot pose estimation for challenging scenes with an RGB-D camera
CN103236064B (en) A kind of some cloud autoegistration method based on normal vector
CN104748750B (en) A kind of model constrained under the Attitude estimation of Three dimensional Targets in-orbit method and system
Jiang et al. Registration for 3-D point cloud using angular-invariant feature
Li et al. A 4-point algorithm for relative pose estimation of a calibrated camera with a known relative rotation angle
CN111145232A (en) An automatic registration method of 3D point cloud based on the change degree of feature information
CN105551015A (en) Scattered-point cloud image registering method
CN104899918B (en) The three-dimensional environment modeling method and system of a kind of unmanned plane
CN102750704B (en) Step-by-step video camera self-calibration method
CN113706381A (en) Three-dimensional point cloud data splicing method and device
CN108805987B (en) Hybrid tracking method and device based on deep learning
CN107358629A (en) Figure and localization method are built in a kind of interior based on target identification
CN105021124A (en) Planar component three-dimensional position and normal vector calculation method based on depth map
CN106289240A (en) A kind of two step coupling method for recognising star map based on primary
CN104615880B (en) Rapid ICP (inductively coupled plasma) method for point cloud matching of three-dimensional laser radar
CN102982556B (en) Based on the video target tracking method of particle filter algorithm in manifold
CN106547724A (en) Theorem in Euclid space coordinate transformation parameter acquisition methods based on minimum point set
CN104361573B (en) The SIFT feature matching algorithm of Fusion of Color information and global information
CN104835151A (en) Improved artificial bee colony algorithm-based image registration method
CN110097599A (en) A kind of workpiece position and orientation estimation method based on partial model expression
CN116844124A (en) Three-dimensional target detection frame annotation method, device, electronic equipment and storage medium
CN109409388A (en) A kind of bimodulus deep learning based on graphic primitive describes sub- building method
CN100590658C (en) Two-dimensional Constrained Object and Image Point Matching Method
CN106228593B (en) A kind of image dense Stereo Matching method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100512

Termination date: 20201007