CN114862969A - Method and device for self-adaptive adjustment of the angle of the onboard pan-tilt camera of an intelligent inspection robot - Google Patents
- Publication number: CN114862969A (application CN202210586045.6A)
- Authority: CN (China)
- Prior art keywords: camera, image, point, inspection robot
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/73 - Determining position or orientation of objects or cameras using feature-based methods
- G06V10/757 - Matching configurations of points or features
Abstract
Description
Technical Field
The present invention relates to the technical field of machine vision measurement, and more particularly to a method and device for adaptive adjustment of the angle of the onboard pan-tilt camera of an intelligent inspection robot.
Background
During inspection, an intelligent inspection robot based on monocular vision accumulates navigation and positioning errors while walking and stopping, and its onboard pan-tilt unit also has a certain rotation error. As a result, the target to be inspected deviates from the center of the camera's imaging field of view; in severe cases the target leaves the field of view entirely and cannot be imaged, which complicates subsequent intelligent target detection and equipment-fault early warning.
Summary of the Invention
The purpose of the present invention is to provide a method and device for adaptive adjustment of the angle of the onboard pan-tilt camera of an intelligent inspection robot. Two images captured by the robot at the same position at two different moments form a two-view system; based on the extraction and matching of ORB-FAST feature points, the homography matrix between the two images is obtained and the target point is reconstructed in three dimensions, yielding the deflection angle of the robot's onboard pan-tilt camera and reducing the target-positioning inaccuracy caused by the robot's accumulated walking error.
To achieve the above purpose, the technical scheme adopted by the present invention is as follows:
The present invention provides a method for adaptive adjustment of the angle of the onboard pan-tilt camera of an intelligent inspection robot, comprising:
acquiring two images captured by the intelligent inspection robot at a given position at two different moments, and performing feature point extraction and matching in the overlapping area of the two images;
calculating, from the extracted matching points, the camera extrinsic parameters of the two images and the homography matrix between the two images;
based on the camera extrinsics of the two images and the homography matrix between them, calculating the rotation angle of the camera with one image as the reference, and adjusting the pan-tilt accordingly.
Further, performing feature point extraction and matching in the overlapping area of the two images comprises:
performing ORB-FAST feature point extraction on the overlapping area of the two images captured by the intelligent inspection robot at a given position at two different moments, and eliminating mismatched points among the feature point pairs with a random sample consensus algorithm, to obtain the matching points of the two images.
Further, calculating the camera extrinsic parameters of the two images from the extracted matching points comprises:
obtaining, from the principle of monocular mobile vision, the projection equations of the two images captured by the intelligent inspection robot at a given position at two moments:

s1·p1 = A1[R1 T1]XW, s2·p2 = A2[R2 T2]XW,

where s1 and s2 are the non-zero scale factors of the camera at the two moments, p1 and p2 are the homogeneous two-dimensional image coordinates in the two images, A1 and A2 are the camera intrinsics at the two moments (with A1 = A2), [R1 T1] and [R2 T2] are the camera extrinsics at the two moments, and XW is the homogeneous three-dimensional world coordinate of the spatial point P captured by the camera;
establishing the fundamental matrix F and the essential matrix E from the projection equations of the two images:

F = A2^(-T)·S·R·A1^(-1), E = S·R,

where S is the antisymmetric matrix built from the translation between the two camera poses and R is the rotation between them;
solving for the fundamental matrix F with the eight-point algorithm using the extracted matching points, and obtaining the camera intrinsics with Zhang's planar calibration method;
solving for the essential matrix E from the camera intrinsics and the fundamental matrix;
decomposing the essential matrix E to obtain the camera extrinsic parameters of the two images.
Further, calculating the homography matrix between the two images from the extracted matching points comprises:
obtaining, from the perspective projection model of the camera, the relationship between the image coordinates of the matching points in the two images and the world coordinate system:

(xa, ya, za)^T = Ka[Ra Ta]XW, (xb, yb, zb)^T = Kb[Rb Tb]XW,

where (xa, ya, za) and (xb, yb, zb) are the homogeneous coordinates of a matched point pair in the two images, Ka and Kb are the camera intrinsics of the two images, obtained by calibration, and [Ra Ta] and [Rb Tb] are the camera extrinsics of the two images;

eliminating XW between the two equations gives a 3×3 mapping H between the two image planes,

(xb, yb, zb)^T = H·(xa, ya, za)^T,

where H is assumed to have the form H = [h11 h12 h13; h21 h22 h23; h31 h32 1];

using the relationship between the pixel coordinates and the homogeneous coordinates of the matched point pairs, ua = xa/za, va = ya/za (and likewise ub = xb/zb, vb = yb/zb), each matched pair yields

ub = (h11·ua + h12·va + h13)/(h31·ua + h32·va + 1),
vb = (h21·ua + h22·va + h23)/(h31·ua + h32·va + 1);

solving the above equations with 4 pairs of matching points gives the homography matrix H between the two images.
Further, calculating the rotation angle of the camera based on the camera extrinsics of the two images and the homography matrix between them, with one image as the reference, comprises:
according to the image coordinates of a matched point pair p and q in the two images Ia and Ib and the camera extrinsics, calculating the three-dimensional world coordinates P = (X1, Y1, Z1) of the spatial point P captured by the camera;
determining the point q' = (xa, ya) in image Ib that would correspond to the matching point p if no navigation and positioning errors existed;
calculating, from the homography matrix H between the two images and the point q', the matching point p' of q' in image Ia as

p' = H^(-T)·q' = H^(-T)·(xa, ya, 1);
according to the image coordinates of p' and q' and the camera extrinsics, calculating the three-dimensional world coordinates P' = (X'1, Y'1, Z'1) of the virtual spatial point P' corresponding to p' and q';
the rotation angle of the camera is then calculated as

θ = arccos( (P·P') / (‖P‖·‖P'‖) ),

where θ is the rotation angle of the camera relative to image Ia and o2 is the origin of the coordinate system of image Ib; since the world frame is placed at o2, the vectors from o2 to the spatial points are simply P and P'.
The present invention also provides a device for adaptive adjustment of the angle of the onboard pan-tilt camera of an intelligent inspection robot, comprising:
a preprocessing module, configured to acquire two images captured by the intelligent inspection robot at a given position at two different moments, and to perform feature point extraction and matching in the overlapping area of the two images;
a calculation module, configured to calculate, from the extracted matching points, the camera extrinsic parameters of the two images and the homography matrix between the two images;
an adjustment module, configured to calculate the rotation angle of the camera based on the camera extrinsics of the two images and the homography matrix between them, with one image as the reference, and to adjust the pan-tilt accordingly.
Further, the preprocessing module is specifically configured to:
perform ORB-FAST feature point extraction on the overlapping area of the two images captured by the intelligent inspection robot at a given position at two different moments, and eliminate mismatched points among the feature point pairs with a random sample consensus algorithm, to obtain the matching points of the two images.
Further, the calculation module is specifically configured to:
obtain, from the principle of monocular mobile vision, the projection equations of the two images captured by the intelligent inspection robot at a given position at two moments:

s1·p1 = A1[R1 T1]XW, s2·p2 = A2[R2 T2]XW,

where s1 and s2 are the non-zero scale factors of the camera at the two moments, p1 and p2 are the homogeneous two-dimensional image coordinates in the two images, A1 and A2 are the camera intrinsics at the two moments (with A1 = A2), [R1 T1] and [R2 T2] are the camera extrinsics at the two moments, and XW is the homogeneous three-dimensional world coordinate of the spatial point P captured by the camera;
establish the fundamental matrix F and the essential matrix E from the projection equations of the two images:

F = A2^(-T)·S·R·A1^(-1), E = S·R,

where S is the antisymmetric matrix built from the translation between the two camera poses and R is the rotation between them;
solve for the fundamental matrix F with the eight-point algorithm using the extracted matching points, and obtain the camera intrinsics with Zhang's planar calibration method;
solve for the essential matrix E from the camera intrinsics and the fundamental matrix;
decompose the essential matrix E to obtain the camera extrinsic parameters of the two images.
Further, the calculation module is specifically configured to:
obtain, from the perspective projection model of the camera, the relationship between the image coordinates of the matching points in the two images and the world coordinate system:

(xa, ya, za)^T = Ka[Ra Ta]XW, (xb, yb, zb)^T = Kb[Rb Tb]XW,

where (xa, ya, za) and (xb, yb, zb) are the homogeneous coordinates of a matched point pair in the two images, Ka and Kb are the camera intrinsics of the two images, obtained by calibration, and [Ra Ta] and [Rb Tb] are the camera extrinsics of the two images;

eliminate XW between the two equations to obtain a 3×3 mapping H between the two image planes,

(xb, yb, zb)^T = H·(xa, ya, za)^T,

with H assumed to have the form H = [h11 h12 h13; h21 h22 h23; h31 h32 1];

use the relationship between the pixel coordinates and the homogeneous coordinates of the matched point pairs, ua = xa/za, va = ya/za (and likewise ub = xb/zb, vb = yb/zb), so that each matched pair yields

ub = (h11·ua + h12·va + h13)/(h31·ua + h32·va + 1),
vb = (h21·ua + h22·va + h23)/(h31·ua + h32·va + 1);

solve the above equations with 4 pairs of matching points to obtain the homography matrix H between the two images.
Further, the adjustment module is specifically configured to:
according to the image coordinates of a matched point pair p and q in the two images Ia and Ib and the camera extrinsics, calculate the three-dimensional world coordinates P = (X1, Y1, Z1) of the spatial point P captured by the camera;
determine the point q' = (xa, ya) in image Ib that would correspond to the matching point p if no navigation and positioning errors existed;
calculate, from the homography matrix H between the two images and the point q', the matching point p' of q' in image Ia as

p' = H^(-T)·q' = H^(-T)·(xa, ya, 1);
according to the image coordinates of p' and q' and the camera extrinsics, calculate the three-dimensional world coordinates P' = (X'1, Y'1, Z'1) of the virtual spatial point P' corresponding to p' and q';
the rotation angle of the camera is then calculated as

θ = arccos( (P·P') / (‖P‖·‖P'‖) ),

where θ is the rotation angle of the camera relative to image Ia and o2 is the origin of the coordinate system of image Ib; since the world frame is placed at o2, the vectors from o2 to the spatial points are simply P and P'.
The beneficial effects of the present invention are as follows:
The present invention provides a method for adaptive adjustment of the angle of the onboard pan-tilt camera of an intelligent inspection robot. Two images captured by the robot at the same position at two different moments form a two-view system; based on the extraction and matching of ORB-FAST feature points, the homography matrix between the two images is obtained and the target point is reconstructed in three dimensions, yielding the deflection angle of the robot's onboard pan-tilt camera and reducing the target-positioning inaccuracy caused by the robot's accumulated walking error. The invention enables adaptive calibration of the camera angle of intelligent inspection robots in complex substation environments, providing a strong guarantee for intelligent robot inspection and safe operation and maintenance of substations.
Description of Drawings
Fig. 1 is an example of the homography constraint formed by two images captured by the intelligent inspection robot at a given position at two different moments;
Fig. 2 is an example of moving-target three-dimensional reconstruction for the two images shown in Fig. 1.
Detailed Description
The present invention is further described below. The following embodiments are only used to illustrate the technical solutions of the present invention more clearly and are not intended to limit its scope of protection.
At the same position at different moments, the positioning error of the inspection robot and the rotation error of the pan-tilt that carries the camera mean that the imaging position of the photographed target is not fixed and that targets captured at different moments deviate in the image to different degrees, which complicates intelligent inspection, fault diagnosis, identification and early warning. The present invention provides a method for adaptive adjustment of the angle of the onboard pan-tilt camera of an intelligent inspection robot: using monocular mobile vision, the camera on the robot acquires two target images near a given position at two different moments. From the feature points extracted from the two images, the pose and angle of the target point relative to the pan-tilt camera at the two moments are calculated, so that the robot can adaptively adjust the camera angle and the target captured at the same position at different moments stays at a fixed position in the image, providing an important basis for subsequent intelligent data analysis and intelligent image detection by the robot.
A method for adaptive adjustment of the angle of the onboard pan-tilt camera of an intelligent inspection robot comprises:
acquiring two images captured by the intelligent inspection robot at a given position at two different moments, and performing feature point extraction and matching in the overlapping area of the two images;
calculating, from the extracted matching points, the camera extrinsic parameters of the two images and the homography matrix between the two images;
based on the camera extrinsics of the two images and the homography matrix between them, calculating the rotation angle of the camera with one image as the reference, and adjusting the pan-tilt accordingly.
As a preferred implementation, in an embodiment of the present invention, the projection equations of the camera at the two moments are obtained from the principle of monocular mobile vision, the fundamental matrix and the essential matrix are established, and the external structural parameters R and t between the two views are finally obtained.
The principle of monocular mobile vision is as follows:
According to the pinhole imaging model of the camera, accurate calibration of the onboard camera can be achieved with a planar-grid camera calibration method. Let the homogeneous coordinates of a three-dimensional point on the target plane be M = [x, y, z, 1]^T and the homogeneous coordinates of the corresponding two-dimensional image point be m = [u, v, 1]^T; the projective relation between them is:

s·m = A[R t]M (1)

where s is an arbitrary non-zero scale factor, [R t] is a 3×4 matrix called the camera extrinsic parameter matrix, R is the rotation matrix, t = (t1, t2, t3)^T is the translation vector, and A is the intrinsic parameter matrix of the camera,

A = [αx r u0; 0 αy v0; 0 0 1],

where αx and αy are the scale factors of the u and v axes, (u0, v0) are the coordinates of the principal point, and r is the skew factor between the u and v axes. The intrinsic parameter matrix A of the camera can be obtained by Zhang's planar calibration method.
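Eq. (1) can be checked numerically. The sketch below is a minimal NumPy illustration; the intrinsic values (focal scales, principal point) and the extrinsic pose are assumed, not taken from the patent:

```python
import numpy as np

# Illustrative intrinsic matrix A (alpha_x, alpha_y, principal point; zero skew) -- assumed values
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Illustrative extrinsics [R t]: identity rotation, camera 1 m behind the target plane
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [1.0]])])

M = np.array([0.1, 0.2, 0.0, 1.0])   # homogeneous 3-D point on the target plane

sm = A @ Rt @ M                      # s*m = A [R t] M, Eq. (1)
m = sm / sm[2]                       # divide out the scale factor s
u, v = m[0], m[1]                    # pixel coordinates of the projection
```

The division by the third component recovers the pixel coordinates (u, v) regardless of the scale factor s.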
A mobile monocular vision measurement system virtualizes one moving camera into multiple cameras, forming a multi-view vision measurement system. This embodiment analyzes the principle of the mobile monocular vision measurement system by taking the two-view measurement formed by two images captured by the robot at a given position at two different moments as an example.
Assume that the homogeneous three-dimensional world coordinate of a spatial point P is XW and that its homogeneous two-dimensional image coordinates in the two images captured at the two moments are p1 and p2. Then from Eq. (1), the projection equations of the camera at the two moments are:

s1·p1 = A1[R1 T1]XW, s2·p2 = A2[R2 T2]XW (2)

where s1 and s2 are the non-zero scale factors of the two views, [R1 T1] and [R2 T2] are the camera extrinsics at moment 1 and moment 2 respectively, and A1 and A2 are the camera intrinsics at moment 1 and moment 2; since the camera only moves rigidly and its internal structure does not change, A1 = A2.
Combining the epipolar geometric constraint with Eq. (2), the expressions for the fundamental matrix F and the essential matrix E are obtained:

F = A2^(-T)·S·R·A1^(-1) (3)
E = S·R (4)

where S is the antisymmetric matrix built from the translation between the two camera poses and R is the rotation between them.
Equation (3) shows that the fundamental matrix F depends only on the internal parameters of the two cameras and on the external structural parameters of the system; since the onboard camera only performs rigid-body motion as the pan-tilt rotates, its internal parameters do not change. The essential matrix E of Eq. (4) therefore depends only on the external parameters of the vision system, and E can be decomposed to obtain the external structural parameters R and t of the mobile monocular vision measurement system.
From the epipolar geometric constraint and the definition of the essential matrix, the fundamental matrix is a rank-2 matrix with 7 degrees of freedom. By extracting and matching the feature points of the two images, the fundamental matrix F between the two views can be solved with the eight-point algorithm; combined with the internal parameters of the camera, the essential matrix E is obtained. Decomposing E finally yields the external structural parameters R and t between the two views.
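The decomposition of E into R and t is not spelled out in the text; the standard SVD-based factorization produces four candidate poses (in practice one is selected by checking that triangulated points lie in front of both cameras). A NumPy sketch, verified against a synthetic pose chosen purely for illustration:

```python
import numpy as np

def skew(t):
    """Antisymmetric matrix S such that S @ x == cross(t, x)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Return the four candidate (R, t) pairs of the SVD factorization E = S R."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:                 # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    t = U[:, 2]                              # translation direction (up to sign)
    return [(U @ W @ Vt, t), (U @ W @ Vt, -t),
            (U @ W.T @ Vt, t), (U @ W.T @ Vt, -t)]

# Build an essential matrix from a known pose, then recover that pose
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.0, 0.0])
E = skew(t_true) @ R_true

candidates = decompose_essential(E)
err = min(np.linalg.norm(R - R_true) + np.linalg.norm(t - t_true)
          for R, t in candidates)           # the true pose is among the candidates
```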
As a preferred implementation, in an embodiment of the present invention, the invariance of ORB (Oriented FAST and Rotated BRIEF)-FAST feature points to image rotation, scaling and brightness changes is exploited: ORB-FAST feature points are extracted from the images with overlapping areas captured by the robot camera at a given position at two different moments, and the random sample consensus (RANSAC) algorithm is used to eliminate mismatched points between the image pair, achieving accurate registration of scale-invariant feature transform (SIFT) feature points between the two images.
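The ORB-FAST extraction step is typically delegated to a library (e.g. OpenCV's ORB implementation); the RANSAC mismatch-rejection step itself can be sketched in pure NumPy. The sketch below is a generic RANSAC over a homography model on synthetic matches, not the patent's exact procedure; all data, seeds and thresholds are illustrative:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: solve H (up to scale) from >= 4 point correspondences."""
    rows = []
    for (u1, v1), (u2, v2) in zip(src, dst):
        rows.append([u1, v1, 1, 0, 0, 0, -u2 * u1, -u2 * v1, -u2])
        rows.append([0, 0, 0, u1, v1, 1, -v2 * u1, -v2 * v1, -v2])
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 3)

def ransac_homography(src, dst, iters=200, thresh=2.0, rng=np.random.default_rng(0)):
    """Keep the largest consensus set of matches under a homography model."""
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)     # minimal 4-point sample
        H = fit_homography(src[idx], dst[idx])
        proj = np.c_[src, np.ones(len(src))] @ H.T
        if np.any(np.abs(proj[:, 2]) < 1e-12):
            continue
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic check: 20 correct matches under a known homography, 5 gross mismatches
rng = np.random.default_rng(1)
H_true = np.array([[1.0, 0.02, 5.0], [-0.01, 1.0, -3.0], [1e-4, 0.0, 1.0]])
src = rng.uniform(0, 400, (25, 2))
dst_h = np.c_[src, np.ones(25)] @ H_true.T
dst = dst_h[:, :2] / dst_h[:, 2:3]
dst[20:] += rng.uniform(50, 100, (5, 2))    # inject mismatched points
inliers = ransac_homography(src, dst)
```

The mismatched pairs fall far outside the 2-pixel consensus threshold and are rejected.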
As shown in Fig. 1, at a given position, the navigation/positioning error of the inspection robot and the rotation error of the pan-tilt mean that the onboard camera, imaging the same target at different moments, obtains two images Ia and Ib with an overlapping area; ORB-FAST feature points are extracted and matched in the overlapping area. Combining Eqs. (3) and (4) and using the eight-point algorithm analyzed above, the external structural parameters R and t between the camera coordinate systems at the two moments can be obtained; these external parameters are essential for reconstructing three-dimensional points in space.
As a preferred implementation, in an embodiment of the present invention, the homography matrix between the two views is calculated as follows:
Combining the information of the matched feature points with the perspective projection model of the camera, the relationships between the image coordinates of the matching points and the world coordinate system at the two moments are obtained:

(xa, ya, za)^T = Ka[Ra Ta]XW (5)
(xb, yb, zb)^T = Kb[Rb Tb]XW (6)

where (xa, ya, za) and (xb, yb, zb) denote the homogeneous coordinates of a matched point pair in images Ia and Ib respectively, Ka and Kb are the camera intrinsics, obtained by calibration, and [Ra Ta] and [Rb Tb] are the corresponding camera extrinsics.

From Eqs. (5) and (6),

(xb, yb, zb)^T = H·(xa, ya, za)^T (7)

where H is a 3×3 matrix reflecting the mapping between the feature points of the two images; H is defined as the homography matrix between the two planes. Assume

H = [h11 h12 h13; h21 h22 h23; h31 h32 1].

Substituting into Eq. (7) gives

xb = h11·xa + h12·ya + h13·za, yb = h21·xa + h22·ya + h23·za, zb = h31·xa + h32·ya + za. (8)

Based on ua = xa/za, va = ya/za (and likewise ub = xb/zb, vb = yb/zb), Eq. (8) yields

ub = (h11·ua + h12·va + h13)/(h31·ua + h32·va + 1),
vb = (h21·ua + h22·va + h23)/(h31·ua + h32·va + 1), (9)

where (ua, va) and (ub, vb) are the pixel coordinates of the matched point pair in the two images.

From Eq. (9), each pair of feature points provides two equations; since H has 8 unknown entries, at least 4 pairs of matching points suffice to solve for the homography matrix H of the two planes.
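Eq. (9) gives two linear equations per matched pair; fixing h33 = 1, four pairs yield an 8×8 linear system. A NumPy sketch on synthetic, noise-free pairs (the matrix H_true and the pixel points are assumed values for illustration):

```python
import numpy as np

def homography_from_4_points(pts_a, pts_b):
    """Solve Eq. (9): 8 unknowns h11..h32 (h33 fixed to 1), two equations per pair."""
    A, b = [], []
    for (ua, va), (ub, vb) in zip(pts_a, pts_b):
        A.append([ua, va, 1, 0, 0, 0, -ub * ua, -ub * va]); b.append(ub)
        A.append([0, 0, 0, ua, va, 1, -vb * ua, -vb * va]); b.append(vb)
    h = np.linalg.solve(np.array(A), np.array(b))
    return np.append(h, 1.0).reshape(3, 3)

# Four matched pixel pairs generated from a known homography (no noise)
H_true = np.array([[1.1, 0.05, 10.0], [0.02, 0.95, -5.0], [1e-4, 2e-4, 1.0]])
pts_a = np.array([[10.0, 20.0], [300.0, 40.0], [50.0, 250.0], [280.0, 260.0]])
ah = np.c_[pts_a, np.ones(4)] @ H_true.T
pts_b = ah[:, :2] / ah[:, 2:3]

H = homography_from_4_points(pts_a, pts_b)
```

With noisy real matches, more than 4 pairs would be used and the system solved in a least-squares sense.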
As a preferred implementation, in an embodiment of the present invention, the deflection angle of the camera is calculated as follows:
Taking the first image as the reference image, select a matched feature point pair p and q from the matching feature points of the two images; the corresponding spatial feature point is P, and its image coordinates in the two images are p = (xa, ya) and q = (xb, yb), as shown in Fig. 2.
By the principle of binocular stereo vision measurement, for the accurately calibrated two-view camera measurement system, the image coordinates of the matched point pair and the external structural parameters R and t are known, so the three-dimensional world coordinates of the spatial point P can be accurately obtained. The world coordinate system is established in the camera-2 coordinate system in order to simplify the calculation of the camera rotation angle. Assume the computed world coordinates of the spatial point P are P = (X1, Y1, Z1).
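Recovering P from the calibrated two views is standard triangulation. A minimal linear (DLT) triangulation sketch in NumPy; the intrinsics and relative pose are assumed values, not the patent's calibration data:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two 3x4 projection matrices."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # homogeneous solution (null vector of A)
    return X[:3] / X[3]

K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])        # camera at moment 1
R = np.eye(3); t = np.array([[0.2], [0.0], [0.0]])       # small sideways motion
P2 = K @ np.hstack([R, t])                               # camera at moment 2

Pw = np.array([0.5, -0.3, 4.0])                          # ground-truth point
x1h = P1 @ np.append(Pw, 1.0); x1 = x1h[:2] / x1h[2]
x2h = P2 @ np.append(Pw, 1.0); x2 = x2h[:2] / x2h[2]
Pw_hat = triangulate(P1, P2, x1, x2)
```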
Because of the robot's navigation and positioning errors, the image coordinates of the matching feature points in the two images captured at different moments deviate from each other; for the matched pair p and q shown in Fig. 2, one point lies to the right of the principal point and the other to its left. If the robot's navigation and positioning were accurate and these errors did not exist, then with image coordinates p = (xa, ya) in image 1, the position and angle of the camera at moment 1 would coincide with those at moment 2, so in the second image the matching point corresponding to p in image 1 would have image coordinates q' = (xa, ya). In reality, because of the errors, the camera position and angle at moment 2 deviate from those at moment 1.
In this embodiment, when a navigation error exists, the position of the inspection robot is left unchanged; instead, the pan-tilt is rotated to adjust the camera's viewing direction so that the target does not deviate from the center of the image. The pan-tilt rotation angle is obtained from the angular error computed by the camera.
From the above analysis, if the target point is to have the same image coordinates in the image captured at time 2 as in the image captured at time 1, then with the robot held stationary, the camera at position 2 must rotate through the angle ∠qo2q'.
In practice, once the homography matrix H of the two images has been obtained from the above analysis, the matching point p' in image 1 corresponding to the point q' = (xa, ya) in image 2 can be computed as
p' = H^(-T)q' = H^(-T)(xa, ya, 1)^T    (10)
We call p' and q' a virtual image matching point pair; the spatial point P' corresponding to them is called a virtual three-dimensional spatial point. Similarly, given the image coordinates of the corresponding points p' and q' and the external structural parameters R and t of the system, the three-dimensional world coordinates of the virtual spatial point P' can be computed from the principle of binocular stereo vision measurement; denote the computed result by P' = (X'1, Y'1, Z'1).
From the analysis of Fig. 2, since the three-dimensional coordinates of the spatial points are established in the coordinate system of camera 2 and ∠qo2q' = ∠Po2P', the angle through which camera 2 must rotate is computed as:

cos∠Po2P' = (P · P') / (|P| · |P'|)    (11)
By expanding Eq. (11), the angle through which the camera should rotate, which is also the angle through which the robot's pan-tilt should turn, is obtained.
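As an illustration, the angle computation of Eq. (11) can be sketched as below. This is not part of the patent text; it assumes that P and P' are already expressed in the camera-2 coordinate system, with the optical center o2 at the origin, and the function name is an assumption:

```python
import numpy as np

def pan_tilt_rotation_angle(P, P_virtual):
    """Angle ∠P o2 P' between two 3-D points seen from the camera-2
    origin o2 (Eq. (11)); returned in degrees."""
    P = np.asarray(P, dtype=float)
    Pv = np.asarray(P_virtual, dtype=float)
    cos_theta = P.dot(Pv) / (np.linalg.norm(P) * np.linalg.norm(Pv))
    # Clamp to [-1, 1] to guard against floating-point round-off.
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```

The clamp matters in practice: with nearly identical P and P', round-off can push the cosine marginally above 1 and make arccos return NaN.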
Addressing the inspection positioning error of substation intelligent inspection robots, the embodiment of the present invention takes image feature extraction and matching as its starting point, analyzes the geometric constraint relationship between two-view images, and proposes an algorithm for computing three-dimensional image points based on ORB-FAST features. The camera positioning error of the intelligent inspection robot is thereby calibrated, reducing the inaccuracy in target positioning caused by the robot's accumulated travel error.
Example 1
The simulation software used in this example's experiment was Visual C++ 2019; the host computer had a 3.6 GHz CPU and 32 GB of memory, running 32-bit Windows 10. The robot used inside the substation was a DL-RC63 intelligent inspection robot produced by Zhejiang Dali Technology Co., Ltd. In this experiment, Zhang Zhengyou's planar-target camera calibration method was used to calibrate the intrinsic parameters of the inspection robot's camera, as shown in Table 1. The algorithm proposed by the present invention was then used to analyze images acquired by the intelligent inspection robot at the same location at two different times: taking the first captured image as the reference image, the angular offset of the second image relative to the reference image was solved according to the proposed algorithm. By automatically adjusting the robot's turntable angle according to the computed camera deflection angle, the camera mounted on the intelligent inspection robot can adjust its field of view toward the same target at different times, reducing the large deviations of the intelligent inspection robot caused by positioning errors. The experimental results show that, for images captured at different times, the proposed algorithm exploits the geometric constraint relationship between the images to achieve good self-adaptive calibration of the inspection robot's camera angle, verifying that the proposed algorithm is robust for angle self-adaptive calibration of inspection robots in the complex environment inside a substation.
Table 1  Calibration results of the camera's intrinsic parameters
Example 2
The experiment in this example compares the algorithm proposed by the present invention with other classical feature extraction algorithms, such as the SIFT feature point matching and positioning method and the SURF feature point matching and positioning method, by analyzing images of electrical cabinet instruments inside a substation actually captured by the intelligent inspection robot. For the instruments in the 50 captured images, the average target positioning time was computed for each of the three algorithms. As Table 2 shows, the algorithm proposed by the present invention has a faster processing speed and is particularly suitable for self-adaptive calibration of the camera angle of the intelligent inspection robot in a complex on-site environment. The proposed algorithm has high calibration efficiency and can provide important technical support for online intelligent detection by substation intelligent inspection robots.
Table 2  Comparison of target detection times of different algorithms
In another aspect, the present invention provides a device for self-adaptive adjustment of the angle of the airborne pan-tilt camera of an intelligent inspection robot, comprising:
a preprocessing module, configured to obtain two images captured by the intelligent inspection robot at a certain position at two different times, and to extract and match feature points in the overlapping area of the two images;
a calculation module, configured to calculate, based on the extracted matching points, the camera extrinsic parameters of the two images and the homography matrix between the two images; and
an adjustment module, configured to calculate the rotation angle of the camera based on the camera extrinsic parameters of the two images and the homography matrix between the two images, taking one image as the reference, and to adjust the pan-tilt accordingly.
As a preferred implementation, in one embodiment of the present invention, the preprocessing module is specifically configured to:
perform ORB-FAST feature point extraction in the overlapping area of the two images captured by the intelligent inspection robot at a certain position at two different times, and use the random sample consensus (RANSAC) algorithm to eliminate mismatched points from the feature point pairs, obtaining the matching points of the two images.
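The outlier-rejection step can be sketched with a minimal random sample consensus loop. This is an illustrative assumption, not the patent's implementation: for brevity it fits a pure 2-D translation as the motion model rather than the homography a real pipeline would fit, and the function name and tolerance are invented:

```python
import numpy as np

def ransac_filter_matches(pts1, pts2, n_iters=200, tol=2.0, seed=0):
    """Reject mismatched point pairs with a random-sample-consensus loop.
    Sketch only: fits a pure 2-D translation as the motion model (the
    patent's pipeline fits a homography).  Returns a boolean inlier mask."""
    pts1 = np.asarray(pts1, dtype=float)
    pts2 = np.asarray(pts2, dtype=float)
    rng = np.random.default_rng(seed)
    best_mask = np.zeros(len(pts1), dtype=bool)
    for _ in range(n_iters):
        i = rng.integers(len(pts1))          # minimal sample: one pair
        shift = pts2[i] - pts1[i]            # candidate translation
        err = np.linalg.norm(pts2 - (pts1 + shift), axis=1)
        mask = err < tol                     # pairs consistent with the model
        if mask.sum() > best_mask.sum():     # keep the largest consensus set
            best_mask = mask
    return best_mask
```

With a homography model the only structural change is the minimal sample size (four pairs instead of one) and the model fit inside the loop.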
As a preferred implementation, in one embodiment of the present invention, the calculation module is specifically configured to:
obtain, based on the principle of monocular mobile vision, the projection equations of the two images captured by the intelligent inspection robot at a certain position at the two times, as follows:

s1p1 = A1[R1 T1]XW,  s2p2 = A2[R2 T2]XW,
where s1 and s2 are the non-zero scale factors of the camera at the two times; p1 and p2 are the homogeneous two-dimensional image coordinates in the two images captured at the two times; A1 and A2 are the camera intrinsic parameters at the two times, with A1 = A2; [R1 T1] and [R2 T2] are the camera extrinsic parameters at the two times; and XW is the homogeneous three-dimensional world coordinate of the spatial point P captured by the camera;
establish the fundamental matrix F and the essential matrix E based on the projection equations of the two images as follows:

F = A2^(-T)EA1^(-1),
E = SR,

where S is the skew-symmetric matrix of the translation between the two camera positions, and R is the unified representation of the camera extrinsic rotation between the two times;
solve for the fundamental matrix F from the extracted matching points using the eight-point algorithm, and obtain the camera intrinsic parameters using Zhang's planar calibration method;
solve for the essential matrix E based on the camera intrinsic parameters and the fundamental matrix F; and
decompose the essential matrix E to obtain the camera extrinsic parameters of the two images.
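The F-then-E steps above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the patent's code: it uses the normalized eight-point algorithm for F and the relation E = K2^T F K1, and the final SVD-based recovery of [R T] from E (four candidate solutions plus a cheirality check) is omitted for brevity:

```python
import numpy as np

def eight_point_fundamental(p1, p2):
    """Normalized eight-point estimate of F from >= 8 pixel-coordinate
    correspondences; p1, p2 are (N, 2) arrays, with p2^T F p1 = 0."""
    def normalize(pts):
        c = pts.mean(axis=0)
        scale = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[scale, 0.0, -scale * c[0]],
                      [0.0, scale, -scale * c[1]],
                      [0.0, 0.0, 1.0]])
        return (pts - c) * scale, T

    n1, T1 = normalize(np.asarray(p1, dtype=float))
    n2, T2 = normalize(np.asarray(p2, dtype=float))
    x1, y1 = n1[:, 0], n1[:, 1]
    x2, y2 = n2[:, 0], n2[:, 1]
    # Each correspondence gives one row of the constraint  p2^T F p1 = 0.
    A = np.column_stack([x2*x1, x2*y1, x2, y2*x1, y2*y1, y2,
                         x1, y1, np.ones_like(x1)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2, the defining property of a fundamental matrix.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1            # undo the normalisation

def essential_from_fundamental(F, K1, K2):
    """E = K2^T F K1, projected onto the essential-matrix manifold
    (two equal non-zero singular values)."""
    E = K2.T @ F @ K1
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```

Coordinate normalization before the SVD is the standard cure for the poor conditioning of the plain eight-point algorithm on pixel coordinates.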
As a preferred implementation, in one embodiment of the present invention, the calculation module is further specifically configured to:
obtain, based on the perspective projection model of the camera, the relationship between the image coordinate system and the world coordinate system for the matching points of the two images captured at the two times, as follows:

(xa, ya, za)^T = Ka[Ra Ta]XW,  (xb, yb, zb)^T = Kb[Rb Tb]XW,
where (xa, ya, za) and (xb, yb, zb) are the homogeneous coordinates of the matched point pair in the two images; Ka and Kb are the camera intrinsic parameters of the two images, obtained by calibration; and [Ra Ta] and [Rb Tb] are the camera extrinsic parameters of the two images;
after conversion, eliminating the world coordinate XW and letting H denote the resulting 3×3 transformation (the homography induced between the two views), obtain

(xb, yb, zb)^T = H(xa, ya, za)^T;
based on the relationship between the pixel coordinates and the homogeneous coordinates of the matched point pairs, obtain:

ua = xa/za, va = ya/za (and likewise ub = xb/zb, vb = yb/zb); and
solve the above equations using four pairs of matching points to obtain the homography matrix H between the two images.
As a preferred implementation, in one embodiment of the present invention, the adjustment module is specifically configured to:
calculate, from the image coordinates of a matched point pair p and q in the two images Ia and Ib and the camera extrinsic parameters, the three-dimensional world coordinates P = (X1, Y1, Z1) of the spatial point P captured by the camera;
determine the corresponding point q' = (xa, ya) of the matching point p in image Ib in the absence of navigation and positioning errors;
calculate, based on the homography matrix H between the two images and the point q', the matching point p' of the point q' in image Ia as follows:
p' = H^(-T)q' = H^(-T)(xa, ya, 1)^T;
calculate, from the image coordinates of the points p' and q' and the camera extrinsic parameters, the three-dimensional world coordinates P' = (X'1, Y'1, Z'1) of the virtual spatial point P' corresponding to p' and q'; and
calculate the rotation angle of the camera as follows:

θ = ∠Po2P' = arccos((P · P') / (|P| · |P'|)),
where θ is the rotation angle of the camera relative to image Ia, and o2 is the origin of the coordinate system in which image Ib is located.
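The triangulation used in the first and fifth steps above (recovering P and P' from a matched pair and the calibrated projections) can be sketched with standard linear triangulation. This is an illustrative reconstruction under stated assumptions, not code from the patent; P1 and P2 denote the 3×4 projection matrices K[R T] of the two views:

```python
import numpy as np

def triangulate(p1, p2, P1, P2):
    """Linear (DLT) triangulation: recover the 3-D point from its pixel
    projections p1 = (x1, y1) and p2 = (x2, y2) under the 3x4 projection
    matrices P1 and P2."""
    (x1, y1), (x2, y2) = p1, p2
    # The cross product  x × (P X) = 0  yields two independent rows per view.
    A = np.array([x1 * P1[2] - P1[0],
                  y1 * P1[2] - P1[1],
                  x2 * P2[2] - P2[0],
                  y2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]             # dehomogenise to (X, Y, Z)
```

With the world frame placed at camera 2, as the embodiment specifies, the returned coordinates of P and P' feed directly into the angle formula above.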
Those skilled in the art will appreciate that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to the processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture comprising instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that the specific implementations of the present invention may still be modified or equivalently replaced, and any modification or equivalent replacement that does not depart from the spirit and scope of the present invention shall fall within the protection scope of the claims of the present invention.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210586045.6A CN114862969A (en) | 2022-05-27 | 2022-05-27 | A method and device for self-adaptive adjustment of the angle of an airborne pan-tilt camera of an intelligent inspection robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210586045.6A CN114862969A (en) | 2022-05-27 | 2022-05-27 | A method and device for self-adaptive adjustment of the angle of an airborne pan-tilt camera of an intelligent inspection robot |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114862969A true CN114862969A (en) | 2022-08-05 |
Family
ID=82640351
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210586045.6A Pending CN114862969A (en) | 2022-05-27 | 2022-05-27 | A method and device for self-adaptive adjustment of the angle of an airborne pan-tilt camera of an intelligent inspection robot |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114862969A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115471573A (en) * | 2022-09-15 | 2022-12-13 | 齐丰科技股份有限公司 | Preset Offset Correction Method for Substation PTZ Camera Based on 3D Reconstruction |
CN116563336A (en) * | 2023-04-03 | 2023-08-08 | 国网江苏省电力有限公司南通供电分公司 | Self-adaptive positioning algorithm for digital twin machine room target tracking |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2950791A1 (en) * | 2013-08-19 | 2015-02-26 | State Grid Corporation Of China | Binocular visual navigation system and method based on power robot |
CN110728715A (en) * | 2019-09-06 | 2020-01-24 | 南京工程学院 | Camera angle self-adaptive adjusting method of intelligent inspection robot |
CN110969668A (en) * | 2019-11-22 | 2020-04-07 | 大连理工大学 | A Stereo Calibration Algorithm for Telephoto Binocular Camera |
2022-05-27: Application CN202210586045.6A filed (CN); published as CN114862969A (status: pending)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2950791A1 (en) * | 2013-08-19 | 2015-02-26 | State Grid Corporation Of China | Binocular visual navigation system and method based on power robot |
CN110728715A (en) * | 2019-09-06 | 2020-01-24 | 南京工程学院 | Camera angle self-adaptive adjusting method of intelligent inspection robot |
CN110969668A (en) * | 2019-11-22 | 2020-04-07 | 大连理工大学 | A Stereo Calibration Algorithm for Telephoto Binocular Camera |
Non-Patent Citations (2)
Title |
---|
SUN WENHE; ZHANG GUOWEI; LU QIUHONG: "Research on the vision algorithm of inspection robots based on improved ORB", Modern Computer (Professional Edition), no. 23 *
QIU HAIYANG: "Simultaneous Localization and Mapping Technology for Underwater Robots", Jilin University Press, pages 89-92 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115471573A (en) * | 2022-09-15 | 2022-12-13 | 齐丰科技股份有限公司 | Preset Offset Correction Method for Substation PTZ Camera Based on 3D Reconstruction |
CN116563336A (en) * | 2023-04-03 | 2023-08-08 | 国网江苏省电力有限公司南通供电分公司 | Self-adaptive positioning algorithm for digital twin machine room target tracking |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110728715B (en) | A method for self-adaptive adjustment of the camera angle of an intelligent inspection robot | |
CN105894499B (en) | A kind of space object three-dimensional information rapid detection method based on binocular vision | |
CN107301654B (en) | A multi-sensor high-precision real-time localization and mapping method | |
CN104484648B (en) | Robot variable viewing angle obstacle detection method based on contour recognition | |
CN102842117B (en) | Method for correcting kinematic errors in microscopic vision system | |
CN106408609B (en) | A kind of parallel institution end movement position and posture detection method based on binocular vision | |
CN103700140B (en) | Spatial modeling method used for linkage of single gun camera and multiple dome cameras | |
CN107843251B (en) | Pose Estimation Methods for Mobile Robots | |
CN110243307B (en) | An automated three-dimensional color imaging and measurement system | |
CN106981081A (en) | A kind of degree of plainness for wall surface detection method based on extraction of depth information | |
CN107358633A (en) | A Calibration Method of Internal and External Parameters of Multiple Cameras Based on Three-point Calibration Objects | |
CN101467887A (en) | X ray perspective view calibration method in operation navigation system | |
WO2011079258A1 (en) | System and method for runtime determination of camera miscalibration | |
Cvišić et al. | Recalibrating the KITTI dataset camera setup for improved odometry accuracy | |
KR102559963B1 (en) | Systems, methods and markers for determining the position of movable objects in space | |
WO2020063058A1 (en) | Calibration method for multi-degree-of-freedom movable vision system | |
Zhang et al. | A novel absolute localization estimation of a target with monocular vision | |
CN114862969A (en) | A method and device for self-adaptive adjustment of the angle of an airborne pan-tilt camera of an intelligent inspection robot | |
CN111060006A (en) | A Viewpoint Planning Method Based on 3D Model | |
CN117893610B (en) | Aviation assembly robot gesture measurement system based on zoom monocular vision | |
Dang et al. | Self-calibration for active automotive stereo vision | |
CN112362034A (en) | Solid engine multi-cylinder section butt joint guiding measurement algorithm based on binocular vision | |
JP2019032660A (en) | Imaging system and imaging method | |
CN113884017A (en) | A method and system for non-contact deformation detection of insulators based on trinocular vision | |
Thangarajah et al. | Vision-based registration for augmented reality-a short survey |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20220805 |
RJ01 | Rejection of invention patent application after publication |