CN108335332A - Method for measuring the central axis of shaft parts based on binocular vision - Google Patents

- Publication number: CN108335332A (application CN201810057628.3A)
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G01B11/27—Measuring arrangements characterised by the use of optical techniques for testing the alignment of axes
- G06T7/13—Edge detection
- G06T7/60—Analysis of geometric attributes
Description
Technical Field
The invention belongs to the field of advanced measurement technology and relates to a method for measuring the central axis of shaft parts based on binocular vision; it is especially suitable for non-contact industrial inspection and vision-based robot navigation systems.
Background Art
Shaft parts are ubiquitous in industrial applications, and accurate measurement of their three-dimensional pose is of great significance in industrial inspection. In modern manufacturing in particular, automated assembly has gradually replaced manual labour for complex assembly operations in order to improve efficiency and ensure product quality and stability. Shaft-hole assembly is a frequently encountered task in parts assembly: when a manipulator grasps a shaft part and assembles it, the three-dimensional pose of the part's central axis must be measured accurately so that the shaft can be inserted into the hole correctly, collisions between parts and the resulting defects can be avoided, and the assembly task can be completed smoothly.
With the development of machine vision technology, research and application trials that introduce machine vision systems into shaft and hole dimension measurement have been carried out both in China and abroad. However, most current research focuses on the recognition and parameter measurement of spatial circular holes and their elliptical images; studies on measuring the dimensions and three-dimensional pose of shaft parts are relatively scarce. Cui Yanping et al. studied a binocular-vision method for measuring the three-dimensional spatial attitude of a body of revolution; the method measures the attitude without matching feature points, but because shaft surfaces are highly reflective and illumination is often uneven in practice, extracting the sub-pixel line equation of the image-plane generatrix with this method degrades accuracy, and its verification procedure requires a precision rotary table, which is costly. Sun et al., in "Shaft diameter measurement using a digital image", proposed a shaft measurement method based on image processing; being founded on monocular vision rather than stereo geometry, it inevitably depends heavily on image quality and on the processing result and therefore lacks robustness, and to reach higher accuracy it must be recalibrated using a shaft of known diameter as prior knowledge, which limits its applicability. In addition, the target-workpiece axis calculation method proposed by Zhang Kai of Xi'an University of Technology requires both a grey-value threshold and an RGB colour threshold during image processing, so with an ordinary monochrome camera the workpiece may not be recognised and the calculation may even fail.
In summary, a low-cost method that requires no manual intervention and can accurately measure the three-dimensional attitude of the central axis of shaft parts has considerable application value.
Summary of the Invention
In order to solve the problems in the background art, the present invention proposes a method for measuring the central axis of shaft parts based on binocular vision. The method measures the central axis of a shaft part in space effectively, with high accuracy and high speed; testing has shown it to be simple and practical, it greatly reduces experimental cost and improves efficiency, and it provides a new approach to verifying binocular-vision algorithms.
To achieve the above purpose, the technical solution adopted by the present invention comprises the following steps:
1) Calibrate the left and right cameras of the binocular vision system individually (monocular calibration) using Zhang Zhengyou's calibration method; the left and right cameras together form the binocular vision system, and the calibration yields the intrinsic parameters, extrinsic parameters and radial distortion parameters of both cameras. Compute the structural parameters of the binocular system, i.e. the relative pose between the left and right cameras, using a binocular stereo calibration method; then use this relative pose to optimise the intrinsic, extrinsic and radial distortion parameters of both cameras, and finally compute the projection matrices of the left and right cameras from their intrinsic and extrinsic parameters;
2) Place the shaft part within the common field of view of the left and right cameras and photograph it with both cameras to obtain two images (a left image and a right image). Apply Gaussian filtering, grey-scale conversion and thresholding to both images in turn, and then rectify the left and right images using the radial distortion parameters of the left and right cameras obtained in step 1);
3) Process the rectified left and right images to obtain the shaft contour in each image;
4) Use the shaft contours of the left and right images to obtain the central axis of the shaft part.
The intrinsic parameters, extrinsic parameters, radial distortion parameters and projection matrices of the left and right cameras obtained in step 1) comprise the intrinsic parameter matrix A_l of the left camera, the extrinsic parameter matrices R_l and T_l of the left camera, the radial distortion parameters K_l of the left camera, the projection matrix M_l of the left camera, the intrinsic parameter matrix A_r of the right camera, the extrinsic parameter matrices R_r and T_r of the right camera, the radial distortion parameters K_r of the right camera, and the projection matrix M_r of the right camera.
The relative pose between the left and right cameras obtained in step 1) comprises a rotation matrix R and a translation vector T.
In step 1), the relative pose is used to optimise the intrinsic, extrinsic and radial distortion parameters of the left and right cameras, and the projection matrices of the two cameras are then computed from the optimised intrinsic and extrinsic parameters, specifically:
Using the rotation matrix R and the translation vector T, the intrinsic, extrinsic and radial distortion parameters obtained from each camera's individual calibration are optimised according to a 3D-constraint optimisation method to improve calibration accuracy; the optimised intrinsic and extrinsic parameters are then used to compute the projection matrix M_l of the left camera and the projection matrix M_r of the right camera.
The 3D-constraint optimisation follows the computation proposed by Yi Cui in "Precise calibration of binocular vision system used for vision measurement", pages 8-10.
Step 3) is specifically: perform contour detection on the rectified left and right images and on a template image; match all contours detected in the left and right images against the template contour obtained from the template image; the contour in each image that matches the template contour is taken as the shaft contour, i.e. the contour corresponding to the shaft part.
The template image contains one and only one rectangular box matching the shaft part. The rectangular box matches the shaft part in size and shape; in the specific implementation the rectangular box is placed in the middle of the template image.
Step 4) is specifically:
4.1) Approximate the shaft contour in each of the left and right images with its minimum rotated circumscribed rectangle and extract the long sides of each rectangle. The two shaft contours in the left and right images yield four long sides l_ab, l_cd, l_gh and l_ij, where l_ab and l_cd are the two long sides of the shaft contour in the left image and l_gh and l_ij are the two long sides of the shaft contour in the right image;
4.2) Compute, with the following equations, the four spatial planes S_OAB, S_OCD, S_O'GH and S_O'IJ that pass through the optical centres of the left and right cameras and are tangent to the surface of the shaft part, and then obtain the normal vectors N_OAB, N_OCD, N_O'GH and N_O'IJ of the four planes:
$$l_{ab} M_l S_{OAB} = 0, \qquad l_{cd} M_l S_{OCD} = 0, \qquad l_{gh} M_r S_{O'GH} = 0, \qquad l_{ij} M_r S_{O'IJ} = 0$$
where l_ab and l_cd are the two long sides of the shaft contour in the left image, and l_gh and l_ij are the two long sides of the shaft contour in the right image; M_l and M_r are the projection matrices of the left and right cameras; S_OAB and S_OCD are the two spatial planes passing through the optical centre of the left camera and tangent to the shaft part along its two contour generatrices, and S_O'GH and S_O'IJ are the two spatial planes passing through the optical centre of the right camera and tangent to the shaft part along its two contour generatrices;
4.3) From the spatial planes S_OAB and S_OCD and their normal vectors N_OAB and N_OCD, compute the bisecting plane S_l of the dihedral angle between S_OAB and S_OCD; from the spatial planes S_O'GH and S_O'IJ and their normal vectors N_O'GH and N_O'IJ, compute the bisecting plane S_r of the dihedral angle between S_O'GH and S_O'IJ; the intersection line of the two bisecting planes S_l and S_r is taken as the central axis of the shaft part.
Step 4.1) is specifically:
4.1.1) Apply the Canny operator to the rectified left and right images and to the template image for edge detection; store all edges detected in the left image as the left edge set vector_left, store all edges detected in the right image as the right edge set vector_right, and record the template edge obtained from edge detection of the template image as mode_edge;
4.1.2) Traverse every element of the left edge set vector_left and of the right edge set vector_right, match each element against the template edge mode_edge according to the template matching principle, and compute a matching score for each; store the scores of all edges of the left image as the left score set vector_LeftScores and the scores of all edges of the right image as the right score set vector_RightScores;
In the specific implementation of this step, the matchTemplate function of OpenCV is used to compute the matching scores (a sketch of this matching step is given after this list).
4.1.3) Sort the elements of the left score set vector_LeftScores and of the right score set vector_RightScores in ascending order, and select the contour corresponding to the smallest score in each set as the imaging contour of the shaft in the left and right cameras respectively;
4.1.4) Approximate the two imaging contours obtained in step 4.1.3) with the minimum rotated rectangle method, obtaining two circumscribed rectangles rect_left and rect_right that closely fit the contours;
4.1.5) Take the two long sides of the circumscribed rectangle rect_left as the two long sides l_ab and l_cd of the shaft contour in the left image, take the two long sides of the circumscribed rectangle rect_right as the two long sides of the shaft contour in the right image, and compute the line equation of each long side.
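A minimal sketch of steps 4.1.1) to 4.1.5) in Python with OpenCV and NumPy follows. The Canny thresholds, the TM_SQDIFF_NORMED criterion (for which a smaller score means a better match) and the crop-resize-match strategy for scoring each contour against the template edge are assumptions made for illustration; the description above only prescribes Canny edge detection, matchTemplate-based scoring and minimum-rotated-rectangle fitting.

```python
import cv2
import numpy as np

def shaft_long_sides(rectified_img, template_img):
    """Select the contour best matching the template edge and return the line
    equations (a, b, c), with a*u + b*v + c = 0, of the two long sides of its
    minimum rotated circumscribed rectangle."""
    edges = cv2.Canny(rectified_img, 50, 150)          # step 4.1.1: edge detection
    tmpl_edge = cv2.Canny(template_img, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

    scores = []                                         # step 4.1.2: score every contour
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        patch = edges[y:y + h, x:x + w]
        patch = cv2.resize(patch, (tmpl_edge.shape[1], tmpl_edge.shape[0]))
        # TM_SQDIFF_NORMED: smaller value means a better match (assumed criterion)
        score = cv2.matchTemplate(patch, tmpl_edge, cv2.TM_SQDIFF_NORMED).min()
        scores.append(score)

    best = contours[int(np.argmin(scores))]             # step 4.1.3: smallest score wins
    rect = cv2.minAreaRect(best)                        # step 4.1.4: min rotated rectangle
    box = cv2.boxPoints(rect)                           # four corner points, in order

    # step 4.1.5: the two long sides are the longer pair of opposite edges of the box
    d01 = np.linalg.norm(box[0] - box[1])
    d12 = np.linalg.norm(box[1] - box[2])
    pairs = [(box[0], box[1]), (box[2], box[3])] if d01 > d12 else [(box[1], box[2]), (box[3], box[0])]
    lines = []
    for p, q in pairs:
        a, b = q[1] - p[1], p[0] - q[0]                 # normal of the line through p and q
        lines.append(np.array([a, b, -(a * p[0] + b * p[1])]))
    return lines
```

Calling this on the rectified left image with the template would give l_ab and l_cd, and on the rectified right image l_gh and l_ij.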
An implementation of the present invention also proposes a new verification model. The model has a known radius and makes its central axis visible; the line fitted to and reconstructed from this visible axis is compared with the result obtained by the method of the present invention in order to verify the validity and accuracy of the method's measurement results.
The beneficial effects of the present invention are:
1. Through binocular vision measurement, the method of the present invention can automatically measure the three-dimensional spatial attitude of the central axis of a shaft part with high accuracy. It has the advantage of being non-contact and is of high application value in situations that cannot be measured by traditional means, being particularly suitable for non-contact industrial inspection and vision-based robot navigation systems.
2. The proposed approach of approximating the contour lines of the shaft part on the imaging planes of the left and right cameras with minimum rotated circumscribed rectangles compensates for detection errors caused by uneven illumination and the high reflectivity of the part surface and improves accuracy. The self-designed verification model is simple and practical, avoids an expensive precision test bench, reduces experimental cost, and provides a new approach to verifying binocular-vision algorithms.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of the principle of the method of the present invention.
Fig. 2 is a schematic flow chart of the method of the present invention.
Fig. 3 is a schematic diagram of the verification model used in the embodiment.
Detailed Description of Embodiments
For a better understanding of the present invention, the technical solution of the present invention is described in detail below with reference to the drawings and an embodiment.
Fig. 1 shows the binocular stereo vision system. O_lX_lY_lZ_l and O_rX_rY_rZ_r are the left and right camera coordinate systems, o_lu_lv_l and o_ru_rv_r are the left and right image coordinate systems in pixel units, and O_wX_wY_wZ_w is the world coordinate system, whose Z axis points towards the origin O_l of the left camera coordinate system. Let the rotation matrix and translation vector between the world coordinate system and the left camera coordinate system be R_o and t_o respectively, and write R_o = [r_1 r_2 r_3], where the three-dimensional column vector r_i denotes the i-th column of R_o, i = 1, 2, 3. Let the rotation matrix and translation vector between the left and right camera coordinate systems be R and T respectively; then

$$X_r = R\,X_l + T,$$

where X_l and X_r are the coordinates of the same spatial point in the left and right camera coordinate systems, and R and T are determined by binocular camera calibration.
In Fig. 1, EF denotes the central axis of the shaft part, abcd denotes the contour of the shaft part on the imaging plane of the left camera, and ghij denotes the contour of the shaft part on the imaging plane of the right camera.
The implementation steps of the method of the present invention are described in detail below:
1. For the left and right cameras, use Zhang Zhengyou's calibration method (A Flexible New Technique for Camera Calibration, Zhengyou Zhang, December 2, 1998) to determine the intrinsic parameter matrix A_l and radial distortion parameters K_l of the left camera and the intrinsic parameter matrix A_r and radial distortion parameters K_r of the right camera, and compute the projection matrix M_l of the left camera and the projection matrix M_r of the right camera. Use the binocular stereo calibration method (Jean-Yves Bouguet, Camera Calibration Toolbox for Matlab, MRL, Intel Corp.) to compute the structural parameters of the binocular vision system, i.e. the relative pose between the left and right cameras: the rotation matrix R and the translation vector T. The intrinsic parameter matrix has the form:
$$A_l = \begin{bmatrix} \alpha_l & \gamma_l & u_{0l} \\ 0 & \beta_l & v_{0l} \\ 0 & 0 & 1 \end{bmatrix}, \qquad A_r = \begin{bmatrix} \alpha_r & \gamma_r & u_{0r} \\ 0 & \beta_r & v_{0r} \\ 0 & 0 & 1 \end{bmatrix}$$

where α_l and β_l are the effective focal lengths of the left camera along the x and y axes, u_{0l} and v_{0l} are the principal point coordinates of the left camera's imaging plane, and γ_l is the skew parameter of the left camera's coordinate axes, ideally 0; α_r and β_r are the effective focal lengths of the right camera along the x and y axes, u_{0r} and v_{0r} are the principal point coordinates of the right camera's imaging plane, and γ_r is the skew parameter of the right camera's coordinate axes, ideally 0.
In this example, the camera calibration and computation results are:
K_l = [-0.0114, -0.0578, -1.3177]

K_r = [-0.0233, -0.1116, -0.4499]

T = [-59.7610, 0.4727, 0.7066]
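A minimal sketch of such a calibration with OpenCV is given below. The use of cv2.calibrateCamera and cv2.stereoCalibrate, the choice of the left camera frame as the world frame for the projection matrices, and the chessboard-corner inputs are assumptions for illustration; the embodiment itself relies on Zhang's method and Bouguet's Matlab toolbox. Note also that OpenCV's distortion vector contains tangential terms (k1, k2, p1, p2, k3) by default, whereas the embodiment uses three radial coefficients.

```python
import cv2
import numpy as np

def stereo_calibrate(obj_pts, img_pts_l, img_pts_r, image_size):
    """obj_pts: list of (N, 3) chessboard corner coordinates in the board frame;
    img_pts_l / img_pts_r: matching (N, 2) pixel coordinates in the left/right views."""
    # Monocular calibration of each camera (Zhang's method): intrinsics A, distortion K
    _, A_l, K_l, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, image_size, None, None)
    _, A_r, K_r, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, image_size, None, None)

    # Stereo calibration: relative pose (R, T) between the cameras, refining the
    # monocular results under the fixed-geometry constraint
    _, A_l, K_l, A_r, K_r, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts_l, img_pts_r, A_l, K_l, A_r, K_r, image_size,
        flags=cv2.CALIB_USE_INTRINSIC_GUESS)

    # Projection matrices M = A [R | t]; here the world frame is taken to coincide
    # with the left camera frame, so M_l = A_l [I | 0] and M_r = A_r [R | T]
    M_l = A_l @ np.hstack([np.eye(3), np.zeros((3, 1))])
    M_r = A_r @ np.hstack([R, T.reshape(3, 1)])
    return A_l, K_l, A_r, K_r, R, T, M_l, M_r
```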
2. Place the binocular vision system near the shaft part to be measured, making sure that the part is within the common field of view of the left and right cameras; keep the background as simple as possible and orient the axis of the shaft part as parallel as possible to the line joining the origins of the left and right camera coordinate systems, i.e. the baseline of the binocular system. Photograph the shaft part with both cameras simultaneously, so that the left camera produces an image containing the shaft part and, correspondingly, the right camera also produces an image containing the shaft part. Correct the distortion of the left image using the radial distortion parameters K_l of the left camera to obtain a distortion-free left image, denoted plane_l. Likewise, correct the distortion of the right image using the radial distortion parameters K_r of the right camera to obtain a distortion-free right image, denoted plane_r.
The specific rectification procedure is as follows. For the left image, let an image point containing distortion information have coordinates $(\breve{u}, \breve{v})$ in the pixel image coordinate system and normalised image coordinates $(\breve{x}, \breve{y})$, and let the corresponding distortion-free image point have coordinates $(u, v)$ and $(x, y)$ respectively. According to the literature (D. C. Brown, Close-range camera calibration, Photogrammetric Engineering, 37(8):855-866, 1971), the radial distortion model is

$$\breve{x} = x\,\big(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\big), \qquad \breve{y} = y\,\big(1 + k_1 r^2 + k_2 r^4 + k_3 r^6\big), \qquad r^2 = x^2 + y^2,$$

where K_l = [k_1, k_2, k_3] are the radial distortion parameters of the left camera. Using the coordinate transformation between normalised and pixel image coordinates,

$$\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A_l \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} \breve{u} \\ \breve{v} \\ 1 \end{bmatrix} = A_l \begin{bmatrix} \breve{x} \\ \breve{y} \\ 1 \end{bmatrix},$$

where A_l is the intrinsic parameter matrix of the left camera (both K_l and A_l are determined by the monocular calibration of the camera), and taking the skew γ_l ≈ 0, one obtains

$$\breve{u} = u + (u - u_{0l})\big(k_1 r^2 + k_2 r^4 + k_3 r^6\big), \qquad \breve{v} = v + (v - v_{0l})\big(k_1 r^2 + k_2 r^4 + k_3 r^6\big).$$

Since these are nonlinear equations in the undistorted coordinates, to simplify the solution they are approximated by evaluating the radial terms at the distorted coordinates:

$$u \approx \breve{u} - (\breve{u} - u_{0l})\big(k_1 \breve{r}^2 + k_2 \breve{r}^4 + k_3 \breve{r}^6\big), \qquad v \approx \breve{v} - (\breve{v} - v_{0l})\big(k_1 \breve{r}^2 + k_2 \breve{r}^4 + k_3 \breve{r}^6\big), \qquad \breve{r}^2 = \breve{x}^2 + \breve{y}^2.$$

Using these two formulas, distortion correction can be applied to every image point of the left and right images, yielding the distortion-free image plane_l. The right image is rectified in exactly the same way as the left image, so the details are not repeated here.
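As an illustration, a minimal per-point undistortion sketch in Python/NumPy is given below. It implements the approximate correction described above; the three-coefficient radial model and the zero-skew intrinsic matrix are assumptions carried over from the calibration step (in practice cv2.undistort would produce an equivalent result).

```python
import numpy as np

def undistort_points(pts_uv, A, K):
    """Approximately remove radial distortion from pixel points.
    pts_uv: (N, 2) distorted pixel coordinates; A: 3x3 intrinsic matrix (zero skew assumed);
    K: radial distortion coefficients [k1, k2, k3]."""
    fx, fy = A[0, 0], A[1, 1]
    u0, v0 = A[0, 2], A[1, 2]
    k1, k2, k3 = K

    u_d, v_d = pts_uv[:, 0], pts_uv[:, 1]
    # distorted normalised coordinates and squared radius
    x_d, y_d = (u_d - u0) / fx, (v_d - v0) / fy
    r2 = x_d ** 2 + y_d ** 2
    radial = k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # approximate correction: evaluate the radial terms at the distorted radius
    u = u_d - (u_d - u0) * radial
    v = v_d - (v_d - v0) * radial
    return np.stack([u, v], axis=1)
```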
3. Binarise the rectified left image plane_l and right image plane_r with an adaptive threshold, computed as follows:
Let the variable t take each integer value in the grey-level range 0 to 255 in turn (256 grey levels in total). Each value of t divides the left image into two parts, background and foreground; for each value compute the following two quantities:
$$u = w_0 u_0 + w_1 u_1$$

$$g = w_0 (u - u_0)^2 + w_1 (u - u_1)^2$$
where w_0 is the proportion of background pixels in the whole image, u_0 is the mean grey level of the background pixels, w_1 is the proportion of foreground pixels in the whole image, u_1 is the mean grey level of the foreground pixels, u is the mean grey level of the whole image, and g is the between-class variance of the foreground and background grey levels. Comparing the 256 values of g so obtained, the value of t that maximises g is the optimal threshold, and the image is binarised with this optimal threshold.
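This is the classical Otsu criterion; a minimal sketch of the threshold search in Python/NumPy follows. The loop over all 256 grey levels mirrors the description above, and in practice cv2.threshold with THRESH_OTSU would give the same threshold.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold t that maximises the between-class variance g."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_g = 0, -1.0
    for t in range(256):
        w0, w1 = prob[:t + 1].sum(), prob[t + 1:].sum()   # background / foreground weights
        if w0 == 0 or w1 == 0:
            continue
        u0 = (levels[:t + 1] * prob[:t + 1]).sum() / w0   # background mean grey level
        u1 = (levels[t + 1:] * prob[t + 1:]).sum() / w1   # foreground mean grey level
        u = w0 * u0 + w1 * u1                             # overall mean grey level
        g = w0 * (u - u0) ** 2 + w1 * (u - u1) ** 2       # between-class variance
        if g > best_g:
            best_t, best_g = t, g
    return best_t

# binary = (gray > otsu_threshold(gray)).astype(np.uint8) * 255
```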
Then perform contour detection and match all detected contours against the template contour, computing a matching similarity score for each; the contour of the target shaft in each of the left and right images is selected according to its score. Because the axis of the shaft part is roughly parallel to the line joining the origins of the left and right camera coordinate systems, the image of the shaft on each imaging plane is approximately a rectangle; the shaft contour matched in each image is therefore approximated with its minimum rotated circumscribed rectangle, from which the four axis-direction edges l_ab, l_cd, l_gh and l_ij on the left image plane_l and the right image plane_r are computed. Using the minimum rotated circumscribed rectangle also compensates for contour segmentation errors caused by specular highlights on the shaft surface and by uneven illumination.
The values of l_ab, l_cd, l_gh and l_ij computed in this example are:

l_ab: [463.23004, -1358.9934, 847808]

l_cd: [463.23004, -1358.9935, 292788.81]

l_gh: [459.90479, -1310.1235, 987763.63]

l_ij: [459.90479, -1310.1235, 432691.72]
4. From the four axis-direction contour edges l_ab, l_cd, l_gh and l_ij on the left image plane_l and right image plane_r, and from the left camera projection matrix M_l and the right camera projection matrix M_r, compute the four planes S_OAB, S_OCD, S_O'GH and S_O'IJ that pass through the optical centres of the left and right cameras and are tangent to the side surface of the spatial shaft part.
The underlying principle (Cui Yanping et al., Research on a method for measuring the three-dimensional spatial attitude of bodies of revolution, Chinese Journal of Sensors and Actuators, Jan. 2007) is as follows: given that the generatrices AB and CD of the spatial shaft part in Fig. 1 project to ab and cd on the imaging plane of the left camera, and the generatrices GH and IJ project to gh and ij on the imaging plane of the right camera, the equations of the planes S_OAB, S_OCD, S_O'GH and S_O'IJ in the world coordinate system are:
$$l_{ab} M_l S_{OAB} = 0, \qquad l_{cd} M_l S_{OCD} = 0, \qquad l_{gh} M_r S_{O'GH} = 0, \qquad l_{ij} M_r S_{O'IJ} = 0$$
At the same time, the normal vectors N_OAB, N_OCD, N_O'GH and N_O'IJ of the four spatial planes are obtained.
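In homogeneous coordinates, the plane through a camera centre whose image is the line l under projection matrix M has coefficient vector M^T l, since every world point X on the plane satisfies l^T M X = 0. A minimal sketch of this step, continuing the Python/NumPy conventions of the earlier sketches, is:

```python
import numpy as np

def tangent_plane(line, M):
    """Back-project an image line onto the plane through the camera centre.
    line: (a, b, c) with a*u + b*v + c = 0; M: 3x4 projection matrix.
    Returns the 4-vector (A, B, C, D) of the plane A*X + B*Y + C*Z + D = 0
    and its unit normal vector."""
    l = np.asarray(line, dtype=np.float64)
    S = M.T @ l                          # plane coefficients: l^T M X = 0 for all X on the plane
    N = S[:3] / np.linalg.norm(S[:3])    # unit normal of the plane
    return S, N

# S_OAB, N_OAB = tangent_plane(l_ab, M_l), and similarly for the other three planes
```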
The following were computed in this example:
S_OAB: [1744510.4, -5117928.5, 224126.41, 0]

S_OCD: [1744510.4, -5117929, -330892.88, 0]

S_O'GH: [1731987.6, -4933886, 403896.97, -1.0351583e+008]

S_O'IJ: [1731987.6, -4933886, -151174.95, -1.0351583e+008]

N_OAB: [0.32235768, -0.94571155, 0.041414984]

N_OCD: [0.32235768, -0.94571155, 0.041414984]

N_O'GH: [0.33024019, -0.94075006, 0.077011526]

N_O'IJ: [0.3310855, -0.94315809, -0.028898496]
5. Using basic geometric principles, the bisecting plane S_l of the dihedral angle between S_OAB and S_OCD is computed from the tangent planes S_OAB and S_OCD and their normal vectors N_OAB and N_OCD; the bisecting plane S_r of the dihedral angle between S_O'GH and S_O'IJ is computed from the tangent planes S_O'GH and S_O'IJ and their normal vectors N_O'GH and N_O'IJ. The intersection line of the two bisecting planes S_l and S_r is the central axis of the spatial shaft.
The values of S_l and S_r computed in this example are:
S_l: [0.64438975, -1.8904679, -0.019666974, 0]

S_r: [0.66132569, -1.8839082, 0.048113029, -39.525501]
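A minimal sketch of step 5 follows. It forms the bisecting plane as the sum of the two unit-normalised plane vectors, which assumes the two tangent planes are consistently oriented, as they are when both are obtained as M^T l from consistently oriented line vectors; with the example values listed above this sum reproduces, up to rounding, the reported S_l and S_r. The axis is then represented by a point and a unit direction.

```python
import numpy as np

def bisector_plane(S1, S2):
    """Plane bisecting the dihedral angle between planes S1 and S2 (4-vectors),
    assuming the two plane vectors are consistently oriented."""
    S1 = np.asarray(S1, dtype=np.float64) / np.linalg.norm(S1[:3])
    S2 = np.asarray(S2, dtype=np.float64) / np.linalg.norm(S2[:3])
    return S1 + S2

def plane_intersection_line(P1, P2):
    """Intersection line of two planes, returned as (point, unit direction)."""
    n1, n2 = P1[:3], P2[:3]
    d = np.cross(n1, n2)
    d = d / np.linalg.norm(d)                     # direction of the intersection line
    # one point lying on both planes: solve n1.x = -d1, n2.x = -d2 (minimum-norm solution)
    A = np.stack([n1, n2])
    b = -np.array([P1[3], P2[3]])
    point = np.linalg.lstsq(A, b, rcond=None)[0]
    return point, d

# S_l = bisector_plane(S_OAB, S_OCD); S_r = bisector_plane(S_OGH, S_OIJ)
# axis_point, axis_dir = plane_intersection_line(S_l, S_r)
```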
Finally, a verification model was designed for this embodiment, as shown in Fig. 3; the model consists of two parts, I and II. Part I is a plain shaft with a radius of 30 mm and a length of 180 mm, whose central axis is measured with the method proposed by the present invention. Part II is a quarter section of a shaft, coaxial with Part I and of the same radius, with a length of 100 mm; this part of the model makes the central axis visible. By comparing a line fitted to and reconstructed from the visible central axis with the measurement of Part I obtained by the proposed method, the effectiveness and accuracy of the method of the present invention are verified.
The fitting result for the visible line of the verification model and the measurement result of the present invention in this embodiment are given in Table 1 below (a spatial line is represented as the intersection of two spatial planes):
Table 1
From these results, the angle between the fitted central axis and the central axis reconstructed by the algorithm is calculated to be 0.54°, and the distance between them 1.11 mm. It can therefore be seen that the shaft-part central-axis measurement method proposed by the present invention, based on binocular vision and the minimum circumscribed rectangle, achieves high accuracy and provides support for the automated measurement of shaft parts.
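The 0.54° angle and 1.11 mm distance can be computed directly from point-direction representations of the fitted and reconstructed axes; a minimal sketch follows. Using the common-perpendicular distance between the two generally skew lines is an assumption about how the reported distance would be obtained.

```python
import numpy as np

def line_angle_and_distance(p1, d1, p2, d2):
    """Angle (degrees) and shortest distance between two 3D lines,
    each given by a point p and a direction d."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    angle = np.degrees(np.arccos(np.clip(abs(d1 @ d2), 0.0, 1.0)))
    n = np.cross(d1, d2)
    if np.linalg.norm(n) < 1e-12:                 # parallel lines
        dist = np.linalg.norm(np.cross(p2 - p1, d1))
    else:                                         # skew lines: common perpendicular
        dist = abs((p2 - p1) @ n) / np.linalg.norm(n)
    return angle, dist
```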
Claims (8)
Priority Applications (1)
- CN201810057628.3A (priority date 2018-01-22, filing date 2018-01-22): Method for measuring the central axis of shaft parts based on binocular vision

Publications (1)
- CN108335332A, published 2018-07-27
Non-Patent Citations (6)
- Yi Cui et al., "Precise calibration of binocular vision system used for vision measurement", Optics Express.
- Cui Yanping et al., "Research on a method for measuring the three-dimensional spatial attitude of bodies of revolution", Chinese Journal of Sensors and Actuators.
- Qi Xiaoling, "Research on multi-dimension measurement of parts based on binocular vision", China Master's Theses Full-text Database, Information Science and Technology series.
- Luo Qingsheng et al., "Bionic Quadruped Robot Technology", Beijing Institute of Technology Press, 30 April 2016.
- Mo Deju et al., "Digital Image Processing", Xidian University Press, 28 February 2015.
- Gong Shengrong et al., "Digital Image Processing and Analysis", Tsinghua University Press, 31 May 2014.
Legal Events
- PB01: Publication (application publication date: 2018-07-27)
- SE01: Entry into force of request for substantive examination
- WD01: Invention patent application deemed withdrawn after publication