
CN102107179A - Method for controlling single-layer leather gluing based on binocular vision - Google Patents


Info

Publication number
CN102107179A
CN102107179A (application No. CN201010587709, also published as CN102107179B)
Authority
CN
China
Prior art keywords
gluing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010105877098A
Other languages
Chinese (zh)
Other versions
CN102107179B (en)
Inventor
赵燕伟
杨威
胡峰俊
陈建
吴茂敏
金亦挺
钟允辉
陈胜勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN 201010587709 priority Critical patent/CN102107179B/en
Publication of CN102107179A publication Critical patent/CN102107179A/en
Application granted granted Critical
Publication of CN102107179B publication Critical patent/CN102107179B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Processing (AREA)

Abstract

The invention provides a method for controlling single-layer leather gluing based on binocular vision. As a binocular camera and a three-shaft gluing manipulator are adopted, the method comprises the following steps of: 1) performing three-dimensional calibration on the fixed binocular camera; 2) collecting a binocular image of single-layer leather and extracting an edge outline of the binocular image; 3) extracting characteristics according to the edge outline image in the step 2); 4) matching according to the characteristic points in the step 3); 5) reducing real coordinate values of a gluing platform system corresponding to the characteristic points in the step 4) according to a weight projection matrix value; and 6) inputting the real coordinate values of the characteristic points into a gluing coordinate queue of the three-shaft gluing manipulator to control the manipulator to finish the gluing. By the method, the working efficiency is improved, the single-layer leather is conveniently integrated to the processing of a production line, the precision is high, and harms to the health of workers can be avoided.

Description

A single-layer leather gluing control method based on binocular vision

Technical Field

The present invention relates to the field of leather gluing methods, and in particular to a single-layer leather gluing method based on binocular vision; more specifically, to a binocular-vision-based control method for single-layer leather gluing that covers the complete workflow from detection and recognition through measurement to gluing.

Background Art

In current industrial practice in China, leather gluing is still mainly done by hand. Uneven manual application and the toxic gases given off by the adhesive are the two main problems. Uneven gluing makes subsequent stitching of the leather difficult and lowers production efficiency, while the toxic gases released during gluing seriously endanger workers' health. Mechanized gluing can therefore both raise the efficiency and quality of leather production and effectively protect workers' health.

Most existing image-processing and binocular-vision techniques are not comprehensive enough: for single-layer leather gluing there is still no complete method covering detection, recognition, measurement and gluing. For example, in the invention of application No. 200810232122.8, the pixel equivalent of the camera is derived from a calibration object of known geometric area and is therefore limited by the calculation accuracy, and the resulting leather area is not necessarily suitable for further industrial use; extracting the leather edge and the machining feature points is clearly more appropriate for industrial applications. The invention of application No. 200710190470.9 uses vision to identify the type of a car windshield and applies glue according to pre-stored parameters; it merely applies visual recognition, requires a cumbersome sampling step to enter unsampled windshields into a feature database, and deals only with the relatively simple shape of a windshield. The invention of application No. 200710123727.9 applies glue along a sequence of three- or two-dimensional points driven by a three-axis servo motion controller, but a large number of points reduces gluing efficiency; for collinear two-dimensional points only the start and end points should be kept, so as to reduce the point count.

Summary of the Invention

To overcome the shortcomings of existing purely manual leather gluing (low working efficiency, poor suitability for assembly-line integration, low precision, and harm to workers' health), the present invention provides a binocular-vision-based single-layer leather gluing control method that improves working efficiency, is easy to integrate into assembly-line processing, achieves high precision and avoids endangering workers' health.

The technical solution adopted by the present invention to solve this technical problem is as follows:

A binocular-vision-based single-layer leather gluing control method, using a binocular camera and a three-axis gluing manipulator, comprising the following steps:

1) Perform stereo calibration of the fixed binocular camera;

2) Acquire a binocular image of the single-layer leather and extract its edge contour;

3) Perform feature extraction on the edge-contour image of step 2):

The single-layer leather edge contour consists of a finite sequence of two-dimensional coordinate points (x_i, y_i), i = 1, 2, …, n, i.e. the contour is made up of n coordinate points. For any point (x_i, y_i) on the contour, its curvature is given by formula (1); compute the curvature K_i of every point, i = 1, 2, …, n:

K_i = \frac{\dot{x}_i\,\ddot{y}_i - \ddot{x}_i\,\dot{y}_i}{\left[\dot{x}_i^{2} + \dot{y}_i^{2}\right]^{3/2}}, \quad i = 1, 2, \ldots, n \qquad (1)

with \dot{x}_i = \frac{x_{i+1} - x_{i-1}}{2}, \dot{y}_i = \frac{y_{i+1} - y_{i-1}}{2}, and \ddot{x}_i, \ddot{y}_i the corresponding second central differences.

where (x_{i+1}, y_{i+1}) is the coordinate point following (x_i, y_i) on the contour, and (x_{i-1}, y_{i-1}) is the point preceding it.
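As a concrete illustration, formula (1) can be evaluated over a whole contour with NumPy. This is a minimal sketch, not the patent's implementation; it assumes a closed contour (indices wrap around) and uses the standard three-point central differences for the second derivatives, which the source formula only implies:

```python
import numpy as np

def curvature(xs, ys):
    """Discrete curvature K_i of a closed contour, after formula (1):
    first derivatives are central differences over (i-1, i+1),
    second derivatives are three-point central differences."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    dx = (np.roll(xs, -1) - np.roll(xs, 1)) / 2.0       # x'_i
    dy = (np.roll(ys, -1) - np.roll(ys, 1)) / 2.0       # y'_i
    ddx = np.roll(xs, -1) - 2.0 * xs + np.roll(xs, 1)   # x''_i
    ddy = np.roll(ys, -1) - 2.0 * ys + np.roll(ys, 1)   # y''_i
    return (dx * ddy - ddx * dy) / (dx**2 + dy**2) ** 1.5

# sanity check: a circle of radius 10 has curvature 1/10 everywhere
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
K = curvature(10.0 * np.cos(t), 10.0 * np.sin(t))
```

On a densely sampled circle of radius R the estimate converges to 1/R, which is a quick way to validate the sign and normalization conventions.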

Feature extraction proceeds as follows:

(3.1) Compute the local curvature threshold T(i) of the contour (the threshold formula appears only as an embedded image in the source document), where B is a proportionality coefficient chosen in [1, 2];

(3.2) Coordinate points satisfying K_i ≥ T(i) are selected as candidate feature points P_i, i = 1, 2, …, s, where s < n is the total number of candidate feature points;
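The exact T(i) formula survives in the source only as an image, so the sketch below substitutes a plausible stand-in: T(i) is taken as B times the mean curvature magnitude over a sliding window around i, a common choice in curvature-based corner detectors. The window size and this threshold form are assumptions, not the patent's definition:

```python
import numpy as np

def candidate_points(K, B=1.5, window=7):
    """Step (3.2): keep indices i with K_i >= T(i).
    ASSUMPTION: T(i) = B * (local mean of |K| over `window` points);
    the patent's own T(i) formula is not recoverable from the text."""
    K = np.abs(np.asarray(K, dtype=float))
    pad = window // 2
    Kp = np.r_[K[-pad:], K, K[:pad]]                     # cyclic padding
    local_mean = np.convolve(Kp, np.ones(window) / window, mode="valid")
    return np.nonzero(K >= B * local_mean)[0]

# a flat curvature profile with one spike: only the spike survives
K = np.full(100, 0.1)
K[50] = 1.0
cands = candidate_points(K)
```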

(3.3) Compute the angle θ_{i+1} formed by the two tangent lines at each candidate feature point: taking candidate point P_i as the start point, P_{i+1} as the end point, and P_{(i+(i+1))/2} as the midpoint of the segment between them, compute the theoretical circle centre C_{i+1} from formula (2); similarly, taking P_{i+1} as the start point, P_{i+2} as the end point and P_{((i+1)+(i+2))/2} as the midpoint, obtain the theoretical circle centre C_{i+2} from formula (2):

x^{C}_{i+1} = \frac{(x_{i+1}^{2} + y_{i+1}^{2})(y_m - y_i) + (x_m^{2} + y_m^{2})(y_i - y_{i+1}) + (x_i^{2} + y_i^{2})(y_{i+1} - y_m)}{2\left[x_{i+1}(y_m - y_i) + x_m(y_i - y_{i+1}) + x_i(y_{i+1} - y_m)\right]}

y^{C}_{i+1} = \frac{(x_{i+1}^{2} + y_{i+1}^{2})(x_m - x_i) + (x_m^{2} + y_m^{2})(x_i - x_{i+1}) + (x_i^{2} + y_i^{2})(x_{i+1} - x_m)}{2\left[y_{i+1}(x_m - x_i) + y_m(x_i - x_{i+1}) + y_i(x_{i+1} - x_m)\right]} \qquad (2)

where (x_m, y_m) abbreviates the midpoint P_{(i+(i+1))/2}.

where (x_{i+1}, y_{i+1}) are the coordinates of P_{i+1}, (x^C_{i+1}, y^C_{i+1}) are the coordinates of C_{i+1}, and (x^C_{i+2}, y^C_{i+2}) are the coordinates of C_{i+2};

Then use formula (3) to compute the angle τ^L_{i+1} between the line P_{i+1}C_{i+1} and the tangent at P_{i+1}, and the angle τ^R_{i+1} between the line P_{i+1}C_{i+2} and the tangent at P_{i+1}:

\tau^{L}_{i+1} = \arctan\frac{y^{C}_{i+1} - y_{i+1}}{x^{C}_{i+1} - x_{i+1}}

\tau^{R}_{i+1} = \arctan\frac{y^{C}_{i+2} - y_{i+1}}{x^{C}_{i+2} - x_{i+1}} \qquad (3)

\theta_{i+1} = \tau^{L}_{i+1} + \tau^{R}_{i+1}

where θ_{i+1} is the sum of the two angles τ^L_{i+1} and τ^R_{i+1};

(3.4) Use the value of θ_{i+1} to decide whether a candidate is a required feature point, and mark the shape of the segment between feature points: if θ_{i+1} is greater than 170 degrees, the local contour curvature at that point is considered to change insignificantly, the point is not a required feature point, and the segment between feature points is marked Straight; otherwise the point is a required feature point and the segment between feature points is marked Arc;
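The geometric test of steps (3.3) and (3.4) can be sketched as follows. `circumcenter` implements formula (2) in its standard three-point form; the angle at P_{i+1} is computed here with `atan2` between the two centre directions, which is a numerically safer stand-in for the arctan sum of formula (3) (the quadrant handling of the original is not specified):

```python
import math

def circumcenter(p1, p2, p3):
    """Centre of the circle through three points: formula (2), applied to
    (P_i, midpoint, P_{i+1}) and then to (P_{i+1}, midpoint, P_{i+2})."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return ux, uy

def corner_angle_deg(c_left, p, c_right):
    """Angle at p between the directions to the two circle centres,
    used as a stand-in for theta_{i+1} of formula (3)."""
    a = math.atan2(c_left[1] - p[1], c_left[0] - p[0])
    b = math.atan2(c_right[1] - p[1], c_right[0] - p[0])
    ang = abs(a - b)
    return math.degrees(min(ang, 2.0 * math.pi - ang))

def segment_shape(theta_deg):
    """Step (3.4): a nearly flat local contour (> 170 deg) is Straight."""
    return "Straight" if theta_deg > 170.0 else "Arc"
```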

4) Match the feature points of step 3). Feature matching proceeds as follows:

(4.1) Take the feature points P^l_i of the left image of the binocular pair as the reference, i = 0, 1, 2, …, m, where m < s, m being the number of feature points in the left image;

(4.2) Compute the Euclidean distance to every feature point P^r_i of the right image lying within 2 pixels of the rectified (row-aligned) row, i = 0, 1, 2, …, k, where k < m, k being the number of right-image feature points (out of m) that fall within the 2-pixel band;

(4.3) Take the right-image feature point P^r with the smallest Euclidean distance as the match, thereby establishing the one-to-one correspondence between feature points of the left and right images of the single-layer leather edge contour;
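Steps (4.1) to (4.3) amount to a nearest-neighbour search restricted to a 2-pixel band around the rectified row. A direct sketch (the point lists are hypothetical example data, not from the patent):

```python
import math

def match_features(left_pts, right_pts, row_tol=2.0):
    """For each left-image feature point, consider only right-image points
    whose row (y) lies within row_tol pixels, and keep the one at minimal
    Euclidean distance; returns (left_index, right_index) pairs."""
    matches = []
    for i, (xl, yl) in enumerate(left_pts):
        best_j, best_d = None, float("inf")
        for j, (xr, yr) in enumerate(right_pts):
            if abs(yr - yl) > row_tol:      # outside the aligned-row band
                continue
            d = math.hypot(xr - xl, yr - yl)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
    return matches

pairs = match_features([(10.0, 5.0), (20.0, 9.0)],
                       [(8.0, 5.0), (19.0, 10.0), (7.0, 30.0)])
```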

5) Use formula (7) to recover the real coordinate values, in the gluing-platform coordinate system, of the feature points matched in step 4);

6) Feed the real coordinate values of the feature points into the gluing-coordinate queue of the three-axis gluing manipulator, which controls the manipulator to complete the gluing process.

Further, in step 6), gluing between feature points is controlled according to the segment shapes of step 3) and formula (4):

L(P_i, P_{i+1}) = \begin{cases} \text{Straight} \\ \text{Arc} \end{cases}, \qquad i = 0, 1, 2, \ldots, m-1 \qquad (4)

where L(P_i, P_{i+1}) denotes the shape of the single-layer leather edge segment between the feature points P_i and P_{i+1}; Straight denotes a straight segment, glued by linear interpolation; Arc denotes an arc segment, glued by curve interpolation; and i = 0, 1, 2, …, m-1, with m edge-contour feature points in total.
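Formula (4) then selects the interpolation mode per segment. A toy dispatcher (the command names are invented for illustration and are not the manipulator's real instruction set):

```python
def glue_commands(points, shapes):
    """For each consecutive feature-point pair, emit a linear-interpolation
    move for 'Straight' and a curve-interpolation move for 'Arc'.
    'LINE'/'CURVE' are placeholder names, not a real controller API."""
    cmds = []
    for (p, q), shape in zip(zip(points, points[1:]), shapes):
        cmds.append(("LINE" if shape == "Straight" else "CURVE", p, q))
    return cmds

cmds = glue_commands([(0, 0), (10, 0), (12, 5)], ["Straight", "Arc"])
```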

Still further, in step 1), calibration objects are photographed in different poses to obtain images for calibration analysis, and the intrinsic parameters (k, l, u_0, v_0, f, θ) and extrinsic parameters of each camera are obtained from the linear camera model (5); the 3×4 matrix M_{3×4} obtained by multiplying the intrinsic and extrinsic parameter matrices is the perspective projection matrix:

\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \frac{1}{\lambda} \begin{pmatrix} f/k & -(f/k)\cot\theta & u_0 & 0 \\ 0 & f/(l\sin\theta) & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} R_{3\times 3} & t_{3\times 1} \\ 0^{T} & 1 \end{pmatrix} \begin{pmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{pmatrix} = M_{3\times 4} \begin{pmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{pmatrix} \qquad (5)

where k, l are the pixel sizes of the binocular camera, (u_0, v_0) are the coordinates of the camera's optical-axis centre, f is the focal length, θ is the skew angle of the camera coordinate system (normally 90°), R_{3×3} is the camera rotation matrix, t_{3×1} is the camera translation matrix, (u, v, 1)^T is any point in the image, and (X_W, Y_W, Z_W, 1)^T are the gluing-coordinate-system coordinates corresponding to that image point;
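Formula (5) composes the intrinsic and extrinsic matrices into M_{3×4}. The sketch below uses made-up parameter values (focal length, pixel size, principal point) and an identity extrinsic matrix, purely to show the composition and the homogeneous division:

```python
import numpy as np

# illustrative intrinsics; none of these numbers come from the patent
f, k, l = 8e-3, 5e-6, 5e-6      # focal length and pixel sizes [m]
u0, v0 = 320.0, 240.0           # optical-axis centre [pixels]
theta = np.pi / 2.0             # skew angle, normally 90 degrees

K_mat = np.array([[f / k, -(f / k) / np.tan(theta), u0, 0.0],
                  [0.0,   f / (l * np.sin(theta)),  v0, 0.0],
                  [0.0,   0.0,                      1.0, 0.0]])
Rt = np.eye(4)                  # extrinsics [R t; 0^T 1], identity here
M = K_mat @ Rt                  # perspective projection matrix M_{3x4}

Xw = np.array([0.1, 0.05, 1.0, 1.0])   # homogeneous world point
u, v, w = M @ Xw
u, v = u / w, v / w             # pixel coordinates after homogeneous division
```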

The image rectification process is as follows:

(1.1) Use formula (6) and the intrinsic and extrinsic parameters of the left and right cameras to obtain the rotation and translation of the right camera relative to the left camera, and rectify the left and right images into row alignment;

R = R_r (R_l)^T

T = T_r - R\,T_l \qquad (6)

where R_l, R_r are the rotation matrices of the left and right cameras, T_l, T_r are their translation matrices, and R, T are the rotation and translation of the right camera relative to the left camera.
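Formula (6) in NumPy, with a toy configuration (identity rotations, a 120 mm baseline) standing in for real calibration output:

```python
import numpy as np

def relative_pose(Rl, Tl, Rr, Tr):
    """Formula (6): pose of the right camera relative to the left,
    R = R_r R_l^T and T = T_r - R T_l."""
    R = Rr @ Rl.T
    T = Tr - R @ Tl
    return R, T

# toy example: both cameras unrotated, right camera 0.12 m along x
R, T = relative_pose(np.eye(3), np.zeros(3),
                     np.eye(3), np.array([0.12, 0.0, 0.0]))
```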

(1.2) Obtain the reprojection matrix Q from formula (7) and the camera parameters above:

Q = \begin{pmatrix} 1 & 0 & 0 & -c_x \\ 0 & 1 & 0 & -c_y \\ 0 & 0 & 0 & f \\ 0 & 0 & 1/T_x & 0 \end{pmatrix} \qquad (7)

where T_x is the x-axis component of the binocular extrinsic translation vector T, c_x and c_y are the world-coordinate values of the left camera's optical centre, and f is the focal length of the left camera.
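Formula (7) turns a rectified pixel and its disparity into a 3D point via Q·(u, v, d, 1)^T followed by division by the homogeneous coordinate. The sign of the 1/T_x entry is kept as printed in the patent; OpenCV's stereoRectify, for comparison, emits -1/T_x plus an extra principal-point-difference term. All numbers below are illustrative:

```python
import numpy as np

cx, cy, f, Tx = 320.0, 240.0, 1600.0, 0.12   # illustrative parameters

Q = np.array([[1.0, 0.0, 0.0,      -cx],
              [0.0, 1.0, 0.0,      -cy],
              [0.0, 0.0, 0.0,        f],
              [0.0, 0.0, 1.0 / Tx, 0.0]])    # sign convention as in the text

u, v, d = 480.0, 320.0, 16.0                 # rectified pixel and disparity
X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
X, Y, Z = X / W, Y / W, Z / W                # 3D point; depth Z = f * Tx / d
```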

In step 2), the single-layer leather image is threshold-filtered according to the different RGB values of leather and noise and converted to a grayscale image. The specific process is:

(2.1) Smooth the image with a Gaussian function;

(2.2) Apply the closing operation of mathematical morphology;

(2.3) Extract the single-pixel leather edge contour with the Canny operator.
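Steps (2.1) to (2.3) would usually be a few OpenCV calls (GaussianBlur, morphologyEx with MORPH_CLOSE, Canny). To keep the sketch dependency-free, the closing operation alone is shown below in plain NumPy; it is a stand-in for the real pipeline, not the patent's code:

```python
import numpy as np

def binary_closing(img, k=3):
    """Morphological closing (dilation, then erosion) with a k x k square
    structuring element: fills small holes in a binary leather mask."""
    pad = k // 2

    def sweep(a, op):
        # apply op (max for dilation, min for erosion) over all k*k shifts
        p = np.pad(a, pad, mode="edge")
        out = p[0:a.shape[0], 0:a.shape[1]].copy()
        for dy in range(k):
            for dx in range(k):
                out = op(out, p[dy:dy + a.shape[0], dx:dx + a.shape[1]])
        return out

    return sweep(sweep(img, np.maximum), np.minimum)

# a leather-like mask with a one-pixel noise hole: closing fills it
mask = np.ones((5, 5), dtype=np.uint8)
mask[2, 2] = 0
closed = binary_closing(mask)
```

In production one would instead call cv2.GaussianBlur, cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel) and cv2.Canny.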

In step 2), light sources are installed around the gluing platform, and the binocular camera is mounted directly above the centre of the platform.

The technical idea of the present invention is as follows: using vision and image technology, image data are acquired by a binocular camera and analysed, yielding fairly accurate leather feature points that are converted into the machining coordinates of the gluing device. The advantages of this method are that the gluing is non-contact, easy to implement, low-cost, highly accurate, and harmless to the human body.

The edge shape of single-layer leather is unknown in advance: processed leather has a fairly regular edge, whereas untreated leather has a more complex one. Besides edge detection, an improved curvature-based method is therefore needed to identify edge-contour feature points. During gluing, the sequence of feature points serves as the gluing control points, and the edge segments between feature points determine whether the three-axis manipulator glues along a straight line or a curve. In accordance with the above aims and requirements, the present invention effectively realizes a complete single-layer leather gluing method covering detection, recognition, measurement and gluing.

Integrating this vision-based sensing technology into the traditional leather gluing process effectively raises its level of intelligence and automation, avoids the potential harm to workers' health caused by the gluing process, and realizes a closed-loop control flow, protecting workers' health while increasing production efficiency.

The image of the single-layer leather to be glued is acquired by a binocular camera fixed directly above the centre of the gluing platform; the light sources provide high image quality. The image is smoothed and denoised by threshold filtering, a Gaussian function and the closing operation to obtain a fairly smooth grayscale leather image; the single-pixel leather edge contour is extracted with the Canny operator; the edge-contour feature points are then extracted with the improved curvature-based algorithm and matched; finally, the matched feature points are reprojected to their coordinates in the gluing coordinate system, and the gluing process is completed using the segment shapes of the edge contour between feature points.

The beneficial effects of the present invention are mainly: 1. the method covers the complete flow from detection, recognition and measurement to gluing, which favours industrial automation of gluing and requires no worker involvement; 2. the binocular-vision gluing method is non-contact and highly precise; 3. the improved curvature-based feature-point extraction algorithm extracts feature points effectively and reduces the number of coordinates fed into the manipulator's machining sequence, increasing gluing speed.

Brief Description of the Drawings

Fig. 1 is a schematic diagram of the gluing apparatus of the present invention.

Fig. 2 is a flow chart of the gluing method of the present invention.

Fig. 3 shows the edge-extraction results of the left and right cameras; the left column is the result of the left camera and the right column that of the right camera. (1), (2) and (3) denote the first, second and third example leather samples.

Fig. 4 shows the edge feature-extraction results of the left and right cameras; the black boxes mark the extracted features, the left column is the result of the left camera and the right column that of the right camera. (1), (2) and (3) denote the first, second and third example leather samples.

Fig. 5 illustrates the principle of the angle θ_{i+1} formed by the two tangent lines at a candidate feature point.

Fig. 6 shows the matching results of the left and right feature points; the black numbers mark the paired feature points, the left column is the result for the left image and the right column that for the right image. (1), (2) and (3) denote the first, second and third example leather samples.

Fig. 7 shows the sign-quadrant distribution of the reconstruction results.

Fig. 8 shows the error analysis between the reconstruction results and the actual values.

Detailed Description of the Embodiments

The present invention is further described below with reference to the accompanying drawings.

Referring to Figs. 1 to 8, a binocular-vision-based single-layer leather gluing control method uses a binocular camera and a three-axis gluing manipulator and comprises the following steps:

1) Perform stereo calibration of the fixed binocular camera;

2) Acquire a binocular image of the single-layer leather and extract its edge contour;

3) Perform feature extraction on the edge-contour image of step 2):

The single-layer leather edge contour consists of a finite sequence of two-dimensional coordinate points (x_i, y_i), i = 1, 2, …, n, i.e. the contour is made up of n coordinate points. For any point (x_i, y_i) on the contour, its curvature is given by formula (1); compute the curvature K_i of every point, i = 1, 2, …, n:

K_i = \frac{\dot{x}_i\,\ddot{y}_i - \ddot{x}_i\,\dot{y}_i}{\left[\dot{x}_i^{2} + \dot{y}_i^{2}\right]^{3/2}}, \quad i = 1, 2, \ldots, n \qquad (1)

with \dot{x}_i = \frac{x_{i+1} - x_{i-1}}{2}, \dot{y}_i = \frac{y_{i+1} - y_{i-1}}{2}, and \ddot{x}_i, \ddot{y}_i the corresponding second central differences.

where (x_{i+1}, y_{i+1}) is the coordinate point following (x_i, y_i) on the contour, and (x_{i-1}, y_{i-1}) is the point preceding it.

Feature extraction proceeds as follows:

(3.1) Compute the local curvature threshold T(i) of the contour (the threshold formula appears only as an embedded image in the source document), where B is a proportionality coefficient chosen in [1, 2];

(3.2) Coordinate points satisfying K_i ≥ T(i) are selected as candidate feature points P_i, i = 1, 2, …, s, where s < n is the total number of candidate feature points;

(3.3) Compute the angle θ_{i+1} formed by the two tangent lines at each candidate feature point: taking candidate point P_i as the start point, P_{i+1} as the end point, and P_{(i+(i+1))/2} as the midpoint of the segment between them, compute the theoretical circle centre C_{i+1} from formula (2); similarly, taking P_{i+1} as the start point, P_{i+2} as the end point and P_{((i+1)+(i+2))/2} as the midpoint, obtain the theoretical circle centre C_{i+2} from formula (2):

x^{C}_{i+1} = \frac{(x_{i+1}^{2} + y_{i+1}^{2})(y_m - y_i) + (x_m^{2} + y_m^{2})(y_i - y_{i+1}) + (x_i^{2} + y_i^{2})(y_{i+1} - y_m)}{2\left[x_{i+1}(y_m - y_i) + x_m(y_i - y_{i+1}) + x_i(y_{i+1} - y_m)\right]}

y^{C}_{i+1} = \frac{(x_{i+1}^{2} + y_{i+1}^{2})(x_m - x_i) + (x_m^{2} + y_m^{2})(x_i - x_{i+1}) + (x_i^{2} + y_i^{2})(x_{i+1} - x_m)}{2\left[y_{i+1}(x_m - x_i) + y_m(x_i - x_{i+1}) + y_i(x_{i+1} - x_m)\right]} \qquad (2)

where (x_m, y_m) abbreviates the midpoint P_{(i+(i+1))/2}.

where (x_{i+1}, y_{i+1}) are the coordinates of P_{i+1}, (x^C_{i+1}, y^C_{i+1}) are the coordinates of C_{i+1}, and (x^C_{i+2}, y^C_{i+2}) are the coordinates of C_{i+2};

Then use formula (3) to compute the angle τ^L_{i+1} between the line P_{i+1}C_{i+1} and the tangent at P_{i+1}, and the angle τ^R_{i+1} between the line P_{i+1}C_{i+2} and the tangent at P_{i+1}:

\tau^{L}_{i+1} = \arctan\frac{y^{C}_{i+1} - y_{i+1}}{x^{C}_{i+1} - x_{i+1}}

\tau^{R}_{i+1} = \arctan\frac{y^{C}_{i+2} - y_{i+1}}{x^{C}_{i+2} - x_{i+1}} \qquad (3)

where θ_{i+1} is the sum of the two angles τ^L_{i+1} and τ^R_{i+1};

(3.4) Use the value of θ_{i+1} to decide whether a candidate is a required feature point, and mark the shape of the segment between feature points: if θ_{i+1} is greater than 170 degrees, the local contour curvature at that point is considered to change insignificantly, the point is not a required feature point, and the segment between feature points is marked Straight; otherwise the point is a required feature point and the segment between feature points is marked Arc;

4) Match the feature points of step 3). Feature matching proceeds as follows:

(4.1) Take the feature points of the left image of the binocular pair as the reference, i = 0, 1, 2, …, m, where m < s, m being the number of feature points in the left image;

(4.2) Compute the Euclidean distance to every feature point of the right image lying within 2 pixels of the rectified (row-aligned) row, i = 0, 1, 2, …, k, where k < m, k being the number of right-image feature points (out of m) that fall within the 2-pixel band;

(4.3) Take the right-image feature point with the smallest Euclidean distance as the match, thereby establishing the one-to-one correspondence between feature points of the left and right images of the single-layer leather edge contour;

5) Restore, by means of the reprojection matrix of formula (6), the true coordinate values in the gluing platform system corresponding to the feature points of step 4);

6) Input the true coordinate values of the feature points into the gluing coordinate queue of the three-axis gluing mechanical arm, so that the arm is controlled to complete the gluing process.

In this embodiment, four light sources are installed around the gluing platform to eliminate shadows at the edge of the leather to be glued and provide a clearer image. Directly above the centre of the platform is a binocular vision acquisition device composed of two cameras; the device is connected to the host computer by USB, and the computer performs the subsequent image acquisition and analysis and completes the single-layer leather gluing function.

1) Perform stereo calibration of the fixed binocular camera and rectify the images. The binocular camera is fixed directly above the centre of the gluing platform, and images for calibration analysis are obtained by shooting a calibration object (a black-and-white chessboard calibration board with known square size) in different poses. From the linear camera model, formula (4), of (Heikkila, Silven, "A Four-step Camera Calibration Procedure with Implicit Image Correction", CVPR97, p. 1106-1112), the intrinsic parameters (k, l, u0, v0, f, θ) and extrinsic parameters (rotation matrix R3×3, translation matrix t3×1) of each camera are obtained. The 3×4 matrix M3×4 obtained by multiplying the intrinsic and extrinsic parameter matrices is the perspective projection matrix.

$$\begin{pmatrix}u\\v\\1\end{pmatrix}=\frac{1}{\lambda}\begin{pmatrix}\frac{f}{k}&-\frac{f}{k}\cot\theta&u_{0}&0\\0&\frac{f}{l\sin\theta}&v_{0}&0\\0&0&1&0\end{pmatrix}\begin{pmatrix}R_{3\times3}&t_{3\times1}\\0^{T}&1\end{pmatrix}\begin{pmatrix}X_{W}\\Y_{W}\\Z_{W}\\1\end{pmatrix}\qquad(4)$$

where k, l are the pixel dimensions of the binocular camera, u_0, v_0 are the coordinates of the centre of the camera's optical axis, f is the focal length, θ is the skew of the camera coordinate system (generally 90°), R_{3×3} is the camera's rotation matrix, t_{3×1} its translation matrix, (u v 1)^T is any point in the image, and (X_W Y_W Z_W 1)^T the corresponding point in the gluing coordinate system. The M_{3×4} matrix of the left and right cameras computed from formula (4) is:

$$M_{3\times4}=\begin{pmatrix}840.95380999999998&0&394.97588729858398&0\\0&840.95380999999998&243.11664962768555&0\\0&0&1&0\end{pmatrix}$$
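The construction of the perspective projection matrix can be sketched as follows. This is a minimal pure-Python sketch, assuming the standard intrinsics-times-extrinsics product of formula (4); the helper names and numeric values are illustrative, not from the original system:

```python
import math

def intrinsic_matrix(f, k, l, u0, v0, theta):
    # 3x4 intrinsic part of formula (4); theta is the skew angle of the
    # camera coordinate system (90 degrees makes cot(theta) vanish)
    return [
        [f / k, -(f / k) / math.tan(theta), u0, 0.0],
        [0.0, f / (l * math.sin(theta)), v0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
    ]

def extrinsic_matrix(R, t):
    # 4x4 matrix [R t; 0^T 1] from rotation R (3x3) and translation t (length 3)
    return [R[i][:] + [t[i]] for i in range(3)] + [[0.0, 0.0, 0.0, 1.0]]

def matmul(A, B):
    # plain matrix product, sufficient for these small matrices
    return [[sum(A[i][r] * B[r][j] for r in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def projection_matrix(f, k, l, u0, v0, theta, R, t):
    # perspective projection matrix M_3x4 = intrinsics . extrinsics
    return matmul(intrinsic_matrix(f, k, l, u0, v0, theta), extrinsic_matrix(R, t))
```

With an identity extrinsic pose (camera frame equal to world frame) the result reduces to the intrinsic matrix, which matches the shape of the calibrated M_{3×4} above.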

(1.1) From the extrinsic parameters of the binocular camera, formula (5) gives the rotation and translation of the right camera relative to the left camera, and the left and right images are rectified so that their rows are aligned.

$$R=R_{r}(R_{l})^{T},\qquad T=T_{r}-RT_{l}\qquad(5)$$

where R_l, R_r are the rotation matrices of the left and right cameras, T_l, T_r their translation matrices, and R, T the rotation matrix and translation matrix of the right camera relative to the left camera. The result obtained from formula (5) is:

$$R=\begin{pmatrix}0.9995178471005625&0.0215390900270048&-0.0223638308047886\\-0.0220500783453662&0.9994954940280076&-0.0228593845646005\\0.0218601777764921&0.0233414870674015&0.9994885230001706\end{pmatrix}$$

$$T=(-1.1089262292088367,\;0.0631498162929007,\;0.0921180076963402)^{T}$$
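Formula (5) can be checked with a small sketch; the pure-Python helpers below are mine, not part of the original system:

```python
def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul3(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(3)) for j in range(3)]
            for i in range(3)]

def relative_pose(Rl, Tl, Rr, Tr):
    # formula (5): R = Rr . Rl^T and T = Tr - R . Tl give the pose of the
    # right camera relative to the left camera
    R = matmul3(Rr, transpose(Rl))
    T = [Tr[i] - sum(R[i][j] * Tl[j] for j in range(3)) for i in range(3)]
    return R, T
```

Two cameras with the same orientation and an offset along x should yield an identity relative rotation and a pure x translation, which is exactly what the test case checks.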

(1.2) The reprojection matrix Q is obtained through formula (6) and the camera parameters above:

QQ == 11 00 00 -- cc xx 00 11 00 -- cc ythe y 00 00 00 ff 00 00 11 // TT xx 00 -- -- -- (( 66 ))

where T_x is the x-axis component of the binocular extrinsic translation vector T, c_x and c_y are the coordinates of the optical centre of the left camera, and f is the focal length of the left camera. The result obtained from formula (6) is:

$$Q=\begin{pmatrix}1&0&0&-394.97588729858398\\0&1&0&-243.11664962768555\\0&0&0&840.95380999999998\\0&0&0.9022405163464954&0\end{pmatrix}$$
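A matched pixel pair can then be turned into a 3D point by expanding the product [X Y Z W]^T = Q·[u v d 1]^T for a Q of the shape in formula (6), where d is the horizontal disparity between the left and right image points. The sketch below is generic (sign conventions for T_x vary between toolkits), and the test values are synthetic, not the calibrated ones above:

```python
def reproject(u, v, d, cx, cy, f, inv_tx):
    # [X Y Z W]^T = Q [u v d 1]^T with Q from formula (6), then dehomogenise;
    # inv_tx is the Q[3][2] entry, i.e. 1/Tx
    X = u - cx
    Y = v - cy
    Z = f
    W = d * inv_tx
    return (X / W, Y / W, Z / W)
```

A pixel 10 columns right of the principal point with disparity 2 and unit 1/T_x reprojects to X = 5 and depth Z = f/2, illustrating the inverse relationship between disparity and depth.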

2) Collect the binocular images of the single-layer leather and extract their edge contours. The contour extraction result is shown in Figure 3.

(2.1) Owing to the light sources of the gluing platform, the single-layer leather image is threshold-filtered according to the different RGB values of leather and noise. Threshold filtering selects a suitable grey value in the interval [0, 1, 2, …, 255]; pixels above that grey value are retained;

(2.2) A Gaussian function is used to smooth the image, reducing the effect of any remaining noise on the leather edge. A 3×3 Gaussian kernel (G. Bradski, A. Kaebler, "Learning OpenCV", 2008, 110) is applied to the threshold-filtered grey image;

(2.3) The closing operation of mathematical morphology is used to extract as complete an edge as possible while preserving its irregularity (reference: G. Bradski, A. Kaebler, "Learning OpenCV", 2008, 120-121). With a 3×3 rectangular structuring element, one iteration of dilation followed by erosion is applied to the smoothed image, making it smoother still;

(2.4) Single-pixel edge extraction of the single-layer leather contour is performed with the Canny operator (Fleck, "Some defects in finite-difference edge finders", IEEE PAMI, 1992, 14(3): 337-345), with the low and high thresholds of the Canny operator set to [0.3 0.35].
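Steps (2.1) and (2.3) can be sketched without any imaging library. The sketch below implements threshold filtering and a 3×3 morphological closing on a small grid; in practice the Gaussian smoothing and Canny steps would come from an image-processing library such as OpenCV, so this is only an illustration of the morphology:

```python
def threshold_filter(gray, t):
    # step (2.1): keep pixels whose grey value exceeds t
    return [[1 if p > t else 0 for p in row] for row in gray]

def _morph(img, op):
    # apply op (max = dilation, min = erosion) over a 3x3 window, clipped at borders
    h, w = len(img), len(img[0])
    return [[op(img[i + di][j + dj]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if 0 <= i + di < h and 0 <= j + dj < w)
             for j in range(w)] for i in range(h)]

def close_binary(img):
    # step (2.3): closing = one dilation followed by one erosion (3x3 rectangle)
    return _morph(_morph(img, max), min)
```

Closing fills small holes and gaps in the thresholded leather silhouette, which is why it is applied before single-pixel edge tracing.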

3) Perform feature extraction on the edge contour image of step 2). The single-layer leather edge contour consists of a finite set of two-dimensional coordinate pairs (x_i, y_i); an improved method based on the curvature between contour coordinate pairs (X. C. He, "Curvature Scale Space Corner Detector with Adaptive Threshold and Dynamic Region of Support", ICPR, 2004, vol. 2: 791-794) is used to extract the contour's feature points. For any coordinate pair (x_i, y_i) on the contour, i = 1, 2, …, n, the contour consisting of n coordinate pairs, the curvature is computed by formula (1). The feature point extraction result is shown in Figure 4, where black-bordered points denote feature points.

$$K_{i}=\frac{\dot{x}_{i}\ddot{y}_{i}-\ddot{x}_{i}\dot{y}_{i}}{\left(\dot{x}_{i}^{2}+\dot{y}_{i}^{2}\right)^{3/2}},\quad i=1,2,\ldots,n\qquad(1)$$

with $\dot{x}_{i}=\frac{x_{i+1}-x_{i-1}}{2}$, $\dot{y}_{i}=\frac{y_{i+1}-y_{i-1}}{2}$ the first central differences and $\ddot{x}_{i}$, $\ddot{y}_{i}$ the corresponding second differences.
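A common discretisation of the curvature in formula (1) uses first and second central differences along the contour. The sketch below assumes that standard difference scheme (the patent's exact discretisation is only partly legible) and returns the curvature magnitude at each interior point:

```python
def curvatures(pts):
    # discrete curvature |x'y'' - x''y'| / (x'^2 + y'^2)^(3/2) at interior points
    ks = []
    for i in range(1, len(pts) - 1):
        (xp, yp), (xc, yc), (xn, yn) = pts[i - 1], pts[i], pts[i + 1]
        dx, dy = (xn - xp) / 2.0, (yn - yp) / 2.0          # first central differences
        ddx, ddy = xn - 2.0 * xc + xp, yn - 2.0 * yc + yp  # second differences
        denom = (dx * dx + dy * dy) ** 1.5
        ks.append(abs(dx * ddy - ddx * dy) / denom if denom else 0.0)
    return ks
```

On a straight run of contour points the result is exactly zero, and on points sampled from a unit circle it approaches 1, matching the intuition that high K_i flags corners.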

(3.1) Compute the local curvature threshold T(i) of the contour, taken proportional to the mean curvature of the local neighbourhood; the proportionality coefficient B is reasonably chosen in [1, 2], and is set to 1.5 in this invention;

(3.2) Coordinate points satisfying K_i ≥ T(i) are selected as candidate feature points P_i, i = 1, 2, …, s, where s < n is the total number of candidate feature points;

(3.3) Compute the included angle θ_{i+1} formed by the two tangents of each candidate feature point: with candidate point P_i as starting point and P_{i+1} as end point, the contour point P_{(i+(i+1))/2} midway between them gives, by formula (2), the theoretical circle centre (x_{i+1}^C, y_{i+1}^C); likewise, with P_{i+1} as starting point and P_{i+2} as end point, the midpoint P_{((i+1)+(i+2))/2} gives by formula (2) the theoretical circle centre (x_{i+2}^C, y_{i+2}^C):

$$x_{i+1}^{C}=\frac{(x_{i+1}^{2}+y_{i+1}^{2})(y_{m}-y_{i})+(x_{m}^{2}+y_{m}^{2})(y_{i}-y_{i+1})+(x_{i}^{2}+y_{i}^{2})(y_{i+1}-y_{m})}{2\left[x_{i+1}(y_{m}-y_{i})+x_{m}(y_{i}-y_{i+1})+x_{i}(y_{i+1}-y_{m})\right]}\qquad(2)$$

$$y_{i+1}^{C}=\frac{(x_{i+1}^{2}+y_{i+1}^{2})(x_{m}-x_{i})+(x_{m}^{2}+y_{m}^{2})(x_{i}-x_{i+1})+(x_{i}^{2}+y_{i}^{2})(x_{i+1}-x_{m})}{2\left[y_{i+1}(x_{m}-x_{i})+y_{m}(x_{i}-x_{i+1})+y_{i}(x_{i+1}-x_{m})\right]}$$

where the subscript m = (i+(i+1))/2 denotes the midpoint index.

where (x_{i+1}, y_{i+1}) are the coordinates of P_{i+1}, (x_{i+1}^C, y_{i+1}^C) the coordinates of the first theoretical circle centre, and (x_{i+2}^C, y_{i+2}^C) those of the second;
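Formula (2) is the classical circumcentre of three non-collinear points, which can be sketched directly (generic point names rather than the patent's indices):

```python
def circumcenter(p1, p2, p3):
    # centre of the circle through three non-collinear points, as in formula (2)
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = 2.0 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0.0:
        raise ValueError("points are collinear")
    s1, s2, s3 = x1 * x1 + y1 * y1, x2 * x2 + y2 * y2, x3 * x3 + y3 * y3
    ux = (s1 * (y2 - y3) + s2 * (y3 - y1) + s3 * (y1 - y2)) / d
    uy = (s1 * (x3 - x2) + s2 * (x1 - x3) + s3 * (x2 - x1)) / d
    return ux, uy
```

Three points on a unit circle around the origin return (0, 0), and the right triangle (0,0), (2,0), (0,2) returns its hypotenuse midpoint (1, 1), as expected of a circumcentre.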

Formula (3) then gives the angle τ_{i+1}^L between the line joining P_{i+1} to the first circle centre and the tangent at P_{i+1}, and the angle τ_{i+1}^R between the line joining P_{i+1} to the second circle centre and the tangent at P_{i+1}:

$$\tau_{i+1}^{L}=\arctan\frac{y_{i+1}^{C}-y_{i+1}}{x_{i+1}^{C}-x_{i+1}},\qquad \tau_{i+1}^{R}=\arctan\frac{y_{i+2}^{C}-y_{i+1}}{x_{i+2}^{C}-x_{i+1}}\qquad(3)$$

where θ_{i+1} is the sum of the two angles τ_{i+1}^{L} and τ_{i+1}^{R};

(3.4) Judge from the value of θ_{i+1} whether the point is a required feature point, and mark the shape of the line segment between feature points: if θ_{i+1} is greater than 170 degrees, the curvature of the local contour at that point is considered to change too little for it to be a required feature point, and the segment between the feature points is marked Straight; otherwise the point is a required feature point and the segment is marked Arc.
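Rule (3.4) reduces to a simple threshold test over the tangent angles. A minimal sketch follows; the 170-degree threshold is the patent's, while the function name and return structure are mine:

```python
def mark_segments(thetas_deg, threshold=170.0):
    # step (3.4): candidate points with theta above the threshold are dropped
    # (locally flat contour, segment marked "Straight"); the rest are kept as
    # feature points and the following segment is marked "Arc"
    kept, shapes = [], []
    for i, theta in enumerate(thetas_deg):
        if theta > threshold:
            shapes.append("Straight")
        else:
            kept.append(i)
            shapes.append("Arc")
    return kept, shapes
```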

4) Match the feature points of step 3). The feature matching results for the left and right images are shown in Figure 6, where black-bordered points denote feature points.

(4.1) Take the feature points of the left image as the reference (i = 0, 1, 2, …, m; m < s means the left image has m feature points);

(4.2) Compute the Euclidean distances to all feature points of the right image within 2 pixels of the aligned row (i = 0, 1, 2, …, k; k < m means that k of the right image's m feature points lie within the 2-pixel range);

(4.3) Take the right-image feature point with the smallest Euclidean distance as the match, thereby establishing the one-to-one correspondence between the feature points of the left and right images of the single-layer leather edge contour. The Euclidean distance is computed by formula (8) (reference: Yang Shuying, "Pattern Recognition and Intelligent Computing - Matlab Technology Implementation", 2008: 54).

$$\mathrm{Euclidean}_{i}=\sqrt{(x_{L}^{i}-x_{R}^{i})^{2}+(y_{L}^{i}-y_{R}^{i})^{2}}\qquad(8)$$

where Euclidean_i denotes the minimum Euclidean distance of the matched pair, (x_L^i, y_L^i) are the coordinates of the left-image feature point, and (x_R^i, y_R^i) those of the right-image feature point.
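Steps (4.1)-(4.3) amount to a row-constrained nearest-neighbour search over the rectified image pair. A sketch, with feature points as (x, y) tuples and names of my choosing:

```python
import math

def match_features(left_pts, right_pts, row_tol=2):
    # for each left-image feature, consider right-image features within row_tol
    # pixels of the same (rectified) row and keep the one at minimum Euclidean
    # distance, as in formula (8)
    matches = []
    for lx, ly in left_pts:
        candidates = [p for p in right_pts if abs(p[1] - ly) <= row_tol]
        if candidates:
            best = min(candidates, key=lambda p: math.hypot(lx - p[0], ly - p[1]))
            matches.append(((lx, ly), best))
    return matches
```

The row tolerance implements the 2-pixel epipolar band: a right-image point far from the left point's row is never considered, however close it is horizontally.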

5) Restore, by the reprojection matrix of formula (6), the true coordinate values in the gluing platform system corresponding to the feature points of step 4). For each feature point, its image coordinates in the left and right leather images obtained in step 4), together with the reprojection matrix of formula (6), restore the true coordinates in the gluing platform system; the gluing platform coordinate system in this invention is precisely the motion coordinate system of the three-axis gluing mechanical arm. For the feature points marked with black numbers in Figure 6 (1)-(3), the restored true coordinates are as follows, where the left and right columns give the pixel coordinates of each feature point and the reconstruction column its true coordinates:

Figure 1:

Point  Left image   Right image  Reconstruction (cm)
1      (149, 187)   (94, 185)    (16.85, 3.62, 50.84)
2      (170, 262)   (115, 259)   (15.41, -1.22, 50.84)
3      (227, 303)   (178, 301)   (12.92, -4.33, 57.07)
4      (225, 316)   (178, 312)   (13.63, -5.5, 59.49)
5      (502, 364)   (448, 361)   (-7.47, -7.94, 51.78)
6      (511, 357)   (454, 355)   (-7.67, -7.09, 49.05)
7      (526, 355)   (469, 353)   (-8.66, -6.96, 49.05)
8      (555, 267)   (496, 267)   (-10.22, -1.44, 47.39)
9      (283, 170)   (227, 167)   (7.54, 4.63, 49.93)
10     (559, 186)   (501, 185)   (-10.66, 3.49, 48.21)
11     (540, 93)    (484, 92)    (-9.76, 9.51, 49.93)
12     (527, 91)    (471, 91)    (-8.88, 9.63, 49.93)
13     (519, 79)    (464, 79)    (-8.5, 10.58, 50.84)
14     (251, 87)    (197, 87)    (9.86, 10.07, 50.84)
15     (243, 101)   (185, 100)   (9.87, 8.69, 48.21)
16     (184, 126)   (128, 125)   (14.2, 7.42, 49.93)

Figure 2:

Point  Left image   Right image  Reconstruction (cm)
1      (167, 223)   (110, 223)   (15.07, 1.25, 49.06)
2      (176, 252)   (117, 251)   (13.99, -0.53, 47.39)
3      (507, 347)   (448, 347)   (-7.16, -6.24, 47.39)
4      (508, 258)   (448, 258)   (-7.1, -0.88, 46.6)
5      (287, 250)   (229, 242)   (7.02, -0.42, 48.21)
6      (292, 176)   (238, 173)   (7.19, 4.41, 51.78)
7      (326, 162)   (259, 162)   (3.88, 4.29, 41.73)
8      (514, 188)   (454, 187)   (-7.48, 3.06, 46.6)
9      (521, 99)    (459, 99)    (-7.66, 8.34, 45.1)
10     (168, 201)   (111, 200)   (15.01, 2.62, 49.06)
11     (197, 213)   (141, 212)   (13.32, 1.91, 49.93)

Figure 3:

Point  Left image   Right image  Reconstruction (cm)
1      (86, 252)    (30, 249)    (20.79, -0.56, 49.93)
2      (197, 349)   (141, 346)   (13.32, -6.71, 49.93)
3      (248, 336)   (187, 334)   (9.08, -5.4, 45.84)
4      (536, 423)   (479, 420)   (-9.32, -11.19, 49.06)
5      (602, 161)   (548, 159)   (-14.45, 5.39, 51.78)
6      (249, 129)   (194, 126)   (10.00, 7.36, 50.84)
7      (201, 170)   (147, 166)   (13.54, 4.8, 51.78)
8      (178, 175)   (131, 172)   (17.4, 5.14, 59.49)
9      (117, 167)   (62, 164)    (19.05, 4.91, 50.84)

In the reconstruction results above, the matrix Q of formula (6) shows that the pixel coordinates corresponding to the optical axis centre of the left camera are (394, 243); combining this with the definition of Q yields the sign-quadrant distribution shown in Figure 7. (+, +) means that pixels with x-coordinate below 394 reconstruct with sign + and pixels with y-coordinate below 243 reconstruct with sign +; (+, -) means x below 394 gives + and y above 243 gives -; (-, -) means x above 394 gives - and y above 243 gives -; (-, +) means x above 394 gives - and y below 243 gives +.

Figure 8 is the error analysis obtained from the matching and reconstruction results of Figure 6. In the reconstruction of Figure 1, the vertical distance between points 4 and 14 is 15.57 against an actual 16 (error 2.7%); between points 5 and 13, 18.52 against 18.8 (error 1.5%); between points 11 and 7, 16.47 against 17 (error 3.2%); between points 10 and 8, 4.93 against 5.1 (error 3.4%). In the reconstruction of Figure 2, the vertical distance between points 3 and 4 is 5.36 against 5.4 (error 0.7%); between points 8 and 9, 5.28 against 5.4 (error 2.2%); between points 10 and 11, 1.83 against 1.88 (error 2.7%); between points 11 and 1, 1.86 against 1.88 (error 1.1%). In the reconstruction of Figure 3, the distance between points 1 and 2 is 9.69 against 10 (error 3.2%); between points 4 and 5, 17.36 against 17.9 (error 3.1%); between points 9 and 1, 5.74 against 5.9 (error 2.8%). The third coordinate of every point is the z-axis value, i.e. the distance from the feature point to the optical axis centre of the left camera, here about 48 cm with an error of roughly 3%, so the method is feasible.
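The percentages above are plain relative errors; for example the first entry of Figure 1:

```python
def percent_error(measured, actual):
    # relative error, as used in the accuracy analysis of Figure 8
    return abs(measured - actual) / actual * 100.0
```

percent_error(15.57, 16) evaluates to about 2.69, which rounds to the 2.7% quoted for points 4 and 14.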

6) Input the true coordinate values of the feature points into the gluing coordinate queue of the three-axis gluing mechanical arm, thereby controlling the arm to complete the gluing process. Since the gluing platform coordinate system is exactly the coordinate system of the gluing arm, the control card converts the coordinate value of each feature point of the single-layer leather edge contour into a machining sequence, and gluing between feature points follows the segment shapes of step 3) and formula (7).

$$L(P_{i},P_{i+1})=\begin{cases}\text{Straight}\\\text{Arc}\end{cases},\quad i=0,1,2,\ldots,m-1\qquad(7)$$

L(P_i, P_{i+1}) denotes the shape of the single-layer leather edge segment between the P_i-th and P_{i+1}-th feature points; Straight denotes a straight segment, glued by linear interpolation, and Arc denotes an arc segment, glued by curve interpolation; i = 0, 1, 2, …, m-1, there being m edge contour feature points.
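Formula (7) can be read as a dispatch rule when the coordinate queue is turned into a machining sequence. A minimal sketch follows; the LINE/ARC command names are illustrative placeholders, not an actual motion-controller API:

```python
def build_glue_program(points, shapes):
    # between consecutive feature points emit linear interpolation for a
    # "Straight" segment and circular interpolation for an "Arc" segment
    if len(shapes) != len(points) - 1:
        raise ValueError("need one segment shape per consecutive point pair")
    return [("LINE" if s == "Straight" else "ARC", points[i], points[i + 1])
            for i, s in enumerate(shapes)]
```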

Claims (5)

1. A single-layer leather gluing control method based on binocular vision, characterized in that the method adopts a binocular camera and a three-axis gluing mechanical arm and comprises the following steps:
1) carrying out three-dimensional calibration on the fixed binocular camera;
2) acquiring a single-layer leather binocular image and extracting an edge profile of the single-layer leather binocular image;
3) and carrying out feature extraction according to the edge contour image in the step 2):
the single-layer leather edge contour consists of a finite set of two-dimensional coordinate pairs (x_i, y_i), i = 1, 2, …, n, the contour being composed of n coordinate pairs; for any coordinate pair (x_i, y_i) on the contour the curvature is computed by formula (1), giving the curvatures K_i, i = 1, 2, …, n, of all coordinate pairs:

$$K_{i}=\frac{\dot{x}_{i}\ddot{y}_{i}-\ddot{x}_{i}\dot{y}_{i}}{\left(\dot{x}_{i}^{2}+\dot{y}_{i}^{2}\right)^{3/2}},\quad i=1,2,\ldots,n\qquad(1)$$

with $\dot{x}_{i}=\frac{x_{i+1}-x_{i-1}}{2}$, $\dot{y}_{i}=\frac{y_{i+1}-y_{i-1}}{2}$ the first central differences and $\ddot{x}_{i}$, $\ddot{y}_{i}$ the corresponding second differences, where (x_{i+1}, y_{i+1}) is the coordinate pair following (x_i, y_i) and (x_{i-1}, y_{i-1}) the coordinate pair preceding it.
The feature extraction is carried out according to the following steps:
(3.1) computing the local curvature threshold T(i) of the contour, taken proportional to the mean curvature of the local neighbourhood, B being a proportionality coefficient chosen in [1, 2];
(3.2) selecting the coordinate points satisfying K_i ≥ T(i) as candidate feature points P_i, i = 1, 2, …, s, s < n, there being s candidate feature points in total;
(3.3) calculating the included angle θ_{i+1} formed by the two tangent lines of each candidate feature point: with candidate feature point P_i as starting point and P_{i+1} as end point, the contour point P_{(i+(i+1))/2} midway between them gives, by formula (2), the theoretical circle centre (x_{i+1}^C, y_{i+1}^C); likewise, with P_{i+1} as starting point and P_{i+2} as end point, the midpoint P_{((i+1)+(i+2))/2} gives by formula (2) the theoretical circle centre (x_{i+2}^C, y_{i+2}^C):

$$x_{i+1}^{C}=\frac{(x_{i+1}^{2}+y_{i+1}^{2})(y_{m}-y_{i})+(x_{m}^{2}+y_{m}^{2})(y_{i}-y_{i+1})+(x_{i}^{2}+y_{i}^{2})(y_{i+1}-y_{m})}{2\left[x_{i+1}(y_{m}-y_{i})+x_{m}(y_{i}-y_{i+1})+x_{i}(y_{i+1}-y_{m})\right]}\qquad(2)$$

$$y_{i+1}^{C}=\frac{(x_{i+1}^{2}+y_{i+1}^{2})(x_{m}-x_{i})+(x_{m}^{2}+y_{m}^{2})(x_{i}-x_{i+1})+(x_{i}^{2}+y_{i}^{2})(x_{i+1}-x_{m})}{2\left[y_{i+1}(x_{m}-x_{i})+y_{m}(x_{i}-x_{i+1})+y_{i}(x_{i+1}-x_{m})\right]}$$

where the subscript m = (i+(i+1))/2 denotes the midpoint index, (x_{i+1}, y_{i+1}) are the coordinates of P_{i+1}, (x_{i+1}^C, y_{i+1}^C) the coordinates of the first theoretical circle centre, and (x_{i+2}^C, y_{i+2}^C) those of the second;
then formula (3) is used to calculate the angle τ_{i+1}^L between the line joining P_{i+1} to the first circle centre and the tangent at P_{i+1}, and the angle τ_{i+1}^R between the line joining P_{i+1} to the second circle centre and the tangent at P_{i+1}:

$$\tau_{i+1}^{L}=\arctan\frac{y_{i+1}^{C}-y_{i+1}}{x_{i+1}^{C}-x_{i+1}},\qquad \tau_{i+1}^{R}=\arctan\frac{y_{i+2}^{C}-y_{i+1}}{x_{i+2}^{C}-x_{i+1}}\qquad(3)$$

wherein θ_{i+1} is the sum of the two angles τ_{i+1}^{L} and τ_{i+1}^{R};
(3.4) judging from the value of θ_{i+1} whether the point is a required feature point and marking the shape of the line segment between feature points: if θ_{i+1} is greater than 170 degrees, the curvature of the local contour at the point is considered to change too little and the point is not a required feature point, the segment between feature points being marked Straight; otherwise the point is a required feature point and the segment is marked Arc;
4) matching according to the characteristic points in the step 3), wherein the characteristic matching comprises the following steps:
(4.1) taking the feature points of the left image of the binocular pair as reference, i = 0, 1, 2, …, m, where m < s, m indicating that the left image has m feature points;
(4.2) calculating the Euclidean distances to all feature points of the right image within 2 pixels of the aligned row, i = 0, 1, 2, …, k, where k < m, k indicating that k of the right image's m feature points lie within the 2-pixel range;
(4.3) taking the right-image feature point with the minimum Euclidean distance as the matching feature point, thereby determining the one-to-one correspondence of the feature points of the left and right images of the single-layer leather edge contour;
5) restoring the real coordinate value of the gluing platform system corresponding to the characteristic point in the step 4) according to the re-projection matrix;
6) and inputting the real coordinate values of the characteristic points into a gluing coordinate queue of the three-axis gluing mechanical arm so as to control the mechanical arm to finish the gluing process.
2. The binocular vision-based single-layer leather gluing control method as claimed in claim 1, wherein in step 6), gluing control between feature points is performed according to the line-segment shapes from step 3) and formula (4);
$$L(P_i, P_{i+1}) = \begin{cases} \text{Straight} \\ \text{Arc} \end{cases}, \quad i = 0, 1, 2, \ldots, m-1 \qquad (4)$$
wherein L(P_i, P_{i+1}) denotes the shape of the single-layer leather edge segment between the i-th and (i+1)-th feature points; Straight denotes a straight segment, glued by straight-line interpolation; Arc denotes an arc segment, glued by curve interpolation; and i = 0, 1, 2, …, m-1 indicates that there are m edge contour feature points.
3. The binocular vision-based single-layer leather gluing control method as claimed in claim 1 or 2, wherein in step 1), images for calibration analysis are obtained by photographing calibration objects in different postures; the intrinsic parameters (k, l, u_0, v_0, f, θ) and extrinsic parameters of each camera are obtained according to the linear camera model (5); and the 3×4 matrix M_{3×4} obtained by multiplying the intrinsic- and extrinsic-parameter matrices is the perspective projection matrix;
$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \frac{1}{\lambda}\begin{pmatrix} \frac{f}{k} & -\frac{f}{k}\cot\theta & u_0 & 0 \\ 0 & \frac{f}{l\sin\theta} & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} R_{3\times 3} & t_{3\times 1} \\ 0^T & 1 \end{pmatrix}\begin{pmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{pmatrix} = M_{3\times 4}\begin{pmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{pmatrix} \qquad (5)$$
wherein k, l are the pixel sizes of the binocular camera; u_0, v_0 are the coordinates of the optical-axis center of the camera; f is the focal length of the camera; θ is the skew angle of the camera coordinate system (generally 90°); R_{3×3} is the rotation matrix of the camera; t_{3×1} is the translation matrix of the camera; (u, v, 1)^T is any point in the image; and (X_W, Y_W, Z_W, 1)^T is the corresponding coordinate of that point in the gluing coordinate system;
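Equation (5) can be exercised numerically. The sketch below assembles M_{3×4} from an intrinsic matrix and an extrinsic [R|t] pose and projects a world point; all names and sample values are illustrative assumptions:

```python
import numpy as np

def project(K_int, R, t, Xw):
    """Project a world point with the linear camera model of equation
    (5): x = (1/lambda) K [R|t] Xw, returning pixel coordinates (u, v)."""
    K = np.hstack([K_int, np.zeros((3, 1))])                 # 3x4 intrinsic block
    Rt = np.vstack([np.hstack([R, t.reshape(3, 1)]),         # 4x4 extrinsic block
                    [0, 0, 0, 1]])
    M = K @ Rt                                               # perspective projection M_3x4
    x = M @ np.append(Xw, 1.0)
    return x[:2] / x[2]                                      # divide out lambda
```

With an identity pose, f = 2 and principal point (3, 4), the world point (1, 1, 2) lands at pixel (4, 5).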
the image rectification process is as follows:
(1.1) obtaining the rotation and translation matrices of the right camera relative to the left camera through formula (6) and the intrinsic and extrinsic parameters of the left and right cameras, and rectifying the left and right images into row alignment;
$$R = R_r (R_l)^T, \qquad T = T_r - R\,T_l \qquad (6)$$
wherein R_l, R_r are the rotation matrices of the left and right cameras, T_l, T_r are the translation matrices of the left and right cameras, and R, T are the rotation matrix and translation matrix of the right camera relative to the left camera.
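Formula (6) translates directly into code; the helper below is a minimal sketch with assumed names:

```python
import numpy as np

def relative_pose(Rl, Tl, Rr, Tr):
    """Equation (6): pose of the right camera relative to the left,
    R = Rr Rl^T and T = Tr - R Tl."""
    R = Rr @ Rl.T
    T = Tr - R @ Tl
    return R, T
```

For example, two cameras with identical orientation and the left camera translated by (1, 0, 0) give R = I and T = (-1, 0, 0), i.e. a pure baseline shift.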
(1.2) obtaining a reprojection matrix Q through the formula (7) and the internal and external parameters of the camera:
$$Q = \begin{pmatrix} 1 & 0 & 0 & -c_x \\ 0 & 1 & 0 & -c_y \\ 0 & 0 & 0 & f \\ 0 & 0 & 1/T_x & 0 \end{pmatrix} \qquad (7)$$
wherein T_x is the x-axis component of the binocular camera's extrinsic translation vector T, c_x and c_y are the world-coordinate values of the optical center of the left camera, and f is the focal length of the left camera.
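The coordinate restoration of step 5) can be sketched with the Q of equation (7): multiply the homogeneous pixel-disparity vector by Q and divide by the fourth component. The names and sample Q below are illustrative:

```python
import numpy as np

def reproject(u, v, disparity, Q):
    """Recover 3-D coordinates from a pixel (u, v) and its disparity
    via the reprojection matrix Q of equation (7):
    (X, Y, Z, W)^T = Q (u, v, d, 1)^T, world point = (X, Y, Z) / W."""
    X, Y, Z, W = Q @ np.array([u, v, disparity, 1.0])
    return np.array([X, Y, Z]) / W
```

With c_x = c_y = 0, f = 2 and 1/T_x = 2, the pixel (1, 1) at disparity 1 maps to the point (0.5, 0.5, 1.0).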
4. The binocular vision-based single-layer leather gluing control method as claimed in claim 1 or 2, wherein in step 2), threshold filtering is performed on the single-layer leather image according to the different RGB values of the leather and of the noise, converting it into a gray image; the specific process is as follows:
(2.1) smoothing the image by using a Gaussian function;
(2.2) adopting closed operation in mathematical morphology;
and (2.3) extracting the single-layer leather edge contour using the Canny operator.
5. The binocular vision-based single-layer leather gluing control method as claimed in claim 1 or 2, wherein: in the step 2), light sources are arranged on the periphery of the gluing platform, and binocular cameras are arranged right above the center of the gluing platform.
CN 201010587709 2010-12-14 2010-12-14 Method for controlling single-layer leather gluing based on binocular vision Active CN102107179B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010587709 CN102107179B (en) 2010-12-14 2010-12-14 Method for controlling single-layer leather gluing based on binocular vision


Publications (2)

Publication Number Publication Date
CN102107179A true CN102107179A (en) 2011-06-29
CN102107179B CN102107179B (en) 2013-07-24

Family

ID=44171526

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010587709 Active CN102107179B (en) 2010-12-14 2010-12-14 Method for controlling single-layer leather gluing based on binocular vision

Country Status (1)

Country Link
CN (1) CN102107179B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1030917A (en) * 1996-07-16 1998-02-03 Tsubakimoto Chain Co Object recognition method and device therefor, and recording medium
JP2002213929A (en) * 2000-10-27 2002-07-31 Korea Inst Of Science & Technology Method and apparatus for three-dimensional visual inspection of semiconductor package
CN101517615A (en) * 2006-09-29 2009-08-26 冲电气工业株式会社 Personal authentication system and personal authentication method
CN101876533A (en) * 2010-06-23 2010-11-03 北京航空航天大学 A Microscopic Stereo Vision Calibration Method


Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102354151B (en) * 2011-08-04 2013-06-05 浙江工业大学 Tangential following interpolation method applied to multilayer shoe leather numerical control cutting machine tool
CN102354151A (en) * 2011-08-04 2012-02-15 浙江工业大学 Tangential following interpolation method applied to multilayer shoe leather numerical control cutting machine tool
CN102783769A (en) * 2012-07-20 2012-11-21 浙江工业大学 Glue coating control method at small-curvature turning part
CN102783769B (en) * 2012-07-20 2015-06-03 浙江工业大学 Glue coating control method at small-curvature turning part
CN103841311A (en) * 2012-11-20 2014-06-04 广州三星通信技术研究有限公司 Method for generating 3D image and portable terminals
CN102981406B (en) * 2012-11-26 2016-02-24 浙江工业大学 A kind of sole glue spraying method for controlling thickness based on binocular vision
CN102981406A (en) * 2012-11-26 2013-03-20 浙江工业大学 Sole glue spraying thickness control method based on binocular vision
CN103316827A (en) * 2013-06-09 2013-09-25 深圳众为兴技术股份有限公司 Adhesive dispensing method and device
CN103316827B (en) * 2013-06-09 2015-10-28 深圳众为兴技术股份有限公司 A kind of dispensing method, device and spot gluing equipment
CN106583178A (en) * 2016-11-01 2017-04-26 浙江理工大学 Leather edge positioning method and device of automatic edge painting machine
CN106583178B (en) * 2016-11-01 2019-01-18 浙江理工大学 A kind of edge positioning method and device of the leather substance of automatic oil edge machine
CN106868229A (en) * 2017-01-05 2017-06-20 四川大学 A kind of device of the leather processed that stretches tight automatically
CN107413590A (en) * 2017-07-05 2017-12-01 佛山缔乐视觉科技有限公司 A kind of watchband automatic glue spreaders based on machine vision
CN107413590B (en) * 2017-07-05 2023-06-02 佛山缔乐视觉科技有限公司 Automatic spreading machine of watchband based on machine vision
CN107597497A (en) * 2017-09-08 2018-01-19 佛山缔乐视觉科技有限公司 A kind of automatic ceramic glue spreading apparatus and method based on machine vision
CN107726985A (en) * 2017-11-13 2018-02-23 易思维(天津)科技有限公司 A kind of three-dimensional gluing detection method and device in real time
CN107976147A (en) * 2017-12-11 2018-05-01 西安迈森威自动化科技有限公司 A kind of glass locating and detecting device based on machine vision
CN107976147B (en) * 2017-12-11 2019-08-06 西安迈森威自动化科技有限公司 A kind of glass locating and detecting device based on machine vision
CN108089544A (en) * 2017-12-25 2018-05-29 厦门大学嘉庚学院 A kind of orbit generation method and control system of sole glue spraying robot
CN108089544B (en) * 2017-12-25 2021-03-30 厦门大学嘉庚学院 Trajectory generation method and control system for sole glue spraying robot
CN109522935A (en) * 2018-10-22 2019-03-26 易思维(杭州)科技有限公司 The method that the calibration result of a kind of pair of two CCD camera measure system is evaluated
CN109522935B (en) * 2018-10-22 2021-07-02 易思维(杭州)科技有限公司 Method for evaluating calibration result of binocular vision measurement system
CN109046846A (en) * 2018-10-30 2018-12-21 石家庄辐科电子科技有限公司 A kind of intelligent circuit board paint spraying apparatus based on linear motor
CN109798831A (en) * 2018-12-28 2019-05-24 辽宁红沿河核电有限公司 A kind of Binocular vision photogrammetry method for fuel assembly
CN111122581A (en) * 2019-12-25 2020-05-08 北京中远通科技有限公司 Binocular vision detection system and method and glue spraying device
CN111664809A (en) * 2020-06-15 2020-09-15 苏州亿视智能科技有限公司 Intelligent high-precision modular three-dimensional detection equipment and method
CN112197715A (en) * 2020-10-27 2021-01-08 上海市特种设备监督检验技术研究院 Elevator brake wheel and brake shoe gap detection method based on image recognition
CN112197715B (en) * 2020-10-27 2022-07-08 上海市特种设备监督检验技术研究院 A method for detecting gap between elevator brake wheel and brake shoe based on image recognition
CN115846129A (en) * 2022-11-08 2023-03-28 成都市鸿侠科技有限责任公司 Special-shaped complex curved surface glue joint device
CN115846129B (en) * 2022-11-08 2023-12-15 成都市鸿侠科技有限责任公司 Special-shaped complex curved surface cementing device

Also Published As

Publication number Publication date
CN102107179B (en) 2013-07-24

Similar Documents

Publication Publication Date Title
CN102107179B (en) Method for controlling single-layer leather gluing based on binocular vision
CN114549746B (en) A high-precision true-color 3D reconstruction method
CN106780619B (en) Human body size measuring method based on Kinect depth camera
CN105354531B (en) A kind of mask method of face key point
WO2019105044A1 (en) Method and system for lens distortion correction and feature extraction
CN104748683B (en) A kind of on-line automatic measurement apparatus of Digit Control Machine Tool workpiece and measuring method
CN111932678B (en) Multi-view real-time human motion, gesture, expression and texture reconstruction system
CN112132958A (en) A 3D reconstruction method of underwater environment based on binocular vision
CN116072583B (en) Wafer pre-alignment platform and alignment method based on vision
CN101763643A (en) Automatic calibration method for structured light three-dimensional scanner system
CN105823416A (en) Method for measuring object through multiple cameras and device thereof
CN107423772A (en) A kind of new binocular image feature matching method based on RANSAC
CN103632366A (en) Parameter identification method for elliptical target
CN113129384B (en) Flexible calibration method of binocular vision system based on one-dimensional encoding target
CN103247048A (en) Camera mixing calibration method based on quadratic curve and straight lines
CN108154536A (en) The camera calibration method of two dimensional surface iteration
CN107220999A (en) The research of workpiece circular arc Edge Feature Points matching process
CN112907631A (en) Multi-RGB camera real-time human body motion capture system introducing feedback mechanism
CN110030979B (en) A method for measuring relative pose of non-cooperative targets in space based on sequence images
CN109784297A (en) A 3D object recognition and optimal grasping method based on deep learning
CN107328371A (en) Sub-pix contours extract based on Gaussian and the optimization using Softmax recurrence in the case where metal plate detects scene
CN104167001B (en) Large-visual-field camera calibration method based on orthogonal compensation
CN101661623B (en) Three-dimensional tracking method of deformable body based on linear programming
CN106570908A (en) Stereoscopic vision apparatus for testing scattering area of mother-son flyer
CN106408600A (en) Image registration method applied to solar high-resolution image

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant