
CN102213581A - Object measuring method and system - Google Patents

Object measuring method and system

Info

Publication number
CN102213581A
Authority
CN
China
Prior art keywords
capturing device
image capturing
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010101639565A
Other languages
Chinese (zh)
Other versions
CN102213581B (en)
Inventor
黄国唐
江博通
吕尚杰
谢伯璜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI filed Critical Industrial Technology Research Institute ITRI
Priority to CN201010163956.5A priority Critical patent/CN102213581B/en
Publication of CN102213581A publication Critical patent/CN102213581A/en
Application granted granted Critical
Publication of CN102213581B publication Critical patent/CN102213581B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

A method for measuring the three-dimensional coordinates of an object uses two non-parallel image capture devices to capture calibration points with known three-dimensional coordinates, from which the devices are calibrated to obtain their internal and external parameters and to derive the beam intersection collinear function of each device. The two image capture devices then capture images of the target object, and a processing module calculates the three-dimensional coordinates of the object from the beam intersection collinear functions. The invention can thus quickly obtain the three-dimensional coordinates and posture of a target object, improves the accuracy and convenience of measurement, and is suitable for a variety of working environments.

Description

Object measurement method and system

Technical Field

The present application relates to an object measurement method and system, and in particular to an object measurement method and system that uses two non-parallel image capture devices to calculate the three-dimensional coordinates of an object according to a beam intersection collinear function.

Background Art

With the rapid evolution of technology, more and more operating procedures in product design, industrial manufacturing, and high-precision fields rely on automated systems such as robots or robotic arms, so improving the operational efficiency of these systems has become an important issue. The key lies in enabling such systems to accurately identify the three-dimensional coordinates of objects in space; accordingly, various measurement methods capable of measuring the three-dimensional coordinates of objects have emerged.

For example, the object measurement method disclosed in U.S. Patent No. 6,795,200 first projects a structured light source onto the plane to be measured, then uses two cameras arranged in parallel to obtain images of the objects on that plane. In practice, however, arranging and configuring the structured light source places an additional burden on the user. Moreover, when the three-dimensional coordinates are computed by simple triangulation, the observation error of the cameras themselves cannot be taken into account, so the computed three-dimensional coordinates of the object lack accuracy, and inaccurate coordinates cause excessive error in the system's subsequent operations. The method of U.S. Patent No. 6,795,200 is therefore not only impractical but also unsuitable for high-precision applications.

Furthermore, in the object measurement method disclosed in U.S. Patent Application Publication No. 2006/0088203, multiple fixed cameras are first mounted above the work area to perform three-dimensional imaging of the objects in it, after which the three-dimensional coordinates of the objects are calculated by simple triangulation. However, three-dimensional imaging with multiple cameras fixed above the work area is costly and inflexible, and blind spots easily obstruct the imaging, so this method likewise cannot be applied in high-precision fields.

In addition, patent application WO 2008/076942 discloses an object measurement method in which a single camera is mounted on a movable robotic arm, which images the objects in the work area several times from different angles; the three-dimensional coordinates of the objects are then calculated by simple triangulation. However, imaging the objects multiple times from different angles with a single camera takes extra time, which raises the cost and lowers the practicality. Also, as with U.S. Patent No. 6,795,200 and U.S. Patent Application Publication No. 2006/0088203, three-dimensional coordinates obtained by simple triangulation cause excessive error in the system's subsequent operations, so this method also cannot be applied to extremely fine operations.

In view of this, providing an object measurement method and system that can obtain the three-dimensional coordinates of an object conveniently, quickly, and accurately, and that is applicable to high-precision fields, is an urgent problem awaiting a solution.

Summary of the Invention

To achieve the above and other objectives, the present invention proposes an object measurement method that measures an object using a pair of non-parallel first and second image capture devices, arranged side by side and rotated inward, together with a processing module connected to both devices. The method comprises the following steps: (1) the first and second image capture devices respectively capture a first image and a second image of at least one lens correction point with known three-dimensional coordinates, and the processing module applies a lens correction algorithm to the first and second images to obtain, respectively, a first lens distortion parameter of the first image capture device and a second lens distortion parameter of the second image capture device; (2) the first and second image capture devices capture the image coordinates of the same plurality of posture correction points with known three-dimensional coordinates, and the processing module substitutes the three-dimensional coordinates of the posture correction points, the first lens distortion parameter, and the second lens distortion parameter into a geometric function based on the beam intersection collinear imaging principle, the geometric function containing the unknown first lens center and first posture parameters of the first image capture device and the unknown second lens center and second posture parameters of the second image capture device; and (3) the processing module computes the geometric function with a preset algorithm to solve for the first lens center and first posture parameters of the first image capture device and the second lens center and second posture parameters of the second image capture device, and substitutes the solved first lens center, first posture parameters, second lens center, and second posture parameters back into the geometric function based on the beam intersection collinear imaging principle to produce a first beam intersection collinear function and a second beam intersection collinear function corresponding to the first and second image capture devices.

In a preferred aspect, the method further comprises step (4): the first and second image capture devices simultaneously capture the feature point coordinates of a target object, and the feature point coordinates captured by the first image capture device and those captured by the second image capture device are substituted into the first and second beam intersection collinear functions to calculate the three-dimensional space coordinates of the target object.

In another preferred aspect, the geometric function based on the beam collinear imaging principle in step (2) satisfies

x̄ = -f · [m11(XA - XL) + m12(YA - YL) + m13(ZA - ZL)] / [m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL)]

ȳ = -f · [m21(XA - XL) + m22(YA - YL) + m23(ZA - ZL)] / [m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL)]

which, expanded with the lens distortion terms, becomes

xc = k2·x̄^5 + (k1 + 2k2·ȳ^2)·x̄^3 + (3p1)·x̄^2 + (1 + k0 + k1·ȳ^2 + k2·ȳ^4 + 2p2·ȳ)·x̄ + p1·ȳ^2

yc = k2·ȳ^5 + (k1 + 2k2·x̄^2)·ȳ^3 + (3p2)·ȳ^2 + (1 + k0 + k1·x̄^2 + k2·x̄^4 + 2p2·x̄)·ȳ + p1·x̄^2

where (XA, YA, ZA) are the known three-dimensional coordinates of the posture correction point, (xc, yc) are the image coordinates of the posture correction point captured by the first/second image capture device, f is the known focal length of the first/second image capture device, k0, k1, k2, p1 and p2 are the first/second lens distortion parameters, and (XL, YL, ZL) is the first/second lens center, with m11 = cosφcosκ, m12 = sinωsinφcosκ + cosωsinκ, m13 = -cosωsinφcosκ + sinωsinκ, m21 = -cosφsinκ, m22 = -sinωsinφsinκ + cosωcosκ, m23 = cosωsinφsinκ + sinωcosκ, m31 = sinφ, m32 = -sinωcosφ and m33 = cosωcosφ, where ω, φ and κ are the first/second posture parameters.

Secondly, the present invention also proposes an object measurement method comprising the following steps: (1) the first and second image capture devices respectively capture a first image and a second image of at least one calibration point with known three-dimensional coordinates; (2) the processing module substitutes the parameters of the calibration points corresponding to the known three-dimensional coordinates in the first and second images into a geometric function based on the beam intersection collinear imaging principle, and computes the geometric function with a preset algorithm to solve for the first lens distortion parameter, first lens center, and first posture parameters of the first image capture device and the second lens distortion parameter, second lens center, and second posture parameters of the second image capture device; and (3) the processing module substitutes the solved first lens distortion parameter, first lens center, first posture parameters, second lens distortion parameter, second lens center, and second posture parameters into the geometric function based on the beam intersection collinear imaging principle to produce a first beam intersection collinear function and a second beam intersection collinear function corresponding to the first and second image capture devices.

In addition, the present invention further proposes an object measurement system comprising: a first image capture device and a second image capture device for capturing images of calibration points and of a target object, the first and second image capture devices being arranged side by side and rotated inward in a non-parallel configuration; and a processing module connected to the first and second image capture devices for performing lens correction and object measurement according to the images of the calibration points captured by the two devices. The processing module substitutes the parameters of the calibration-point images into a geometric function based on the beam intersection collinear imaging principle and computes the geometric function with a preset algorithm to solve for the first lens distortion parameter, first lens center, and first posture parameters of the first image capture device and the second lens distortion parameter, second lens center, and second posture parameters of the second image capture device. The first and second image capture devices then simultaneously capture the feature point coordinates of the target object, and the processing module substitutes the captured feature point coordinates, together with the solved lens distortion parameters, lens centers, and posture parameters, into the geometric function based on the beam intersection collinear imaging principle to calculate the three-dimensional space coordinates of the target object.

In summary, the present invention uses two non-parallel image capture devices to capture images of an object, derives the beam intersection collinear function of each device, and calculates the three-dimensional coordinates of the object from these functions. Because the image capture devices first capture images of calibration points with known three-dimensional coordinates, and lens correction and posture correction are performed on that basis before the object itself is imaged, the accuracy of the measured three-dimensional coordinates of the object is further improved.

Brief Description of the Drawings

FIG. 1A is a flowchart of the object measurement method of the present invention;

FIG. 1B is a flowchart of another embodiment of the object measurement method of the present invention;

FIG. 2 is an architecture diagram of the object measurement system of the present invention;

FIG. 3 is a beam intersection relationship diagram of the present invention;

FIG. 4A is a schematic diagram of the image capture devices of the present invention arranged in parallel; and

FIG. 4B is a schematic diagram of the image capture devices of the present invention arranged in a non-parallel manner.

[Description of Main Reference Numerals]

S1~S5, S1'~S4'    steps

1                 object measurement system

10, 10'           image capture device

11, 11'           steering mechanism

12                fixed base

13                processing module

2                 image frame

A1, A2            field-of-view intersection areas

Detailed Description of the Embodiments

The embodiments of the present invention are described below through specific examples; those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification.

Please refer to FIG. 1A, FIG. 1B, and FIG. 2 together. FIG. 1A and FIG. 1B are flowcharts of the object measurement method of the present invention, and FIG. 2 is an architecture diagram of the object measurement system of the present invention.

The process of FIG. 1A is applied, for example, in the object measurement system 1 shown in FIG. 2, which comprises at least one pair of non-parallel image capture devices 10 and 10' arranged side by side and rotated inward, steering mechanisms 11 and 11', a fixed base 12, and a processing module 13 connected to the image capture devices 10 and 10'.

In this embodiment, the image capture devices 10 and 10' may be, for example, video cameras or digital cameras containing a charge-coupled device (CCD), each fixed on a steering mechanism 11 or 11' that may be, for example, a movable turntable, and the steering mechanisms 11 and 11' are in turn rotatably mounted on a fixed base 12 marked with a ruler scale. The processing module 13 may be a computer or a microprocessor chip with logic operation capability.

In step S1, the image capture devices 10 and 10' connected to the processing module 13 are first rotatably mounted on the fixed base 12 via the steering mechanisms 11 and 11', and the steering angles of the steering mechanisms 11 and 11' are then adjusted according to the three-dimensional coordinates of at least one calibration point, so that the image capture devices 10 and 10' are simultaneously aimed at the calibration point and set on the fixed base 12 in a non-parallel configuration.

In a specific implementation, the spacing between the image capture devices 10 and 10' may be no more than 10 cm, for example 5 cm, and the fixed base 12 may further be mounted on a robot or robotic arm (not shown), with the processing module 13 built into the robot or robotic arm. Of course, the number of image capture devices 10, 10' and steering mechanisms 11, 11' may be increased according to the user's needs, and the processing module 13 may also be a simple data conversion device that transmits the data obtained by the image capture devices 10 and 10' through a transmission interface such as USB, IEEE 1394a, or IEEE 1394b to an external computing unit (not shown) for subsequent computation.

In step S2, the image capture devices 10 and 10' respectively capture a first image and a second image of at least one lens correction point with known three-dimensional coordinates, and the processing module 13 then obtains the lens distortion parameters of the image capture devices 10 and 10' from the first and second images through a lens correction algorithm. The process then proceeds to step S3.

In this embodiment, the processing module 13 first calculates the image coordinates of the lens correction points from the first and second images, and then uses a lens correction algorithm, for example one based on the lens distortion model, to obtain the lens distortion parameters of the image capture devices 10 and 10' from the image coordinates of the correction points in the first and second images; with the obtained lens distortion parameters, the curved distortion at the edge of the lens image can be straightened. The lens distortion parameters may also refer to the radial distortion and barrel distortion of the lenses of the image capture devices 10 and 10'.

In step S3, the image capture devices 10 and 10' simultaneously capture the image coordinates of the same plurality of posture correction points with known three-dimensional coordinates, and the processing module 13 substitutes the three-dimensional coordinates of the posture correction points and the lens distortion parameters of the image capture devices 10 and 10' into a geometric function based on the beam intersection collinear imaging principle, the geometric function containing the unknown lens centers and posture parameters of the image capture devices 10 and 10'. The process then proceeds to step S4.

In this embodiment, the geometric function based on the beam collinear imaging principle satisfies

x̄ = -f · [m11(XA - XL) + m12(YA - YL) + m13(ZA - ZL)] / [m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL)]

ȳ = -f · [m21(XA - XL) + m22(YA - YL) + m23(ZA - ZL)] / [m31(XA - XL) + m32(YA - YL) + m33(ZA - ZL)]

and, after the lens distortion vector is substituted, expands to

xc = k2·x̄^5 + (k1 + 2k2·ȳ^2)·x̄^3 + (3p1)·x̄^2 + (1 + k0 + k1·ȳ^2 + k2·ȳ^4 + 2p2·ȳ)·x̄ + p1·ȳ^2

yc = k2·ȳ^5 + (k1 + 2k2·x̄^2)·ȳ^3 + (3p2)·ȳ^2 + (1 + k0 + k1·x̄^2 + k2·x̄^4 + 2p2·x̄)·ȳ + p1·x̄^2

where (XA, YA, ZA) are the known three-dimensional coordinates of a posture correction point, (xc, yc) are the image coordinates of that posture correction point captured by the image capture device 10 or 10', f is the known focal length of the image capture devices 10 and 10', k0, k1, k2, p1 and p2 are the lens distortion parameters of the image capture devices 10 and 10', and (XL, YL, ZL) is the lens center of the image capture device 10 or 10'.
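The distortion expansion above can be sketched directly in code. The following is an illustrative example only, not part of the patent; the function name and parameter order are assumptions, and the polynomial is transcribed term by term as given, mapping ideal image coordinates (x̄, ȳ) to observed coordinates (xc, yc):

```python
def apply_distortion(x_bar, y_bar, k0, k1, k2, p1, p2):
    """Map ideal (undistorted) image coordinates (x_bar, y_bar) to the
    observed coordinates (x_c, y_c) using the polynomial model above."""
    x_c = (k2 * x_bar**5
           + (k1 + 2 * k2 * y_bar**2) * x_bar**3
           + 3 * p1 * x_bar**2
           + (1 + k0 + k1 * y_bar**2 + k2 * y_bar**4 + 2 * p2 * y_bar) * x_bar
           + p1 * y_bar**2)
    y_c = (k2 * y_bar**5
           + (k1 + 2 * k2 * x_bar**2) * y_bar**3
           + 3 * p2 * y_bar**2
           + (1 + k0 + k1 * x_bar**2 + k2 * x_bar**4 + 2 * p2 * x_bar) * y_bar
           + p1 * x_bar**2)
    return x_c, y_c
```

With all five parameters set to zero the mapping reduces to the identity, which is a quick sanity check on the transcription.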

Furthermore, m11 = cosφcosκ, m12 = sinωsinφcosκ + cosωsinκ, m13 = -cosωsinφcosκ + sinωsinκ, m21 = -cosφsinκ, m22 = -sinωsinφsinκ + cosωcosκ, m23 = cosωsinφsinκ + sinωcosκ, m31 = sinφ, m32 = -sinωcosφ and m33 = cosωcosφ, where ω, φ and κ are the posture parameters of the image capture devices 10 and 10'.
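The rotation matrix built from the posture parameters ω, φ, κ, and the collinearity projection of an object point through the lens center, can be sketched as follows. This is an illustrative example, not the patent's code; the function names and the list-based matrix layout are assumptions, but the nine matrix elements follow the definitions above:

```python
import math

def rotation_matrix(omega, phi, kappa):
    """Build M from the nine elements m11..m33 defined above."""
    sw, cw = math.sin(omega), math.cos(omega)
    sp, cp = math.sin(phi), math.cos(phi)
    sk, ck = math.sin(kappa), math.cos(kappa)
    return [
        [cp * ck,   sw * sp * ck + cw * sk,  -cw * sp * ck + sw * sk],
        [-cp * sk, -sw * sp * sk + cw * ck,   cw * sp * sk + sw * ck],
        [sp,       -sw * cp,                  cw * cp],
    ]

def project(point, lens_center, m, f):
    """Collinearity projection: object point (XA, YA, ZA) -> ideal image
    coordinates (x_bar, y_bar) for a camera with lens center (XL, YL, ZL),
    rotation matrix m, and focal length f."""
    dX = point[0] - lens_center[0]
    dY = point[1] - lens_center[1]
    dZ = point[2] - lens_center[2]
    denom = m[2][0] * dX + m[2][1] * dY + m[2][2] * dZ
    x_bar = -f * (m[0][0] * dX + m[0][1] * dY + m[0][2] * dZ) / denom
    y_bar = -f * (m[1][0] * dX + m[1][1] * dY + m[1][2] * dZ) / denom
    return x_bar, y_bar
```

With ω = φ = κ = 0 the matrix is the identity and a point straight ahead of the lens projects to image coordinates proportional to its lateral offset.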

In step S4, the processing module 13 computes the geometric function with a preset algorithm, for example a numerical iteration method or the least squares method, to simultaneously solve for the lens centers and posture parameters of the image capture devices 10 and 10', and substitutes the solved lens centers and posture parameters into the aforementioned geometric function based on the beam intersection collinear imaging principle to produce the beam intersection collinear functions corresponding to the image capture devices 10 and 10'.
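One way to picture the "preset algorithm" of step S4 is a Gauss-Newton least-squares iteration. The sketch below is a simplified, hypothetical illustration only: it solves just the lens center (XL, YL, ZL) of a single camera, assumes an identity rotation matrix and no lens distortion, and every function name is invented for this example; the patent's actual computation also solves the posture parameters ω, φ, κ:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            factor = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= factor * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def gauss_newton_lens_center(points, observations, f, guess, iters=30, eps=1e-6):
    """Recover the lens center from control points with known 3-D coordinates
    by iteratively linearizing the collinearity residuals (identity rotation)."""
    def residuals(L):
        r = []
        for (X, Y, Z), (xc, yc) in zip(points, observations):
            dX, dY, dZ = X - L[0], Y - L[1], Z - L[2]
            r.append(xc - (-f * dX / dZ))
            r.append(yc - (-f * dY / dZ))
        return r

    L = list(guess)
    for _ in range(iters):
        r = residuals(L)
        # Finite-difference Jacobian columns d(residuals)/d(L_j).
        cols = []
        for j in range(3):
            Lp = list(L)
            Lp[j] += eps
            rp = residuals(Lp)
            cols.append([(rp[i] - r[i]) / eps for i in range(len(r))])
        # Normal equations (J^T J) delta = -J^T r, then take the full step.
        A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(3)]
             for i in range(3)]
        b = [-sum(ci * ri for ci, ri in zip(cols[i], r)) for i in range(3)]
        L = [Lj + dj for Lj, dj in zip(L, solve3(A, b))]
    return L
```

With exact, noise-free observations the residuals vanish at the true lens center, so the iteration converges to it from a reasonable initial guess; with noisy observations the same scheme returns the least-squares estimate.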

To clearly explain steps S2 to S4, please refer to FIG. 3, which takes the image capture device 10 as an example to illustrate the positional relationship in three-dimensional space among a calibration point A(XA, YA, ZA), the image coordinates Aa(xc, yc) of the calibration point A(XA, YA, ZA), and the lens center L(XL, YL, ZL) of the image capture device 10.

First, A(XA, YA, ZA) serves as a lens correction point. After the image capture device 10 captures an image of A(XA, YA, ZA), it obtains an image frame 2 containing the image coordinates Aa(xc, yc) of A(XA, YA, ZA). Therefore, when the processing module 13 substitutes the values of several points A(XA, YA, ZA), the corresponding values of Aa(xc, yc), and the focal length f of the image capture device 10 into the above geometric function based on the beam collinear imaging principle, it can calculate the lens distortion parameters k0, k1, k2, p1, p2 of the image capture device 10 and thereby complete the lens correction of the image capture device 10. Lens correction of the image capture device 10' can of course be completed by the same method; it is worth mentioning that the lens corrections of the image capture devices 10 and 10' may be performed simultaneously or one after the other.

Next, A(XA, YA, ZA) serves as a posture correction point. After the processing module 13 calculates the values of the posture parameters ω, φ, κ of the image capture device 10, and since ω, φ, κ represent the deflection angles between the image capture device 10 and the direction axes in space, the processing module 13 can complete the posture correction of the image capture device 10 from the values of ω, φ, κ. It is worth mentioning that the posture corrections of the image capture devices 10 and 10' must be performed simultaneously.

Finally, the processing module 13 can solve for the lens center of the image capture device 10, namely L(XL, YL, ZL) in the figure. When the processing module 13 substitutes the solved values of the lens center L(XL, YL, ZL) and the posture parameters ω, φ, κ of the image capture device 10 into the geometric function based on the beam intersection collinear imaging principle, it produces the beam intersection collinear function corresponding to the image capture device 10; the beam intersection collinear function of the image capture device 10' can of course be produced by the same method.

In step S5, the image capture devices 10 and 10' simultaneously capture the feature point coordinates of a target object, and the processing module 13 substitutes the feature point coordinates captured by the image capture devices 10 and 10' into the beam intersection collinear functions of the image capture devices 10 and 10', respectively, and thereby calculates the three-dimensional space coordinates of the target object.

In this embodiment, the processing module 13 may further match the feature-point coordinates captured by the image capturing device 10 against those captured by the image capturing device 10' and judge their similarity, establish the plane equation of the target object from the matched points, and then calculate the three-dimensional spatial coordinates and attitude of the target object from the established plane equation.
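The patent does not spell out how the plane equation is formed from the matched feature points. One minimal approach, shown here purely as an assumption, fits a plane through three non-collinear triangulated feature points and reads the attitude off the plane's unit normal:

```python
import math

def plane_from_points(p1, p2, p3):
    # Plane a*x + b*y + c*z + d = 0 through three non-collinear 3-D
    # feature points; the unit normal (a, b, c) describes the
    # orientation (attitude) of the planar patch.
    u = [q - p for p, q in zip(p1, p2)]
    v = [q - p for p, q in zip(p1, p3)]
    n = [u[1] * v[2] - u[2] * v[1],      # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    a, b, c = (comp / norm for comp in n)
    d = -(a * p1[0] + b * p1[1] + c * p1[2])
    return a, b, c, d
```

With more than three matched points, a least-squares fit of the same plane model would be the natural extension.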

To explain step S5 more clearly, refer again to Fig. 3. Here, A(XA, YA, ZA) represents a feature point of the target object, Aa(xc, yc) represents the image coordinates of the feature point A(XA, YA, ZA), and L(XL, YL, ZL), as before, represents the lens center of the image capturing device 10.

Therefore, after the image capturing device 10 and the image capturing device 10' each capture an image of A(XA, YA, ZA), two sets of image coordinates Aa(xc, yc) of A(XA, YA, ZA) are obtained from the image frame 2. The processing module then substitutes the two sets of Aa(xc, yc) values back into the light-beam intersection collinear functions of the image capturing device 10 and of the image capturing device 10' respectively, solves for the values of A(XA, YA, ZA), and thereby obtains the spatial coordinates of the feature point of the target object.
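Substituting the two sets of image coordinates back into the two collinear functions and solving for A amounts to intersecting two back-projected viewing rays. The sketch below shows that intersection step in the midpoint-of-common-perpendicular form; it is our own illustration, and the names are not from the patent:

```python
def triangulate(L1, d1, L2, d2):
    # Least-squares intersection of two back-projected rays L_i + t*d_i:
    # returns the midpoint of the shortest segment joining the two
    # (possibly skew) lines.  A direction d_i can be recovered from an
    # image observation as M_i^T (x_c, y_c, -f_i), up to scale and sign.
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    w0 = [p - q for p, q in zip(L1, L2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # zero only when the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [p + s * u for p, u in zip(L1, d1)]
    p2 = [q + t * v for q, v in zip(L2, d2)]
    return [(x + y) / 2.0 for x, y in zip(p1, p2)]
```

When the two rays truly intersect, the midpoint is the intersection itself; with noisy image coordinates it is the natural least-squares estimate of A.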

It should be emphasized here that the position of a calibration point or target object must lie within the overlapping region of the fields of view of the image capturing device 10 and the image capturing device 10' for the spatial coordinates to be calculated accurately and stereoscopic vision to be produced; the size of this overlap region therefore directly affects the accuracy of the result. Moreover, the closer the overlap region of the two fields of view can lie to the cameras, the less likely it is that an object or calibration point will suffer close-range defocus from being too near the image capturing devices 10 and 10'. By arranging the image capturing device 10 and the image capturing device 10' non-parallel to each other, the present invention is therefore better suited to high-precision applications.

To clearly illustrate the advantage of arranging the image capturing device 10 and the image capturing device 10' non-parallel, refer to Figs. 4A and 4B. Fig. 4A is a schematic view of the fields of view when the image capturing device 10 and the image capturing device 10' of Fig. 2 are arranged in parallel, and Fig. 4B is a schematic view when they are arranged non-parallel. As shown in Fig. 4A, the parallel arrangement produces an overlap region A1 at a distance d1 from the devices; as shown in Fig. 4B, the non-parallel arrangement produces an overlap region A2 at a distance d2. Comparison shows that the area of A2 is larger than that of A1 and the distance d1 is longer than d2, so the non-parallel arrangement of the image capturing devices 10 and 10' is better suited to very fine-scale operations.
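The geometric comparison of Figs. 4A and 4B can be illustrated with a small 2-D calculation. This is our own sketch with hypothetical numbers, not taken from the patent:

```python
import math

def overlap_onset(baseline, half_fov_deg, toe_in_deg=0.0):
    # 2-D sketch: two cameras separated by `baseline`, each with half
    # field of view `half_fov_deg`, toed in toward each other by
    # `toe_in_deg`.  Returns the forward distance at which the two
    # fields of view begin to overlap (the crossing of the inner
    # view edges).
    angle = math.radians(half_fov_deg + toe_in_deg)
    return baseline / (2.0 * math.tan(angle))
```

With a hypothetical 10 cm baseline and a 30-degree half field of view, toeing each camera in by 10 degrees moves the onset of overlap closer to the cameras, consistent with the d2 < d1 comparison of Figs. 4A and 4B.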

In addition, refer again to Fig. 1B for a further embodiment of the object measuring method of the present invention.

In step S1' of this embodiment, the first image capturing device 10 and the second image capturing device 10' first capture a first image and a second image, respectively, of at least one calibration point of known three-dimensional coordinates. Next, in step S2', the processing module 13 substitutes the parameters corresponding to the first image and the second image into the geometric function based on the light-beam intersection collinear imaging principle and evaluates this function with a preset algorithm, thereby solving for the first lens distortion parameters, first lens center, and first attitude parameters of the first image capturing device 10, as well as the second lens distortion parameters, second lens center, and second attitude parameters of the second image capturing device 10'. In step S3', the processing module 13 substitutes the solved first lens distortion parameters, first lens center, first attitude parameters, second lens distortion parameters, second lens center, and second attitude parameters back into the aforementioned geometric function, thereby generating the first and second light-beam intersection collinear functions corresponding to the first and second image capturing devices.
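The lens distortion parameters k0, k1, k2, p1, p2 solved in step S2' enter the geometric function through the expanded polynomial given in claim 5. The sketch below transcribes that expansion as printed in the claim; note that the final terms of the y_c line (2·p2·x̄ and p1·x̄²) follow the claim verbatim, although symmetric Brown-style distortion models usually swap p1 and p2 in those positions.

```python
def distort(x_bar, y_bar, k0, k1, k2, p1, p2):
    # Expanded lens-distortion polynomial as written in claim 5 of this
    # patent: maps ideal image coordinates (x_bar, y_bar) to observed
    # coordinates (x_c, y_c) using radial terms k0..k2 and decentering
    # terms p1, p2.
    x_c = (k2 * x_bar**5
           + (k1 + 2.0 * k2 * y_bar**2) * x_bar**3
           + 3.0 * p1 * x_bar**2
           + (1.0 + k0 + k1 * y_bar**2 + k2 * y_bar**4 + 2.0 * p2 * y_bar) * x_bar
           + p1 * y_bar**2)
    y_c = (k2 * y_bar**5
           + (k1 + 2.0 * k2 * x_bar**2) * y_bar**3
           + 3.0 * p2 * y_bar**2
           + (1.0 + k0 + k1 * x_bar**2 + k2 * x_bar**4 + 2.0 * p2 * x_bar) * y_bar
           + p1 * x_bar**2)
    return x_c, y_c
```

With all five parameters zero the mapping reduces to the identity, and with only k0 nonzero it is a uniform scale of (1 + k0), which is a quick sanity check on the transcription.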

Of course, in step S4' of this embodiment, the first image capturing device 10 and the second image capturing device 10' may then simultaneously capture the feature-point coordinates of a target object, and the feature-point coordinates captured by the first image capturing device 10 and by the second image capturing device 10' are substituted into the aforementioned first and second light-beam intersection collinear functions, respectively, to calculate the three-dimensional spatial coordinates of the target object.

It should be noted that, unlike the previous embodiment, this embodiment captures the calibration-point image only once (that is, the first image capturing device 10 and the second image capturing device 10' each capture a single calibration image, although that image may contain multiple calibration points), and the processing module 13 of this embodiment solves simultaneously for the first lens distortion parameters, first attitude parameters, and first lens center of the first image capturing device 10, and the second lens distortion parameters, second attitude parameters, and second lens center of the second image capturing device 10'.

In other words, by adjusting, for example, the steering angles of the first image capturing device 10 and the second image capturing device 10', this embodiment replaces the separate lens calibration points and attitude calibration points of the previous embodiment with a single set of calibration points, after which the processing module 13 completes the lens calibration and attitude calibration of the first image capturing device 10 and the second image capturing device 10' simultaneously. The computation method and the related parameters and functions of this embodiment are the same as in the previous embodiment and are not repeated here.

In summary, the present invention uses two non-parallel image capturing devices to capture images of a target object and calculates the three-dimensional coordinates of the object from the light-beam intersection collinear functions of those devices, so the three-dimensional coordinates of the target object can be obtained conveniently, quickly, and accurately. Moreover, because these image capturing devices first capture images of calibration points of known three-dimensional coordinates, and lens calibration and attitude calibration are performed accordingly before the object is imaged, the accuracy of the measured three-dimensional coordinates is further improved. Accordingly, the present invention not only obtains the three-dimensional coordinates and attitude of a target object quickly, but also improves the accuracy and convenience of the measurement, which benefits applications in a variety of working environments.

The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Those skilled in the art may modify and vary the above embodiments without departing from the spirit and scope of the invention. The scope of protection of the invention shall therefore be as set forth in the appended claims.

Claims (20)

1. An object measuring method that measures an object using a first image capturing device and a second image capturing device arranged non-parallel to each other and a processing module connected to the first and second image capturing devices, the method comprising the steps of:
(1) causing the first image capturing device and the second image capturing device to respectively capture a first image and a second image of at least one calibration point of known spatial coordinates;
(2) causing the processing module to substitute the parameters in the first image and the second image corresponding to the calibration point of known spatial coordinates into a geometric function based on the light-beam intersection collinear imaging principle, and to evaluate the geometric function with a preset algorithm so as to solve for first lens distortion parameters, a first lens center, and first attitude parameters of the first image capturing device, and second lens distortion parameters, a second lens center, and second attitude parameters of the second image capturing device; and
(3) causing the processing module to substitute the solved first lens distortion parameters, first lens center, first attitude parameters, second lens distortion parameters, second lens center, and second attitude parameters into the geometric function based on the light-beam intersection collinear imaging principle, so as to generate a first light-beam intersection collinear function and a second light-beam intersection collinear function corresponding to the first and second image capturing devices.
2. The object measuring method of claim 1, further comprising a step (4) of causing the first image capturing device and the second image capturing device to simultaneously capture feature-point coordinates of a target object, and substituting the feature-point coordinates captured by the first image capturing device and those captured by the second image capturing device into the first and second light-beam intersection collinear functions, respectively, to calculate the three-dimensional spatial coordinates of the target object.
3. The object measuring method of claim 2, wherein step (4) further comprises matching the feature-point coordinates of the first image capturing device against those of the second image capturing device and judging their similarity so as to establish a plane equation of the target object, and calculating the three-dimensional spatial coordinates and attitude of the target object from the plane equation.
4. The object measuring method of claim 1, wherein the algorithm of step (2) uses an odd transformation to adjust the distortion curve at the edge of the lens image into a straight line, and the preset algorithm is an iterative numerical procedure.
5. The object measuring method of claim 1, wherein the geometric function based on the light-beam intersection collinear imaging principle in step (2) satisfies
$$x_c = -f\left[\frac{m_{11}(X_A-X_L)+m_{12}(Y_A-Y_L)+m_{13}(Z_A-Z_L)}{m_{31}(X_A-X_L)+m_{32}(Y_A-Y_L)+m_{33}(Z_A-Z_L)}\right],\quad y_c = -f\left[\frac{m_{21}(X_A-X_L)+m_{22}(Y_A-Y_L)+m_{23}(Z_A-Z_L)}{m_{31}(X_A-X_L)+m_{32}(Y_A-Y_L)+m_{33}(Z_A-Z_L)}\right]$$
which, after expansion, becomes
$$x_c = k_2\bar{x}^5+(k_1+2k_2\bar{y}^2)\bar{x}^3+(3p_1)\bar{x}^2+(1+k_0+k_1\bar{y}^2+k_2\bar{y}^4+2p_2\bar{y})\bar{x}+p_1\bar{y}^2$$
$$y_c = k_2\bar{y}^5+(k_1+2k_2\bar{x}^2)\bar{y}^3+(3p_2)\bar{y}^2+(1+k_0+k_1\bar{x}^2+k_2\bar{x}^4+2p_2\bar{x})\bar{y}+p_1\bar{x}^2$$
where $(X_A, Y_A, Z_A)$ are the known spatial coordinates of the calibration point; $(x_c, y_c)$ are the image coordinates of the calibration point as captured by the first/second image capturing device; $f$ is the known focal length of the first/second image capturing device; $k_0, k_1, k_2, p_1, p_2$ are the first/second lens distortion parameters; $(X_L, Y_L, Z_L)$ is the first/second lens center; $m_{11}=\cos\phi\cos\kappa$, $m_{12}=\sin\omega\sin\phi\cos\kappa+\cos\omega\sin\kappa$, $m_{13}=-\cos\omega\sin\phi\cos\kappa+\sin\omega\sin\kappa$, $m_{21}=-\cos\phi\sin\kappa$, $m_{22}=-\sin\omega\sin\phi\sin\kappa+\cos\omega\cos\kappa$, $m_{23}=\cos\omega\sin\phi\sin\kappa+\sin\omega\cos\kappa$, $m_{31}=\sin\phi$, $m_{32}=-\sin\omega\cos\phi$, and $m_{33}=\cos\omega\cos\phi$; and $\omega, \phi, \kappa$ are the first/second attitude parameters.
6. The object measuring method of claim 1, wherein the first/second lens distortion parameters refer to the radial distortion and tubular distortion of the lens of the first/second image capturing device.
7. An object measuring method that measures an object using a first image capturing device and a second image capturing device arranged non-parallel to each other and a processing module connected to the first and second image capturing devices, the method comprising the steps of:
(1) causing the first image capturing device and the second image capturing device to respectively capture a first image and a second image of at least one lens calibration point of known spatial coordinates, and then causing the processing module to obtain, by a lens calibration algorithm, first lens distortion parameters of the first image capturing device and second lens distortion parameters of the second image capturing device from the first image and the second image, respectively;
(2) causing the first image capturing device and the second image capturing device to capture image coordinates of the same plurality of attitude calibration points of known spatial coordinates, and then causing the processing module to substitute the spatial coordinates of the attitude calibration points, the first lens distortion parameters, and the second lens distortion parameters into a geometric function based on the light-beam intersection collinear imaging principle, wherein the geometric function contains the unknown first lens center and first attitude parameters of the first image capturing device and the unknown second lens center and second attitude parameters of the second image capturing device; and
(3) causing the processing module to evaluate the geometric function with a preset algorithm so as to solve for the first lens center and first attitude parameters of the first image capturing device and the second lens center and second attitude parameters of the second image capturing device, and to substitute the solved first lens center, first attitude parameters, second lens center, and second attitude parameters into the geometric function based on the light-beam intersection collinear imaging principle, so as to generate a first light-beam intersection collinear function and a second light-beam intersection collinear function corresponding to the first and second image capturing devices.
8. The object measuring method of claim 7, further comprising a step (4) of causing the first image capturing device and the second image capturing device to simultaneously capture feature-point coordinates of a target object, and substituting the feature-point coordinates captured by the first image capturing device and those captured by the second image capturing device into the first and second light-beam intersection collinear functions, respectively, to calculate the three-dimensional spatial coordinates of the target object.
9. The object measuring method of claim 8, wherein step (4) further comprises matching the feature-point coordinates of the first image capturing device against those of the second image capturing device and judging their similarity so as to establish a plane equation of the target object, and calculating the three-dimensional spatial coordinates and attitude of the target object from the plane equation.
10. The object measuring method of claim 7, wherein the lens calibration algorithm of step (1) uses an odd transformation to adjust the distortion curve at the edge of the lens image into a straight line.
11. The object measuring method of claim 7, wherein the geometric function based on the light-beam intersection collinear imaging principle in step (2) satisfies
$$x_c = -f\left[\frac{m_{11}(X_A-X_L)+m_{12}(Y_A-Y_L)+m_{13}(Z_A-Z_L)}{m_{31}(X_A-X_L)+m_{32}(Y_A-Y_L)+m_{33}(Z_A-Z_L)}\right],\quad y_c = -f\left[\frac{m_{21}(X_A-X_L)+m_{22}(Y_A-Y_L)+m_{23}(Z_A-Z_L)}{m_{31}(X_A-X_L)+m_{32}(Y_A-Y_L)+m_{33}(Z_A-Z_L)}\right]$$
which, after expansion, becomes
$$x_c = k_2\bar{x}^5+(k_1+2k_2\bar{y}^2)\bar{x}^3+(3p_1)\bar{x}^2+(1+k_0+k_1\bar{y}^2+k_2\bar{y}^4+2p_2\bar{y})\bar{x}+p_1\bar{y}^2$$
$$y_c = k_2\bar{y}^5+(k_1+2k_2\bar{x}^2)\bar{y}^3+(3p_2)\bar{y}^2+(1+k_0+k_1\bar{x}^2+k_2\bar{x}^4+2p_2\bar{x})\bar{y}+p_1\bar{x}^2$$
where $(X_A, Y_A, Z_A)$ are the known spatial coordinates of the attitude calibration point; $(x_c, y_c)$ are the image coordinates of the attitude calibration point as captured by the first/second image capturing device; $f$ is the known focal length of the first/second image capturing device; $k_0, k_1, k_2, p_1, p_2$ are the first/second lens distortion parameters; $(X_L, Y_L, Z_L)$ is the first/second lens center; $m_{11}=\cos\phi\cos\kappa$, $m_{12}=\sin\omega\sin\phi\cos\kappa+\cos\omega\sin\kappa$, $m_{13}=-\cos\omega\sin\phi\cos\kappa+\sin\omega\sin\kappa$, $m_{21}=-\cos\phi\sin\kappa$, $m_{22}=-\sin\omega\sin\phi\sin\kappa+\cos\omega\cos\kappa$, $m_{23}=\cos\omega\sin\phi\sin\kappa+\sin\omega\cos\kappa$, $m_{31}=\sin\phi$, $m_{32}=-\sin\omega\cos\phi$, and $m_{33}=\cos\omega\cos\phi$; and $\omega, \phi, \kappa$ are the first/second attitude parameters.
12. The object measuring method of claim 7, wherein the preset algorithm of step (3) is an iterative numerical procedure.
13. The object measuring method of claim 7, wherein the first/second lens distortion parameters refer to the radial distortion and tubular distortion of the lens of the first/second image capturing device.
14. An object measuring system, comprising:
a first image capturing device and a second image capturing device for capturing images of calibration points and of a target object, wherein the first image capturing device and the second image capturing device are arranged non-parallel to each other; and
a processing module, connected to the first and second image capturing devices, for performing lens calibration and object measurement according to the images of the calibration points captured by the first and second image capturing devices, wherein the processing module substitutes the parameters of the images of the calibration points into a geometric function based on the light-beam intersection collinear imaging principle and evaluates the geometric function with a preset algorithm so as to solve for first lens distortion parameters, a first lens center, and first attitude parameters of the first image capturing device, and second lens distortion parameters, a second lens center, and second attitude parameters of the second image capturing device; the processing module then causes the first image capturing device and the second image capturing device to simultaneously capture feature-point coordinates of the target object, and substitutes the feature-point coordinates captured by the first and second image capturing devices, together with the first lens distortion parameters, first lens center, first attitude parameters, second lens distortion parameters, second lens center, and second attitude parameters, into the geometric function based on the light-beam intersection collinear imaging principle, to calculate the three-dimensional spatial coordinates of the target object.
15. The object measuring system of claim 14, further comprising a first steering mechanism connected to the first image capturing device, a second steering mechanism connected to the second image capturing device, and a fixed base connected to the first and second steering mechanisms, wherein the first and second steering mechanisms are movably disposed on the fixed base so that the steering angles of the first and second steering mechanisms can be adjusted according to the three-dimensional spatial coordinates of the calibration points and/or the target object, and the first and second image capturing devices are disposed on the fixed base in a non-parallel manner.
16. The object measuring system of claim 15, wherein the steering mechanisms are disposed on the fixed base at an interval, so that there is a spacing between the non-parallel first and second image capturing devices.
17. The object measuring system of claim 16, wherein the spacing is less than 10 centimeters.
18. The object measuring system of claim 14, wherein the lens calibration refers to instructing the processing module to obtain the first lens distortion parameters of the first image capturing device and the second lens distortion parameters of the second image capturing device, and the attitude calibration refers to instructing the processing module to solve for the first attitude parameters of the first image capturing device and the second attitude parameters of the second image capturing device.
19. The object measuring system of claim 14, wherein the first/second image capturing device is a video camera or a camera comprising a charge-coupled device.
20. The object measuring system of claim 14, wherein the processing module is a computer or a microprocessor chip with logic operation capability.
CN201010163956.5A 2010-04-08 2010-04-08 object measuring method and system Active CN102213581B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010163956.5A CN102213581B (en) 2010-04-08 2010-04-08 object measuring method and system


Publications (2)

Publication Number Publication Date
CN102213581A true CN102213581A (en) 2011-10-12
CN102213581B CN102213581B (en) 2016-06-08

Family

ID=44744984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010163956.5A Active CN102213581B (en) 2010-04-08 2010-04-08 object measuring method and system

Country Status (1)

Country Link
CN (1) CN102213581B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109506674A (en) * 2017-09-15 2019-03-22 高德信息技术有限公司 A kind of bearing calibration of acceleration and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06265326A (en) * 1993-03-16 1994-09-20 Kawasaki Steel Corp Calibrating device for plate width/zigzag movement measuring apparatus using two-dimensional rangefinder
CN101334276A (en) * 2007-06-27 2008-12-31 中国科学院自动化研究所 A visual measurement method and device
WO2010011124A1 (en) * 2008-07-21 2010-01-28 Vitrox Corporation Bhd A method and means for measuring positions of contact elements of an electronic components


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Haopeng: "Research on Binocular Stereo Vision and a Pipe-End Vision Measurement System", China Master's Theses Full-text Database, Information Science and Technology, no. 11, 15 November 2009 (2009-11-15) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109506674A (en) * 2017-09-15 2019-03-22 高德信息技术有限公司 A kind of bearing calibration of acceleration and device
CN109506674B (en) * 2017-09-15 2021-05-25 阿里巴巴(中国)有限公司 Acceleration correction method and device

Also Published As

Publication number Publication date
CN102213581B (en) 2016-06-08

Similar Documents

Publication Publication Date Title
TWI420066B (en) Object measuring method and system
CN108489395B (en) Vision measurement system structural parameters calibration and affine coordinate system construction method and system
CN108700408B (en) Three-dimensional shape data and texture information generation system, method and shooting control method
US9679385B2 (en) Three-dimensional measurement apparatus and robot system
US9715730B2 (en) Three-dimensional measurement apparatus and robot system
CN111801198A (en) Hand-eye calibration method, system and computer storage medium
CN104019745B (en) Based on the free planar dimension measuring method of single visual feel indirect calibration method
CN104424630A (en) Three-dimension reconstruction method and device, and mobile terminal
JP2005201824A (en) Measuring device
CN109272555B (en) A method of obtaining and calibrating external parameters of RGB-D camera
CN102013099A (en) Interactive calibration method for external parameters of vehicle video camera
CN106920261A (en) A kind of Robot Hand-eye static demarcating method
CN109465830B (en) Robot monocular stereoscopic vision calibration system and method
CN110398208A (en) Big data deformation monitoring method based on photographic total station system
CN107230233A (en) The scaling method and device of telecentric lens 3-D imaging system based on bundle adjustment
CN110779491A (en) Method, device and equipment for measuring distance of target on horizontal plane and storage medium
CN112229323B (en) Six-degree-of-freedom measurement method of checkerboard cooperative target based on monocular vision of mobile phone and application of six-degree-of-freedom measurement method
JP6410411B2 (en) Pattern matching apparatus and pattern matching method
CN102081798A (en) Epipolar rectification method for fish-eye stereo camera pair
CN113362399B (en) Calibration method for positions and postures of focusing mirror and screen in deflection measurement system
JP2015031601A (en) Three-dimensional measurement instrument, method, and program
JP7427370B2 (en) Imaging device, image processing device, image processing method, calibration method for imaging device, robot device, method for manufacturing articles using robot device, control program, and recording medium
JP5487946B2 (en) Camera image correction method, camera apparatus, and coordinate transformation parameter determination apparatus
CN109813277B (en) Construction method of ranging model, ranging method and device and automatic driving system
CN110211175A (en) Alignment laser light beam spatial pose scaling method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant