
CN107270875A - Visual feature three-dimensional reconstruction method under motion blur effects - Google Patents

Visual feature three-dimensional reconstruction method under motion blur effects Download PDF

Info

Publication number
CN107270875A
CN107270875A
Authority
CN
China
Prior art keywords
image
coded
point
motion blur
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710321151.0A
Other languages
Chinese (zh)
Other versions
CN107270875B (en)
Inventor
张丽艳
陈明军
周含策
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN201710321151.0A priority Critical patent/CN107270875B/en
Publication of CN107270875A publication Critical patent/CN107270875A/en
Application granted granted Critical
Publication of CN107270875B publication Critical patent/CN107270875B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; photographic surveying

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The present invention provides a method for three-dimensional reconstruction of visual features under the motion blur effect. The steps are: calibrate the cameras to be used; arrange coded marker points on the surface of the measured object; acquire motion-blurred images; identify the identities of the coded markers in the images; for each coded marker, use the time-series images taken at different moments to roughly locate its spatial position at each moment and fit a spline curve through these positions as the initial value of its spatial motion trajectory; construct a blurred imaging model of the coded marker's motion; and, within each exposure time, solve for the motion path and attitude by optimization based on the blurred imaging model. Even in the presence of motion blur, the invention recovers the center position and attitude of each coded marker within the exposure time, yielding both the three-dimensional information of the measured object's surface and its motion information during the exposure. The invention extends vision-based measurement to dynamic scenes and plays an important role in the analysis, design, and reverse engineering of high-speed moving parts.

Description

3D reconstruction method of visual features under the effect of motion blur

Technical field:

The invention relates to a method for three-dimensional reconstruction of visual features under the motion blur effect, belonging to the field of machine vision measurement.

Background:

Coded markers are widely used in machine-vision-based industrial measurement and reverse engineering. Before a measurement, coded markers are placed on the surface of the measured object. From a set of images of the object taken by two or more calibrated cameras, the spatial positions of the coded markers can be reconstructed, yielding the three-dimensional parameters of the measured object.

When the measured object moves at high speed, the acquired images are blurred, and the traditional methods for identifying coded markers fail. Existing techniques for locating the centers of coded markers in sharp images are entirely unsuitable for identifying the markers in blurred images, let alone for locating the markers' centers in such images.

Summary of the invention:

To solve the above problems of the prior art, the present invention provides a method for three-dimensional reconstruction of visual features under the motion blur effect, which can recover the precise position of a coded marker's center at any moment during the exposure even when motion blur degrades the marker's image.

The technical solution adopted by the present invention is a method for three-dimensional reconstruction of visual features under the motion blur effect, comprising the following steps:

Step 1: calibrate the cameras to be used;

Step 2: arrange coded marker points on the surface of the measured object;

Step 3: acquire motion-blurred images;

Step 4: identify the identities of the coded markers in the images;

Step 5: for each coded marker, use the time-series images taken at different moments to roughly locate its spatial position at each moment and fit a spline curve through these positions as the initial value of its spatial motion trajectory;

Step 6: construct a blurred imaging model of the coded marker's motion;

Step 7: within each exposure time, solve for the motion path and attitude by optimization based on the blurred imaging model.

The present invention has the following beneficial effects: even in the presence of motion blur, it recovers the center position and attitude of each coded marker within the exposure time, thereby obtaining the three-dimensional information of the measured object's surface together with its motion information during the exposure. The invention extends vision-based measurement to dynamic scenes and will play an important role in the analysis, design, and reverse engineering of high-speed moving parts.

Description of the drawings:

Figure 1 is a schematic diagram of the exposure timing.

Figure 2 is a schematic diagram of a sharp coded marker.

Figure 3 is a schematic diagram of a motion-blurred coded marker.

Detailed description:

The present invention is further described below with reference to the accompanying drawings.

The three-dimensional reconstruction method of visual features under the motion blur effect of the present invention comprises the following steps:

1. Calibrate a pair of cameras, denoted the left camera C0 and the right camera C1. Their imaging matrices are denoted P0 and P1 respectively, and the distortion coefficient vectors of the two camera lenses are denoted d^(c), c=0,1.

2. Select a threshold Tp for epipolar-constraint checking.

3. Select the identity numbers of the coded markers to be used. Each identity number is a natural number between 1 and N0, where N0 is the total number of markers in the full coded set. Denote the set of selected identity numbers by ID, and the total number of selected markers by N.

4. For each id_n in ID, n=1,2,...,N, prepare the image M_n of the corresponding coded marker. All images have the same pixel size, with equal numbers of pixels in width and height, denoted z.

5. From each M_n, print a physical coded marker sticker with side length l.

6. Paste the stickers onto the surface of the measured object. Each identity number appears at most once in a single measurement.

7. Acquire the motion-blurred image set, i.e., pairs of images captured by the two cameras at multiple moments. Because of the motion blur effect, the coded markers in each image are blurred to different degrees. Denote by I_k^(c) the k-th image captured by the left (c=0) or right (c=1) camera. K is the total number of shots. Each shot lasts Δt and starts at time T_k, k=1,2,...,K; the exposure intervals of consecutive shots do not overlap. The gap between the end of one exposure and the start of the next is the same for all shots and is denoted ΔT.

8. Using the lens distortion coefficient vectors d^(c), correct the lens distortion in each captured image I_k^(c) (the k-th image from camera c); denote the undistorted result by J_k^(c).
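As an illustration of this undistortion step, the sketch below inverts a simple two-coefficient radial (Brown) distortion model on normalized image coordinates by fixed-point iteration. The model and the coefficient values are assumptions for the example, not the patent's actual distortion vectors d^(c):

```python
import numpy as np

def distort(p, k1, k2):
    # Brown radial model on normalized coordinates:
    # p_d = p * (1 + k1*r^2 + k2*r^4)
    r2 = p[0] ** 2 + p[1] ** 2
    s = 1.0 + k1 * r2 + k2 * r2 ** 2
    return np.array([p[0] * s, p[1] * s])

def undistort(pd, k1, k2, iters=20):
    # Fixed-point iteration: start from the distorted point,
    # repeatedly divide by the scale evaluated at the current estimate.
    p = pd.copy()
    for _ in range(iters):
        r2 = p[0] ** 2 + p[1] ** 2
        s = 1.0 + k1 * r2 + k2 * r2 ** 2
        p = pd / s
    return p

p = np.array([0.3, -0.2])           # hypothetical undistorted point
pd = distort(p, k1=-0.1, k2=0.01)   # apply distortion
p_rec = undistort(pd, k1=-0.1, k2=0.01)
```

For mild distortion the iteration contracts quickly, so a few dozen iterations recover the undistorted coordinates to high precision.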

9. Segment each undistorted image J_k^(c) so that every resulting patch contains exactly one complete blurred image of a coded marker. Denote the number of patches obtained from J_k^(c) by S_k^(c), and the patches themselves by B_{k,s}^(c), where c=0,1 corresponds to the left and right camera, k=1,2,...,K to the shooting order, and s to the s-th patch of J_k^(c). The pixel coordinates of the center of B_{k,s}^(c) within J_k^(c) are denoted (u_{k,s}^(c), v_{k,s}^(c)).

10. To identify the blurred coded markers, the patches B_{k,s}^(c) must first be preprocessed. The identification method is as follows: (1) generate, by computer simulation, motion-blurred images of the different coded markers under a variety of motions; (2) construct a deep convolutional neural network; (3) train the network on the simulated images; (4) use the trained network to identify each actually captured motion-blurred marker image and obtain its identity id. This method uses a deep convolutional network, MBCNet, to identify motion-blurred coded markers; constructing the network requires specifying the width and height of its input layer, and w denotes the width and height (in pixels) of the images required by the input layer.

11. Preprocess the size of each patch. Let the width of a patch be wH pixels and its height wV pixels. (1) If wH=wV=w, no processing is needed. (2) If max{wH,wV}≠w, scale the patch by the factor w/max{wH,wV}, then pad it symmetrically (above and below, or left and right) with blank regions of the same gray level as the background, so that both its width and height become w pixels. The preprocessed patch is denoted B'_{k,s}^(c).
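A minimal sketch of this scale-and-pad preprocessing (nearest-neighbour scaling is an assumption here; the patent does not specify the interpolation method):

```python
import numpy as np

def preprocess_patch(patch, w, background=0):
    """Scale a patch so its longer side equals w, then pad the shorter
    side symmetrically with the background grey level to get w x w."""
    hV, hH = patch.shape                  # height, width in pixels
    scale = w / max(hH, hV)
    new_h = max(1, round(hV * scale))
    new_w = max(1, round(hH * scale))
    # Nearest-neighbour resize via index lookup.
    rows = np.clip((np.arange(new_h) / scale).astype(int), 0, hV - 1)
    cols = np.clip((np.arange(new_w) / scale).astype(int), 0, hH - 1)
    resized = patch[np.ix_(rows, cols)]
    # Symmetric padding with the background grey level.
    out = np.full((w, w), background, dtype=patch.dtype)
    top = (w - new_h) // 2
    left = (w - new_w) // 2
    out[top:top + new_h, left:left + new_w] = resized
    return out

# Hypothetical 4x3 patch, padded and scaled to an 8x8 network input.
patch = np.arange(12, dtype=np.uint8).reshape(4, 3)
square = preprocess_patch(patch, w=8)
```

The longer side (height 4) is scaled to 8 and the width becomes 6, so one background column is added on each side.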

12. For each preprocessed patch B'_{k,s}^(c), use the method above to identify the identity of the blurred coded marker it contains, denoted id_{k,s}^(c).

13. Screen the identified coded markers as follows:

a) For every id∈ID: if there exists some k∈{1,2,...,K} for which the marker id is not identified in both the left and the right image of the k-th shot, mark this id as invalid.

b) For every id∈ID not yet marked invalid: if there exists some k∈{1,2,...,K} for which the corresponding left and right image points do not satisfy the epipolar constraint within the threshold Tp, mark this id as invalid.

c) Remove all invalid identity numbers from ID; the remaining set is still denoted ID.

14. For each coded marker identity id∈ID, compute initial values for the start and end endpoints of its motion path in each exposure. The steps are:

a) For each k∈{1,2,...,K} and each c∈{0,1} there exists some s such that id_{k,s}^(c)=id. From the corresponding patch centers in the two views and the two camera matrices P0 and P1, reconstruct the initial spatial position M_{id,k} of marker id at the k-th moment, with three-dimensional coordinates (x_{id,k}, y_{id,k}, z_{id,k})^T.
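Step a) above is a standard two-view triangulation. A minimal linear (DLT) version, with hypothetical camera matrices standing in for P0 and P1:

```python
import numpy as np

def triangulate(P0, P1, uv0, uv1):
    """Linear (DLT) triangulation of one point from two views.

    P0, P1 : 3x4 camera projection matrices.
    uv0, uv1 : (u, v) pixel coordinates of the same marker in each view.
    Returns the inhomogeneous 3D point.
    """
    A = np.vstack([
        uv0[0] * P0[2] - P0[0],
        uv0[1] * P0[2] - P0[1],
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Hypothetical calibrated pair: identical intrinsics, second camera
# shifted 0.2 units along x (a stand-in for the calibrated P0, P1).
f = 800.0
K = np.array([[f, 0, 0], [0, f, 0], [0, 0, 1]])
P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

X_true = np.array([1.0, 2.0, 5.0])
uv0 = P0 @ np.append(X_true, 1.0); uv0 = uv0[:2] / uv0[2]
uv1 = P1 @ np.append(X_true, 1.0); uv1 = uv1[:2] / uv1[2]

X_rec = triangulate(P0, P1, uv0, uv1)
```

With exact (noise-free) observations the SVD null vector recovers the point exactly; with noisy patch centers it gives the least-squares estimate used as the initial position M_{id,k}.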

b) Interpolate M_{id,k}, k=1,2,...,K, to generate a degree-K B-spline curve SP_id. SP_id passes through every M_{id,k}.

c) Compute the arc length of SP_id, denoted σ_id, and reparameterize SP_id by approximate arc length. The reparameterized curve is written V_id(t), t∈[0,σ_id], so that V_id(0)=M_{id,1} and V_id(σ_id)=M_{id,K}.

d) On SP_id, the parameter corresponding to each M_{id,k} is t_{id,k}, i.e., M_{id,k}=V_id(t_{id,k}), k=1,2,...,K.
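Steps b)-d) can be sketched with SciPy. This uses a cubic interpolating spline rather than the patent's degree-K curve, and approximates the arc length by dense sampling; all data here are hypothetical:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Hypothetical coarse marker positions M_{id,k} at K = 6 moments.
K = 6
t = np.linspace(0.0, 1.0, K)
pts = np.stack([np.cos(t), np.sin(t), t])        # 3 x K array

# Interpolating B-spline through every point (s=0 forces interpolation).
tck, u = splprep(pts, s=0, k=3)

# Approximate arc length sigma_id by dense sampling.
uu = np.linspace(0.0, 1.0, 2001)
xyz = np.array(splev(uu, tck))                   # 3 x 2001
seg = np.linalg.norm(np.diff(xyz, axis=1), axis=0)
sigma = seg.sum()

# Approximate arc-length reparameterisation:
# map cumulative length back to the original spline parameter.
cum = np.concatenate([[0.0], np.cumsum(seg)])

def V(s):
    """Point on the curve at arc length s (approximate)."""
    return np.array(splev(np.interp(s, cum, uu), tck))
```

The helix arc (cos t, sin t, t) over t∈[0,1] has true length √2, which the sampled approximation matches closely, and V(0) returns the first interpolated point.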

e) Compute the half-window size h.

f) For k=1, compute the endpoint initial values from t_{id,1} and h, clamped to the start of the curve.

g) For k=2,3,...,K-1, compute the endpoint initial values from t_{id,k} and h.

h) For k=K, compute the endpoint initial values from t_{id,K} and h, clamped to the end of the curve.

15. Construct the static virtual imaging model of a coded marker moving in space. The steps are:

a) Place the camera C and the coded marker E in the same three-dimensional coordinate system.

b) Let the imaging matrix of camera C be P, with no distortion.

c) The image of the coded marker id is denoted M: a binary image with gray values 0 or 1 and width and height both l pixels. Within its own plane, in counterclockwise order, the homogeneous coordinates of its four vertices are denoted q1, q2, q3, q4.

d) The marker E to be imaged is a square plane with side length l, with the pattern M pasted on one face, filling the square without distortion.

e) M(u) is the image M expressed as a function. The parameter u is a homogeneous coordinate (u,v,s)^T, with corresponding inhomogeneous coordinate (u/s, v/s)^T; M(u) is the gray value of the pixel at that position in the image. If the coordinate is non-integer, the gray value is obtained by interpolation; if it falls outside the image, the function returns 0.

f) The position of E in space is completely determined by the coordinates of its four vertices. With the marker's pattern facing the observer, in counterclockwise order the vertices are Q1(v,x), Q2(v,x), Q3(v,x), Q4(v,x), where the parameter vectors v=(α,β,γ)^T and x=(x,y,z)^T determine the attitude and the position, respectively.

g) Each Qi(v,x) is obtained from the corresponding qi by a coordinate transformation: α,β,γ determine a rotation matrix R, and x,y,z determine a translation matrix T; for i=1,2,3,4, Qi(v,x)=T·R·qi.

h) The homogeneous coordinates of the image points of E's four vertices on the image plane of camera C are zi=P·Qi(v,x).

i) Construct the homography matrix H such that, in the sense of homogeneous coordinates, H·zi=qi for i=1,2,3,4.

j) In this position and attitude, the marker is imaged in camera C as I_{M,v,x,P}, with functional form I_{M,v,x,P}(u)=M(Hu), where u=(u,v,1)^T is the homogeneous coordinate of pixel position (u,v)^T.
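Step i) constructs a homography from four point correspondences. A minimal direct-linear-transform (DLT) sketch, where the unit square and the matrix H_true are hypothetical test data rather than the patent's zi and qi:

```python
import numpy as np

def homography_from_points(src, dst):
    """DLT estimate of H with dst_i ~ H @ src_i for four (or more) 2D points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # u*(h3.p) - (h1.p) = 0  and  v*(h3.p) - (h2.p) = 0
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    return Vt[-1].reshape(3, 3)   # null vector, reshaped to 3x3

# Hypothetical example: the unit square mapped by a known homography.
H_true = np.array([[1.0,   0.2,  3.0],
                   [0.1,   0.9, -1.0],
                   [0.001, 0.0,  1.0]])
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = []
for p in src:
    q = H_true @ np.array([p[0], p[1], 1.0])
    dst.append((q[0] / q[2], q[1] / q[2]))

H_est = homography_from_points(src, dst)
H_est /= H_est[2, 2]   # fix the projective scale for comparison
```

Four exact correspondences determine H up to scale, so the SVD null vector recovers H_true after normalization.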

16. Using the I_{M,v,x,P} constructed in the previous step, construct the blurred imaging model of a moving coded marker. The steps are:

a) Choose the discretization granularity N, a natural number, generally between 100 and 1000. The larger N is, the closer the synthesized blur is to the real effect.

b) Under the assumption of a short exposure, the attitude angles v=(α,β,γ)^T of the marker are taken as constant during the motion.

c) Under the same assumption, the motion is restricted to a uniform-velocity straight-line segment from the start point x1=(x1,y1,z1)^T to the end point x2=(x2,y2,z2)^T.

d) The blurred image is the average of N sharp renderings distributed uniformly along this segment: I_blur(u) = (1/N)·Σ_{i=1}^{N} I_{M,v,xi,P}(u), where xi = x1 + ((i-1)/(N-1))·(x2-x1).
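The averaging in d) can be sketched directly. The one-dimensional "renderer" below is a hypothetical stand-in for the sharp imaging model I_{M,v,x,P}:

```python
import numpy as np

def blur_by_motion(render, x1, x2, N=200):
    """Average N renderings along the straight path x1 -> x2 (constant
    attitude): the discrete counterpart of integrating over the exposure."""
    acc = None
    for i in range(N):
        a = i / (N - 1)
        img = render(x1 + a * (x2 - x1))
        acc = img if acc is None else acc + img
    return acc / N

# Toy renderer: a unit-brightness 1-pixel dot at position x on a 1-D sensor.
W = 32
def render_dot(x):
    img = np.zeros((W,))
    img[int(round(x))] = 1.0
    return img

# A dot sweeping from pixel 5 to pixel 14 smears into a streak.
blurred = blur_by_motion(render_dot, x1=5.0, x2=14.0, N=1000)
```

Each sharp frame carries the same total intensity, so the average preserves it while spreading it along the motion path, exactly as a real exposure does.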

17. For each coded marker identity id∈ID and each exposure number k=1,2,...,K, fit the motion path. The steps are:

a) Denote the image of marker id by M. Denote by J_k^(c), c=0,1, the undistorted left and right images of the k-th shot, and by B_{k,s}^(c) the patch containing the motion-blurred image of this marker, where c=0,1 corresponds to the left and right camera and s is the index of the patch within that image.

b) Choose the optimization variables θ1,θ2,θ3,λ1,λ2,λ3,μ1,μ2,μ3,ω1,ω2.

c) The initial values of θ1,θ2,θ3 are chosen at random in [0,2π).

d) The initial value of (λ1,λ2,λ3) is the start-endpoint initial value computed in step 14.

e) The initial value of (μ1,μ2,μ3) is the end-endpoint initial value computed in step 14.

f) ω1 is the image gain, with initial value 1.

g) ω2 is the image offset, with initial value 0.

h) Set v=(θ1,θ2,θ3)^T, x1=(λ1,λ2,λ3)^T, x2=(μ1,μ2,μ3)^T.

i) Define the mask function m_{k,s}^(c)(u), where c,k,s have the same meaning as in B_{k,s}^(c). The parameter u=(u,v,s)^T is a pixel coordinate; the function returns 1 when the coordinate falls within the pixel range occupied by the patch B_{k,s}^(c) in its parent image, and 0 otherwise.

j) Compute the optimization objective f = Σ_{c=0,1} || m_{k,s}^(c)·(ω1·I_blur^(c) + ω2 − J_k^(c)) ||², where I_blur^(c) is the blurred imaging model of step 16 rendered with camera matrix P_c, and ||·||² denotes the squared norm: when W is an image, ||W||² is the sum of the squared gray values of all its pixels.

k) Minimize f over the parameters θ1,θ2,θ3,λ1,λ2,λ3,μ1,μ2,μ3,ω1,ω2.

l) Assign new random values to θ1,θ2,θ3 and repeat steps b) to k); take the run with the smallest optimized value of f as the final optimization result. The number of repetitions is not less than 16.
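The random-restart loop of steps c) and l) can be sketched with SciPy's local optimizer. The objective and its dimensionality here are purely illustrative, not the patent's f:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def multistart(f, x0_base, n_starts=16):
    """Run a local optimisation from several random angle initialisations
    and keep the run with the smallest objective value."""
    best = None
    for _ in range(n_starts):
        x0 = x0_base.copy()
        x0[:3] = rng.uniform(0.0, 2.0 * np.pi, size=3)  # random theta_1..theta_3
        res = minimize(f, x0, method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

# Toy objective in (theta1, theta2, theta3, lambda1): the angle terms
# vanish at theta = pi, the last term at lambda = 2.
def f(p):
    return np.sum((np.cos(p[:3]) + 1.0) ** 2) + (p[3] - 2.0) ** 2

best = multistart(f, x0_base=np.array([0.0, 0.0, 0.0, 0.0]))
```

Keeping only the best of many randomly initialized runs is a simple guard against the local optimizer stalling far from the global minimum, which is the role the 16-plus repetitions play in step l).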

m) When the computation finishes, within this exposure time the trajectory of the coded marker is the straight-line segment from (λ1,λ2,λ3) to (μ1,μ2,μ3), and its attitude parameters are (θ1,θ2,θ3).

The above is only a preferred embodiment of the present invention. It should be noted that those of ordinary skill in the art can make further improvements without departing from the principle of the present invention, and such improvements shall also fall within the scope of protection of the present invention.

Claims (10)

1.一种运动模糊效应下的视觉特征三维重建方法,其特征在于:包括如下步骤1. A visual feature three-dimensional reconstruction method under motion blur effect, is characterized in that: comprise the following steps 步骤一:对要使用的相机进行标定;Step 1: Calibrate the camera to be used; 步骤二:在被测物体表面布置编码标记点;Step 2: Arrange coded marking points on the surface of the measured object; 步骤三:获取运动模糊图像;Step 3: Obtain a motion blurred image; 步骤四:识别图像中的编码标记点的身份;Step 4: identify the identity of the coded marker point in the image; 步骤五:针对同一个编码标记点,借助于在不同时刻拍摄的时间序列图像,对其在不同时刻的空间位置进行粗定位并且拟合成样条曲线,作为空间运动轨迹的初始值;Step 5: For the same coded marker point, with the help of time series images taken at different times, roughly locate its spatial position at different times and fit it into a spline curve as the initial value of the spatial trajectory; 步骤六:构造编码标记点运动的模糊成像模型;Step 6: Constructing a fuzzy imaging model for coding marker motion; 步骤七:在每次曝光时间内,根据模糊成像模型,优化求解运动路径和姿态。Step 7: In each exposure time, according to the fuzzy imaging model, optimize and solve the motion path and attitude. 2.如权利要求1所述的运动模糊效应下的视觉特征三维重建方法,其特征在于:所述步骤一中,标定一对相机,分别记为左相机C0和右相机C1,它们的成像矩阵分别表示为P0和P1,两个相机镜头的畸变系数向量分别表示为d(c),c=0,1;选择阈值Tp用于极线约束检测。2. The method for three-dimensional reconstruction of visual features under motion blur effect as claimed in claim 1, characterized in that: in said step 1, a pair of cameras are calibrated, respectively marked as left camera C 0 and right camera C 1 , their The imaging matrices are respectively denoted as P 0 and P 1 , and the distortion coefficient vectors of the two camera lenses are respectively denoted as d (c) , c=0,1; the threshold T p is selected for epipolar constraint detection. 3.如权利要求2所述的运动模糊效应下的视觉特征三维重建方法,其特征在于:所述步骤二中:3. The visual feature three-dimensional reconstruction method under the motion blur effect as claimed in claim 2, is characterized in that: in the described step 2: (a).选择需要使用的编码标记点的身份编号,该身份编号为一个自然数,取值在1到N0之间,N0为全套编码标记点的总数目,记选定的编码标记点的集合为N为选定的编码标记点的总数目;(a). 
Select the identity number of the coded point to be used. The identity number is a natural number with a value between 1 and N 0. N 0 is the total number of the complete set of coded points. Record the selected coded point The set of N is the total number of selected coding points; (b).对中每一个idn,n=1,2,...,N,准备编码点的图像Mn,所有的图像具有相同的像素尺寸,宽高像素数目均记为z;(b). Yes For each id n , n=1,2,...,N, the image M n of the encoding point is prepared, all images have the same pixel size, and the number of width and height pixels is recorded as z; (c).根据Mn制作实际的编码标记点贴纸,边长为l;(c). Make actual coded marking point stickers according to M n , and the side length is l; (d).粘贴实际的编码标记点到被测物体表面。(d). Paste the actual coded markers on the surface of the measured object. 4.如权利要求3所述的运动模糊效应下的视觉特征三维重建方法,其特征在于:所述步骤三中:4. The visual feature three-dimensional reconstruction method under the motion blur effect as claimed in claim 3, is characterized in that: in the described step 3: (a).获取运动模糊图像组,即由一对相机在多个时刻获取的成对图像,由于运动模糊效应,每幅图片中的编码标记点成像有不同程度的模糊,用分别表示左(c=0)右(c=1)相机拍摄的第k幅图像,K为总的拍摄次数,每次拍摄持续时间为Δt,每次拍摄的起始时间为Tk,k=1,2,...K,相邻两次拍摄的曝光时间没有重叠,前一次曝光结束到下一次曝光开始,之间的间隔时间相同,记为ΔT;(a). Obtain a motion blur image group, that is, a pair of images acquired by a pair of cameras at multiple times. Due to the motion blur effect, the imaging of the coding markers in each picture has different degrees of blurring. Use respectively represent the kth image taken by the left (c=0) and right (c=1) cameras, K is the total number of shots, the duration of each shot is Δt, and the starting time of each shot is T k , k= 1,2,...K, the exposure time of two adjacent shots does not overlap, and the interval between the end of the previous exposure and the start of the next exposure is the same, which is recorded as ΔT; (b).根据相机镜头的畸变系数向量d(c),修正的镜头畸变效应,结果记为 (b). 
According to the distortion coefficient vector d (c) of the camera lens, correct The lens distortion effect, the result is recorded as (c).对每幅图像进行分割,使得分割以后得到的每个小块恰好包含一个编码标记点的完整的模糊图像,中包含的图像小块的个数记为分割出的图像小块记为其中c=0,1分别对应左右相机,k=1,2,...K对应拍摄次序,对应中的第s个小块,的中心在中的像素坐标为 (c). For each image Carry out segmentation so that each small block obtained after segmentation exactly contains a complete fuzzy image of a coded marker point, The number of image patches contained in is denoted as The segmented image blocks are denoted as Where c=0,1 correspond to the left and right cameras respectively, k=1,2,...K corresponds to the shooting sequence, correspond The sth small block in , at the center of The pixel coordinates in 5.如权利要求4所述的运动模糊效应下的视觉特征三维重建方法,其特征在于:所述步骤四中:5. The visual feature three-dimensional reconstruction method under the motion blur effect as claimed in claim 4, is characterized in that: in described step 4: (a).对图像小块进行预处理,使用一个深度卷积网络MBCNet来识别运动模糊的编码标记点的身份,构建该网络必须为输入层指定宽高尺寸,用w代表输入层需要的图像的宽度和高度,单位为像素;(a). For small image blocks For preprocessing, use a deep convolutional network MBCNet to identify the identity of the motion-blurred coded markers. To build this network, you must specify the width and height dimensions for the input layer. Use w to represent the width and height of the image required by the input layer, in pixels ; (b).对每一个图像小块进行尺寸预处理,设每个小块的宽度为wH像素,高度为wV像素,(1)如果wH=wV=w则不需要处理,(2)如果max{wH,wV}≠w则缩放图像小块倍,然后在图像小块上下或者左右分别对称地添加与背景相同灰度的空白区域,使得图像宽度高度都为w像素,预处理以后的图像小块记作 (b). 
Perform size preprocessing on each small image block, assuming that the width of each small block is w H pixels, and the height is w V pixels, (1) if w H =w V =w then no processing is required, ( 2) If max{w H ,w V }≠w, scale the small image block times, and then symmetrically add a blank area of the same gray level as the background on the top, bottom or left and right of the small image, so that the width and height of the image are both w pixels, and the preprocessed image small block is denoted as (c).对每一个识别其中包含的模糊编码标记点的身份,记为表示识别出的所有编码点的集合。(c). For each Identify the identity of the fuzzy coded markers contained in it, denoted as use Represents the set of all codepoints identified. 6.如权利要求5所述的运动模糊效应下的视觉特征三维重建方法,其特征在于:所述步骤五中:6. The visual feature three-dimensional reconstruction method under the motion blur effect as claimed in claim 5, characterized in that: in the step five: (a).对识别出的编码标记点进行筛选;(a). Screening the identified coding markers; (b).对每一个编码标记点身份id∈ID计算拟合起始和终止端点初值 (b). Calculate the initial value of the fitting start and end endpoints for each coded marker identity id∈ID 7.如权利要求6所述的运动模糊效应下的视觉特征三维重建方法,其特征在于:对识别出的编码标记点进行筛选,步骤为:7. The visual feature three-dimensional reconstruction method under the motion blur effect as claimed in claim 6, is characterized in that: the identified coded marker points are screened, the steps are: a)对所有id∈ID进行筛选,如果存在某个k∈{1,2,...,K},则将这个id标记为无效;a) Filter all id∈ID, if there is a certain k∈{1,2,...,K}, Then mark this id as invalid; b)对所有目前未被标记为无效的id∈ID进行筛选,如果存在某个k∈{1,2,...,K},它不在阈值Tp水平以下关于左右图像满足极线约束条件,则标记为无效;b) Screen all id∈IDs that are not currently marked as invalid, if there exists a certain k∈{1,2,...,K}, it is not below the threshold T p level and satisfies the epipolar constraint on the left and right images , it is marked as invalid; c) c) 8.如权利要求7所述的运动模糊效应下的视觉特征三维重建方法,其特征在于:对每一个编码标记点身份计算拟合起始和终止端点初值步骤为:8. 
The visual feature three-dimensional reconstruction method under the motion blur effect as claimed in claim 7, is characterized in that: for each coding mark point identity Calculate the initial value of the fitting start and end endpoints The steps are: a)对每一个k∈{1,2,...,K},对每一个c∈{0,1},存在某个使得根据以及两个相机矩阵P1,P2重建出第k个时刻编码标记点id的空间位置初值Mid,k,其三维坐标为(xid,k,yid,k,zid,k)Ta) For each k∈{1,2,...,K}, for each c∈{0,1}, there exists some make according to And two camera matrices P 1 , P 2 reconstruct the initial value M id,k of the spatial position of the encoding marker point id at the kth moment, and its three-dimensional coordinates are (x id,k ,y id,k ,z id,k ) T ; b)根据Mid,k,k=1,2,...,K插值生成K次B样条曲线SPid,SPid通过了每个Mid,k,SPid的参数方程表示为 b) Generate K times B-spline curve SP id according to M id,k ,k=1,2,...,K interpolation, SP id passes each M id,k , and the parameter equation of SP id is expressed as c)计算SPid的弧长,记为σid,并且将SPid进行近似弧长参数化,重新参数化以后曲线的方程记为Vid(t),t∈[0,σid],此时有Vid(0)=Mid,1,Vidid)=Mid,Kc) Calculate the arc length of SP id , denoted as σ id , and parameterize the approximate arc length of SP id , after reparameterization, the equation of the curve is denoted as V id (t),t∈[0,σ id ], here When V id (0)=M id,1 ,V idid )=M id,K ; d)在SPid上每个Mid,K对应的参数为tid,k,即Mid,k=Vid(tid,k),k=1,2,...,K;d) The parameter corresponding to each M id,K on SP id is t id,k , that is, M id,k =V id (t id,k ),k=1,2,...,K; e)计算半窗口尺寸 e) Calculate the half window size f)对k=1,计算 f) For k=1, calculate g)对k=2,3,...,K-1,计算 g) For k=2,3,...,K-1, calculate h)对k=K,计算 h) For k=K, calculate 9.如权利要求8所述的运动模糊效应下的视觉特征三维重建方法,其特征在于:9. 
The visual feature three-dimensional reconstruction method under the motion blur effect as claimed in claim 8, is characterized in that: 在步骤六中按如下方式构造清晰成像模型:In step six, construct a clear imaging model as follows: a)设置相机C和编码点E位于同一个三维空间坐标系下;a) Set the camera C and the encoding point E to be located in the same three-dimensional space coordinate system; b)设置相机C的成像矩阵为P,无畸变;b) Set the imaging matrix of camera C to P, without distortion; c)编码点id的图像记为M,二值图像,灰度取值为0或者1,长宽都是l像素,在其自身的平面内,按照逆时针方向,四个顶点的齐次坐标分别为 c) The image of the coded point id is denoted as M, a binary image, the grayscale value is 0 or 1, and the length and width are l pixels. In its own plane, according to the counterclockwise direction, the homogeneous coordinates of the four vertices respectively d)待成像的编码点E为一个边长为l的正方形平面,一面贴有图案M,无畸变地填充满正方形;d) The encoding point E to be imaged is a square plane with a side length l, and a pattern M is pasted on one side, filling the square without distortion; e)M(u)为用函数形式表示的编码点的图像M,其中参数u为齐次坐标(u,v,s)T,其对应的非齐次坐标为M(u)表示图像上位置处的像素的灰度值;e) M(u) is the image M of the coding point expressed in the form of a function, where the parameter u is the homogeneous coordinate (u, v, s) T , and its corresponding non-homogeneous coordinate is M(u) represents the position on the image The gray value of the pixel at; f)E在空间的位置完全由四个顶点的坐标决定,当编码点的图像朝向观察者时,按照逆时针方向,四个顶点依次为Q1(v,x),Q2(v,x),Q3(v,x),Q4(v,x),其中参数向量v=(α,β,γ)T,x=(x,,y,z)T,分别决定姿态和位置;f) The position of E in space is completely determined by the coordinates of the four vertices. 
when the image of the coded point faces the observer, the four vertices, taken counterclockwise, are Q_1(v,x), Q_2(v,x), Q_3(v,x), Q_4(v,x), where the parameter vectors v = (α, β, γ)^T and x = (x, y, z)^T determine the attitude and the position respectively; g) each of Q_1(v,x), Q_2(v,x), Q_3(v,x), Q_4(v,x) is obtained from the corresponding pattern vertex by a coordinate transformation, in which α, β, γ determine the rotation matrix and x, y, z determine the translation, for i = 1, 2, 3, 4; h) the homogeneous coordinates of the image points of the four vertices of E on the image plane of camera C are z_i = P Q_i(v,x); i) a homography matrix H is constructed such that, in the sense of homogeneous coordinates, it maps the image points of the four vertices to the corresponding vertices of the pattern M; j) at this position and attitude, the coded point is imaged in camera C as I_M,v,x,P, whose functional form is I_M,v,x,P(u) = M(Hu), where u = (u, v, 1)^T is the homogeneous coordinate of the pixel position (u, v)^T. 10. The method for three-dimensional reconstruction of visual features under motion blur effects according to claim 9, characterized in that in step seven: (a) the motion-blur imaging model of the coded point is constructed using the I_M,v,x,P built in the previous step; (b) for each coded marker point identity and each exposure number k = 1, 2, ..., K, the motion path is fitted.
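Step i) of claim 9 constructs a homography H relating the pattern plane to its projection. Given the four vertex correspondences (the pattern corners and their image points z_i), the standard way to recover such an H is the direct linear transform (DLT); the sketch below is a generic numpy DLT under that assumption, not the patent's specific construction.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate H (3x3, scaled so H[2,2] = 1) such that, in homogeneous
    coordinates, dst_i ~ H @ (x_i, y_i, 1)^T for each (x_i, y_i) in src.
    Four non-degenerate correspondences determine H exactly."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on the
        # nine entries of H (stacked row-major into the vector h).
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # h spans the null space of A; take the right-singular vector that
    # corresponds to the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

With H in hand, the sharp rendering of step j) is literally I(u) = M(Hu): each pixel's homogeneous coordinate is pushed through H and the pattern is sampled at the result.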
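The motion-blur model of claim 10 step (a) treats the recorded image as the average (exposure integral) of the sharp rendering I_M,v,x,P over poses sampled along the motion path. As a minimal, hedged illustration of that averaging, the snippet below reduces the pose path to integer pixel shifts; the full model would instead warp by the pose-dependent homography at each sample.

```python
import numpy as np

def blur_along_path(sharp, shifts):
    """Discretize the exposure integral: average the sharp image warped to
    each sampled pose; here a 'pose' is only a horizontal pixel shift."""
    acc = np.zeros_like(sharp, dtype=float)
    for dx in shifts:
        acc += np.roll(sharp, dx, axis=1)
    return acc / len(shifts)
```

Step (b) then fits the path parameters by comparing such synthesized blurred images against the observed ones.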
CN201710321151.0A 2017-05-09 2017-05-09 Visual feature three-dimensional reconstruction method under motion blur effect Expired - Fee Related CN107270875B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710321151.0A CN107270875B (en) 2017-05-09 2017-05-09 Visual feature three-dimensional reconstruction method under motion blur effect

Publications (2)

Publication Number Publication Date
CN107270875A 2017-10-20
CN107270875B CN107270875B (en) 2020-04-24

Family

ID=60073863

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710321151.0A Expired - Fee Related CN107270875B (en) 2017-05-09 2017-05-09 Visual feature three-dimensional reconstruction method under motion blur effect

Country Status (1)

Country Link
CN (1) CN107270875B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1995004331A1 (en) * 1993-08-03 1995-02-09 Apple Computer, Inc. Three-dimensional image synthesis using view interpolation
US7023922B1 (en) * 2000-06-21 2006-04-04 Microsoft Corporation Video coding system and method using 3-D discrete wavelet transform and entropy coding with motion information
US20100149184A1 (en) * 2008-12-17 2010-06-17 Disney Enterprises, Inc. System and method providing motion blur to rotating objects
CN101750029A (en) * 2008-12-10 2010-06-23 中国科学院沈阳自动化研究所 Characteristic point three-dimensional reconstruction method based on trifocal tensor
CN106254722A (en) * 2016-07-15 2016-12-21 北京邮电大学 A kind of video super-resolution method for reconstructing and device

Non-Patent Citations (1)

Title
SHI Yu et al., "3D reconstruction based on SURF feature registration and deblurring of multi-viewpoint images", Optics & Optoelectronic Technology *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN114299172A (en) * 2021-12-31 2022-04-08 广东工业大学 A plane coding target for vision system and its real-time pose measurement method
CN114299172B (en) * 2021-12-31 2022-07-08 广东工业大学 A plane coding target for vision system and its real-time pose measurement method
US11699244B2 (en) 2021-12-31 2023-07-11 Guangdong University Of Technology Planar coding target for vision system and real-time pose measurement method thereof
CN114757993A (en) * 2022-06-13 2022-07-15 中国科学院力学研究所 Motion and parameter identification method and system for schlieren image

Also Published As

Publication number Publication date
CN107270875B (en) 2020-04-24

Similar Documents

Publication Publication Date Title
Casser et al. Depth prediction without the sensors: Leveraging structure for unsupervised learning from monocular videos
CN103047943B (en) Based on the door skin geomery detection method of single projection coded structured light
CN113674360B (en) A covariant-based method for line structured light and light plane calibration
Ma et al. Deformable neural radiance fields using rgb and event cameras
CN113947589A (en) Missile-borne image deblurring method based on countermeasure generation network
CN110197505B (en) Binocular Stereo Matching Method for Remote Sensing Images Based on Deep Network and Semantic Information
CN111833237B (en) Image registration method based on convolutional neural network and local homography transformation
CN111524233A (en) Three-dimensional reconstruction method for dynamic target of static scene
CN106780546B (en) Identification method of motion blur code points based on convolutional neural network
CN105389808A (en) Camera self-calibration method based on two vanishing points
WO2021219835A1 (en) Pose estimation method and apparatus
CN111768452A (en) A non-contact automatic mapping method based on deep learning
CN108225216A (en) Structured-light system scaling method and device, structured-light system and mobile equipment
CN108122254A (en) Three-dimensional image reconstruction method, device and storage medium based on structure light
CN112132907A (en) A camera calibration method, device, electronic device and storage medium
ES2906626T3 (en) Image processing device and two-dimensional image generation program
CN119477770B (en) Three-dimensional Gaussian splatter-based orthographic image generation method, device and storage medium
CN116912334B (en) Phase mapping high-precision projector calibration method based on grating fringe projection
CN109724537B (en) Binocular three-dimensional imaging method and system
Feng et al. Ae-nerf: Augmenting event-based neural radiance fields for non-ideal conditions and larger scenes
CN114862866A (en) Calibration plate detection method and device, computer equipment and storage medium
CN120451011A (en) A dynamic scene restoration method based on four-dimensional Gaussian sputtering
Lee et al. Mfos: Model-free & one-shot object pose estimation
CN117274066B (en) An image synthesis model, method, equipment and storage medium
CN107270875B (en) Visual feature three-dimensional reconstruction method under motion blur effect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20200424