CN101763633A - Visible light image registration method based on salient region - Google Patents
Visible light image registration method based on salient region
- Publication number
- CN101763633A CN101763633A CN200910088975A CN200910088975A CN101763633A CN 101763633 A CN101763633 A CN 101763633A CN 200910088975 A CN200910088975 A CN 200910088975A CN 200910088975 A CN200910088975 A CN 200910088975A CN 101763633 A CN101763633 A CN 101763633A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention relates to the field of image registration, and in particular to a visible light image registration method based on salient regions, comprising: (1) loading the images; (2) extracting the salient regions of the images; (3) computing a region feature descriptor for each extracted salient region and matching the salient regions according to the similarity of their descriptors; (4) performing local rigid registration on the salient region pairs preliminarily matched in step (3); (5) using the centers of the locally rigidly registered salient regions as control points to perform a global quadratic polynomial transformation registration. The method of the invention is a fast, accurate, and robust automatic image registration method and has substantial application value in image registration.
Description
Technical Field
The invention relates to image processing and pattern recognition, and in particular to an automatic image registration technique based on salient regions.
Background Art
The mainstream automatic image registration methods currently in use are mainly feature-point-based registration, gray-value-based registration, and mutual-information-based registration. Each still has shortcomings. With feature-point-based registration, feature points are difficult to extract accurately from visible light images of poor quality. Gray-value-based registration requires the gray values of the two images to be consistent, so its accuracy is low for images affected by illumination and other environmental factors. Mutual-information-based registration is comparatively slow and may fall into local extrema, in which case an accurate registration result cannot be obtained. For low-quality images, therefore, manual registration is still widely used; its success rate and accuracy are relatively high, but it increases the operator's burden and is comparatively slow.
Summary of the Invention
In view of the defects of the prior art, the object of the present invention is to provide a fast, accurate, and robust visible light image registration method based on salient regions.
To achieve this object, the present invention provides an automatic image registration method based on salient regions, comprising the following steps:
Step 1: load the two images to be registered on a computer, selecting one as the reference image and the other as the floating image;
Step 2: divide the reference image and the floating image into M×N rectangular regions and compute the local saliency function Ls(R) of each region R; fit Ls(R) with a Gaussian kernel to obtain the fitted local saliency value Fls(R); select the centers of the local extremum regions of Fls(R) as salient-region centers; for each salient-region center, compute the region radius from the distribution of Fls(R) in its neighborhood, thereby extracting the salient regions R of the reference and floating images;
Step 3: construct a 72-dimensional scale-invariant feature descriptor Lfd(R) for each extracted salient region R; define a distance function Dist(Lfd(R_1), Lfd(R_2)) to measure the similarity between two descriptors Lfd(R_1) and Lfd(R_2); for every candidate salient-region pair C(i, j) across the two images, compute the region matching similarity of C(i, j), and match the salient regions with a coarse-to-fine matching strategy;
Step 4: perform local rigid registration on each preliminarily matched salient-region pair Cmp(i, j), using a similarity measure based on the normalized correlation coefficient;
Step 5: for the locally rigidly registered regions, use cluster analysis to extract the center points of the accurately matched regions as control points for a global quadratic polynomial transformation registration, thereby achieving accurate registration of the two images.
Here, the local saliency function Ls(R) is: Ls(R) = Av(R)·Lge(R), where Av(R) is the normalized regional difference function of region R, expressed as Av(R) = σ/μ, σ being the standard deviation and μ the mean of region R; and Lge(R) is the gradient-field entropy of region R:

Lge(R) = −Σ_{i=1}^{36} p_i(R)·log p_i(R),

where p_i(R) is the proportion of the total gradient magnitude of region R contributed by the set of pixels whose gradient direction lies in the i-th angular sector:

p_i(R) = Σ_{X∈R_i} |g(X)| / Σ_{X∈R} |g(X)|,

R_i is the set of all pixels whose gradient direction lies in the i-th sector, |g(X)| is the gradient magnitude of pixel X, and X_i denotes a pixel in the set R_i.
The fitted local saliency value Fls(R) is computed by smoothing Ls over the grid of regions with a Gaussian kernel of σ = 1.5:

Fls(R_ab) = Σ_{i=1}^{M} Σ_{j=1}^{N} G_σ(i−a, j−b)·Ls(R_ij),

where G_σ is the Gaussian kernel with σ = 1.5, M is the number of rectangular regions along the X direction of the image, N is the number along the Y direction, a, i ∈ {1, 2, ..., M} are X-direction coordinates in the M×N array of rectangular regions, b, j ∈ {1, 2, ..., N} are Y-direction coordinates, and R_ab is the rectangular region at (a, b).
The region radius of each salient-region center is computed as follows: centered on the rectangular region R_ab containing the salient-region center, construct the largest square set Ω of rectangular regions in which every region satisfies

Fls(R_ij) ≥ λ·Fls(R_ab),

where Fls(R_ij) is the fitted local saliency value of region R_ij and λ is the rectangular-region radius control parameter, empirically set to 0.75; the smaller of the length and width of Ω is selected as the salient-region radius.
The 72-dimensional scale-invariant feature descriptor is constructed as: Lfd(R) = (p_1(R), ..., p_36(R), da_1(R), ..., da_36(R)), where p_i(R) is the proportion of the total gradient magnitude of region R contributed by the pixels whose gradient direction lies in the i-th sector, and da_i(R) ∈ [0, 2π) is the direction angle from the geometric center of that pixel set to the salient-region center. A distance function Dist(Lfd(R_1), Lfd(R_2)) is then defined to measure the similarity between two descriptors Lfd(R_1) and Lfd(R_2); it combines the angles Eud(da_i(R_1), da_i(R_2)) between the corresponding i-th direction angles of R_1 and R_2 with a symmetric divergence between the magnitude distributions p_i(R_1) and p_i(R_2).
Matching of the salient regions comprises: 1) traverse every possible salient-region match C(i, j) between the two images, where i denotes the i-th salient region R_i of the reference image and j denotes the j-th salient region R_j of the floating image; a pair C(i, j) that satisfies the coarse-matching condition, expressed with the minimum function Min(·), the maximum function Max(·), and the coarse-matching control parameter T (empirically 0.6), is accepted as a coarsely matched salient-region pair Cmp(i, j);
2) for each coarse-match pair Cmp(i, j), compute the similarity S(i, j) between R_i and R_j and the rotation angle θ_ij as follows:

S(i, j) = Dist(Lfd(R_i), Lfd(R_j^k)),

k = argmin_k Dist(Lfd(R_i), Lfd(R_j^k)), k ∈ {0, 1, ..., 35},

where R_j^k is the new region obtained by rotating R_j counterclockwise by 10·k degrees; each coarse-match pair Cmp(i, j) determines three global rigid transformation parameters: a two-dimensional translation (determined by the centers of R_i and R_j) and the rotation parameter θ_ij;
3) sort all Cmp(i, j) in ascending order of S(i, j) and take the first 2000 coarse-match pairs (or all of them if fewer were extracted) as the input sample set; set a suitable intra-class distance threshold and cluster the Cmp(i, j) in the space of global rigid transformation parameters; select the class with the most members as the set of preliminarily matched salient-region pairs F(i, j); then remove duplicate regions according to the values of S(i, j), so that F(i, j) contains no repeated regions and the subsequent computation is reduced.
The local rigid registration takes each preliminarily matched salient-region pair F(i, j) and performs local rigid registration with the center positions of the salient regions R_i, R_j and the rotation angle θ_ij as the initial registration parameters.
Achieving accurate registration of the two images comprises: for the locally rigidly registered regions, setting a finer intra-class distance threshold and performing cluster analysis in the space of global rigid transformation parameters to extract the accurately matched region pairs; the center points of the accurately matched regions are then used as control points for a global quadratic polynomial transformation registration, achieving accurate registration of the two images.
Beneficial effects of the invention: the invention extracts the salient regions of the images, computes region feature descriptors, matches the salient regions with a coarse-to-fine strategy, performs local rigid registration on the preliminarily matched salient-region pairs, and finally extracts the center points of the successfully registered regions as control points for a global quadratic polynomial transformation registration, achieving accurate registration of the two images. Because the region saliency function and the feature descriptor are well defined, and a coarse-to-fine strategy is used for both region matching and registration, the computational cost of the whole algorithm is greatly reduced, accurate registration is achieved even for low-quality images, and the algorithm is highly robust. Experimental results show that the method completes registration of ordinary images in about 4 s; registration of low-quality images takes longer, but accurate registration is still completed in about 10 s, and for 1548×1260 images the registration accuracy reaches 2 pixels. The method therefore has substantial application value.
Brief Description of the Drawings
Fig. 1 is a schematic flow chart of the method of the invention;
Fig. 2(a) is the floating image;
Fig. 2(b) is the reference image;
Fig. 2(c) shows the salient regions extracted from the floating image;
Fig. 2(d) shows the salient regions extracted from the reference image;
Fig. 2(e) is a schematic diagram of the preliminarily matched salient regions;
Fig. 3 is a schematic diagram of the gradient-direction partition;
Fig. 4(a) is a schematic diagram of the gradient distribution of a salient region R;
Fig. 4(b) is a schematic diagram of the first 36 dimensions of the salient-region feature descriptor;
Fig. 4(c) is a schematic diagram of the last 36 dimensions of the salient-region feature descriptor;
Fig. 5 is a schematic diagram of the floating image after registration;
Fig. 6 is a schematic diagram of the fused image.
Detailed Description of the Embodiments
To make the object, technical solution, and advantages of the present invention clearer, the invention is described in further detail below with reference to specific embodiments and the accompanying drawings.
Fig. 1 is a schematic flow chart of the method of the invention. Two images to be registered are loaded on computer 101, one selected as the reference image and the other as the floating image; computer 101 then registers the visible light images through four sequential processing units: a salient region extraction unit 102, which extracts the salient regions of the visible light images; a region feature descriptor computation and salient region matching unit 103, which computes the feature descriptors of the salient regions and matches them according to the similarity between descriptors; a local rigid registration unit 104, which performs local rigid registration on the preliminarily matched salient regions; and a global quadratic polynomial transformation registration unit 105, which extracts the center points of the locally rigidly registered regions as control points for a global quadratic polynomial transformation, achieving accurate registration of the images.
The salient region extraction unit 102 is implemented on computer 101, for example as a program written in C++, and extracts the salient regions of the two images according to the fitted local saliency value Fls(R) defined by us.
The region feature descriptor computation and salient region matching unit 103 is implemented on computer 101, for example as a program written in C++, and performs the following functions: for each extracted salient region, compute its 72-dimensional scale-invariant feature descriptor Lfd(R); define the distance function Dist(Lfd(R_1), Lfd(R_2)) measuring the similarity between two descriptors Lfd(R_1) and Lfd(R_2); for every candidate salient-region pair across the two images, compute the region matching similarity and match the salient regions with a coarse-to-fine strategy.
The local rigid registration unit 104 is implemented on computer 101, for example as a program written in C++, and performs the following function: for each preliminarily matched salient-region pair, perform local rigid registration using a similarity measure based on the normalized correlation coefficient.
The global quadratic polynomial transformation registration unit 105 is implemented on computer 101, for example as a program written in C++, and performs the following functions: for the locally rigidly registered regions, set a finer intra-class distance threshold and perform cluster analysis in the space of global rigid transformation parameters; extract the center points of the accurately matched regions as control points for a global quadratic polynomial transformation registration, achieving accurate registration of the two images.
The registration method of the present invention mainly comprises the following steps:
Step 1: load the two images to be registered; the computer reads the images, converts them into two-dimensional arrays, and stores them for processing by the subsequent units; select one as the reference image and the other as the floating image, as shown in Fig. 2(a) and Fig. 2(b).
Step 2: run the salient region extraction unit 102 to extract the salient regions of the two images.
Salient region extraction is completed in the following steps:
1. Divide the reference image and the floating image into M×N rectangular regions; the values of M and N depend on the image size (in our method, a 1548×1260 image is divided into 100×100 rectangular regions). Compute the local saliency function Ls(R) of each rectangular region R:

Ls(R) = Av(R)·Lge(R),
where Av(R) is the normalized regional difference function of region R, expressed as Av(R) = σ/μ, σ being the standard deviation and μ the mean of region R, and Lge(R) is the gradient-field entropy of region R:

Lge(R) = −Σ_{i=1}^{36} p_i(R)·log p_i(R),

where p_i(R) is the proportion of the total gradient magnitude of region R contributed by the set of pixels whose gradient direction lies in the i-th angular sector. The gradient direction of a pixel X is determined by its gradient vector g(X) = [G_x(X), G_y(X)]; as shown in Fig. 3, the whole G_xG_y plane is divided into 36 equal sectors. Thus

p_i(R) = Σ_{X∈R_i} |g(X)| / Σ_{X∈R} |g(X)|,

where R_i is the set of all pixels whose gradient direction lies in the i-th sector, |g(X)| is the gradient magnitude of pixel X, and X_i denotes a pixel in R_i. If the normalized regional difference function Av(R) is small, the pixel values of the region are distributed uniformly, the region is strongly homogeneous, and its saliency is weak; if Av(R) is large, the pixel-value distribution is complex, the region is strongly heterogeneous, and its saliency is pronounced. Likewise, if the region is homogeneous, the local gradient field is regular, the gradient-field entropy Lge(R) is small, and the saliency is low; if the region is heterogeneous, the local gradient field is complex, Lge(R) is large, and the saliency is enhanced. By combining the saliency measures Av(R) and Lge(R), Ls(R) measures the saliency level of a region more effectively.
2. After computing the local saliency function Ls(R), we fit it with a Gaussian kernel of σ = 1.5 to obtain the fitted local saliency value Fls(R):

Fls(R_ab) = Σ_{i=1}^{M} Σ_{j=1}^{N} G_σ(i−a, j−b)·Ls(R_ij),

where G_σ is the Gaussian kernel with σ = 1.5, M is the number of rectangular regions along the X direction of the image, N is the number along the Y direction, a, i ∈ {1, 2, ..., M} are X-direction coordinates in the M×N array of rectangular regions, and b, j ∈ {1, 2, ..., N} are Y-direction coordinates. Through this Gaussian fitting of Ls(R), Fls(R) reflects the saliency level of the neighborhood of region R and therefore measures the regional saliency of R more accurately, so we select the center points of the local extremum regions of Fls(R) as the salient-region centers.
3. After the salient-region centers have been extracted, the region radius of each center is computed from the distribution of Fls(R) in its neighborhood, as follows:

Centered on the rectangular region R_ab containing the salient-region center, construct the largest square set Ω of rectangular regions in which every region satisfies

Fls(R_ij) ≥ λ·Fls(R_ab),

where Fls(R_ij) is the fitted local saliency value of region R_ij and λ is the region radius control parameter, empirically set to 0.75. The smaller of the length of Ω (the number of pixels it spans in the X direction) and its width (the number of pixels in the Y direction) is selected as the salient-region radius, completing the extraction of the salient regions of the image, as shown in Fig. 2(c) and Fig. 2(d).
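Sub-steps 2 and 3 above — the Gaussian fitting of the saliency grid and the radius search — can be sketched as follows (an illustrative Python/NumPy sketch, not the patent's C++ implementation; the kernel truncation at 3σ and the edge padding are assumptions):

```python
import numpy as np

def fit_saliency(ls_grid, sigma=1.5):
    """Fls: Gaussian-smooth the M x N grid of Ls values (sigma = 1.5)."""
    radius = int(np.ceil(3 * sigma))  # truncation at 3 sigma is an assumption
    ax = np.arange(-radius, radius + 1)
    k = np.exp(-ax**2 / (2 * sigma**2))
    k /= k.sum()
    # separable convolution with edge padding
    padded = np.pad(ls_grid, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

def region_radius(fls, a, b, lam=0.75):
    """Largest square Omega around grid cell (a, b) in which every cell
    satisfies Fls(R_ij) >= lam * Fls(R_ab); returns its half-width in cells."""
    m, n = fls.shape
    thresh = lam * fls[a, b]
    r = 0
    while True:
        i0, i1 = a - (r + 1), a + (r + 1)
        j0, j1 = b - (r + 1), b + (r + 1)
        if i0 < 0 or j0 < 0 or i1 >= m or j1 >= n:
            break  # Omega would leave the grid
        if fls[i0:i1 + 1, j0:j1 + 1].min() < thresh:
            break  # some cell falls below lam * Fls(R_ab)
        r += 1
    return r
```

A center whose neighborhood stays salient grows a large Ω; the radius stops growing at the first cell that drops below λ·Fls(R_ab).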
Step 3: run the region feature descriptor computation and salient region matching unit 103 to match the salient regions of the two images.
Salient region matching is completed in the following steps:
1. For each salient region R, we traverse every pixel of R and compute its gradient magnitude and gradient direction. We then collect the gradient-vector distribution of R and the set of pixels in each gradient direction to construct a 72-dimensional scale-invariant feature descriptor Lfd(R):

Lfd(R) = (p_1(R), ..., p_36(R), da_1(R), ..., da_36(R)),

where p_i(R) is the proportion of the total gradient magnitude of region R contributed by the pixels whose gradient direction lies in the i-th sector, and da_i(R) ∈ [0, 2π) is the direction angle from the geometric center of that pixel set to the salient-region center C. The first 36 dimensions of Lfd(R) describe the distribution of pixel gradient magnitudes over the 36 gradient directions of region R (see the sketch in Fig. 4(b); owing to the image size, not all 36 directions are drawn). The last 36 dimensions describe the orientation, relative to the region center C, of the geometric center of the pixels of each of the 36 gradient directions (see Fig. 4(c); again, not all 36 directions are drawn). The construction of a schematic scale-invariant feature descriptor Lfd(R) is shown in Fig. 4(a), Fig. 4(b), and Fig. 4(c); in Fig. 4(a), m schematically denotes the geometric center of the pixel set of the third gradient direction (the gradient direction is drawn as a dashed line), C is the region center, and da_3(R) denotes the direction angle from C to m.
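A minimal sketch of the descriptor construction (illustrative Python/NumPy; the gradient operator, the angle convention for da_i, and the convention da_i = 0 for empty sectors are all assumptions not fixed by the text):

```python
import numpy as np

N_SECTORS = 36

def lfd_descriptor(region):
    """72-dim descriptor Lfd(R) = (p_1..p_36, da_1..da_36) of a region.
    p_i: gradient-magnitude share of sector i; da_i: direction angle between
    the geometric center m of sector i's pixels and the region center C."""
    region = region.astype(np.float64)
    h, w = region.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0  # region center C (row, col)
    gy, gx = np.gradient(region)           # gradient operator is an assumption
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    sector = np.minimum((ang / (2 * np.pi) * N_SECTORS).astype(int), N_SECTORS - 1)
    total = mag.sum()
    p = np.zeros(N_SECTORS)
    da = np.zeros(N_SECTORS)
    ys, xs = np.indices(region.shape)
    for i in range(N_SECTORS):
        mask = sector == i
        if total == 0 or not mask.any():
            continue  # empty sector: p_i = 0, da_i left at 0 (assumed convention)
        p[i] = mag[mask].sum() / total
        my, mx = ys[mask].mean(), xs[mask].mean()  # geometric center m of sector i
        da[i] = np.mod(np.arctan2(cy - my, cx - mx), 2 * np.pi)
    return np.concatenate([p, da])
```

The first 36 entries form a distribution (they sum to 1 for any non-flat region), and the last 36 are angles in [0, 2π), matching the definition above.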
2. Define a distance function Dist(Lfd(R_1), Lfd(R_2)) measuring the similarity between two descriptors Lfd(R_1) and Lfd(R_2); it combines the angles Eud(da_i(R_1), da_i(R_2)) between the corresponding i-th direction angles of R_1 and R_2 with a divergence between the magnitude distributions p_i(R_1) and p_i(R_2). The second term of Dist(Lfd(R_1), Lfd(R_2)) is similar to the definition of the K-L divergence, but compared with the K-L divergence, our distance function has the advantage of being symmetric.
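The exact closed form of Dist is not reproduced in the text above, so the sketch below assumes the symmetric (Jeffreys) form of the K-L divergence for the p_i part and the mean angular difference for the da_i part; both the weighting and the way the two terms are combined are assumptions:

```python
import numpy as np

def angle_diff(a, b):
    """Eud: the smaller angle between two direction angles, in [0, pi]."""
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def dist_lfd(d1, d2, eps=1e-12):
    """Distance between two 72-dim Lfd descriptors (assumed form:
    mean angular difference plus symmetric K-L divergence of the p bins)."""
    p1, da1 = d1[:36] + eps, d1[36:]  # eps keeps the log well-defined
    p2, da2 = d2[:36] + eps, d2[36:]
    angular = angle_diff(da1, da2).mean() / np.pi   # normalized to [0, 1]
    sym_kl = np.sum((p1 - p2) * np.log(p1 / p2))    # Jeffreys divergence, symmetric
    return angular + sym_kl
```

Unlike the plain K-L divergence, the Jeffreys term satisfies dist_lfd(a, b) == dist_lfd(b, a), which is the symmetry property the text emphasizes.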
3. Traverse every possible salient-region match C(i, j) between the two images, where i denotes the i-th salient region R_i of the reference image and j denotes the j-th salient region R_j of the floating image; a pair C(i, j) that satisfies the coarse-matching condition is accepted as a coarsely matched salient-region pair Cmp(i, j). The condition is expressed with the minimum function Min(·), the maximum function Max(·), and the coarse-matching control parameter T; the higher T is set, the stricter the coarse-matching condition and the fewer coarsely matched pairs Cmp(i, j) are obtained; the empirical value is 0.6.
4. For each coarse-match pair Cmp(i, j), compute the similarity S(i, j) between R_i and R_j and the rotation angle θ_ij as follows:

S(i, j) = Dist(Lfd(R_i), Lfd(R_j^k)),

k = argmin_k Dist(Lfd(R_i), Lfd(R_j^k)), k ∈ {0, 1, ..., 35},

where R_j^k is the new region obtained by rotating R_j counterclockwise by 10·k degrees.
Each coarse-match pair Cmp(i, j) determines three global rigid transformation parameters: the translation t_x, t_y of the floating-image center relative to the reference-image center (t_x = O_jx − O_ix, t_y = O_jy − O_iy) and the rotation angle θ_ij about the floating-image center, where O_ix, O_iy are the X- and Y-coordinates of the center O_i of salient region R_i, and O_jx, O_jy are the X- and Y-coordinates of the center O_j of salient region R_j.
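The search for the best rotation k can be sketched as follows. Circularly shifting the 36 sector bins of the descriptor by k and offsetting each direction angle by 10·k degrees (instead of rotating the region itself and recomputing Lfd, as the text describes) is an assumed shortcut:

```python
import numpy as np

def rotate_descriptor(d, k):
    """Descriptor of R_j rotated counterclockwise by 10*k degrees,
    approximated by shifting the 36 sector bins (assumed shortcut;
    the patent rotates the region itself and recomputes Lfd)."""
    step = 2 * np.pi / 36  # 10 degrees
    p, da = d[:36], d[36:]
    return np.concatenate([np.roll(p, k),
                           np.mod(np.roll(da, k) + k * step, 2 * np.pi)])

def best_rotation(d_ref, d_float, dist):
    """S(i, j) and the k minimizing Dist(Lfd(R_i), Lfd(R_j^k)), k in 0..35."""
    scores = [dist(d_ref, rotate_descriptor(d_float, k)) for k in range(36)]
    k = int(np.argmin(scores))
    return scores[k], k  # similarity S(i, j); theta_ij corresponds to 10*k degrees
```

Any descriptor distance can be passed in as `dist`; a descriptor matched against a 10·k-degree-rotated copy of itself is recovered at the complementary shift 36 − k.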
5. Sort all Cmp(i, j) in ascending order of S(i, j) and take the first 2000 coarse-match pairs (or all of them if fewer were extracted) as the input sample set; set a suitable intra-class distance threshold and cluster the Cmp(i, j) in the space of global rigid transformation parameters; select the class with the most members as the set of preliminarily matched salient-region pairs F(i, j). The specific clustering method is as follows: fix the center Cm of the floating image and map Cm, through the three rigid transformation parameters determined by each coarse-match pair Cmp(i, j), to a point Cf_ij on the reference image, where Cmx, Cmy and Cf_ijx, Cf_ijy are the X- and Y-coordinates of Cm and Cf_ij, respectively. We cluster the mapped points Cf_ij in two-dimensional Euclidean space; by setting a suitable intra-class threshold (related to the image size; in our experiments the intra-class threshold t is 50), we select the class with the most members as the set of preliminarily matched salient-region pairs F(i, j). Clustering in two-dimensional Euclidean space proceeds as follows:
1. Initialize the number of classes N to 0.
2. Traverse the mapped points Cf_ij in order. If N = 0, take the current Cf_ij as the center of the first class and set N = N + 1; otherwise, compute the distance d_k from Cf_ij to the center of the k-th class (k = 1, ..., N) and take the minimum d_kmin. If d_kmin < t, assign Cf_ij to class kmin and update the center and member count of class kmin; if d_kmin ≥ t, take Cf_ij as the center of class N + 1 and set N = N + 1.
3. Select the coarse matching pairs Cmp(i, j) belonging to the class with the most members as the preliminarily matched salient region pairs F(i, j). We then remove duplicated regions according to S(i, j): for example, if F(3, 5) and F(3, 7) are both elements of F(i, j), we compare S(3, 5) with S(3, 7) and discard the pair with the larger value. This guarantees that the preliminarily matched salient region pairs F(i, j) contain no repeated regions, reducing the amount of subsequent calculation. The preliminarily matched salient region pairs are shown in Fig. 2(e).
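The three sub-steps above describe a sequential, leader-style clustering. A minimal sketch of this procedure (the function name and the plain 2-D point representation are our own assumptions, not from the patent) could be:

```python
def leader_cluster(points, t):
    """Sequential threshold clustering of 2-D points.

    Each point joins the nearest existing class if its distance to that
    class's center is below t; otherwise it seeds a new class.
    Returns per-point class labels and the member count of each class.
    """
    centers = []   # running center of each class
    sums = []      # coordinate sums per class, used to update centers
    counts = []    # number of members per class
    labels = []
    for (x, y) in points:
        if centers:
            # distance from the point to every existing class center
            dists = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
                     for (cx, cy) in centers]
            k = min(range(len(dists)), key=dists.__getitem__)
            if dists[k] < t:
                # assign to the nearest class and update its center
                counts[k] += 1
                sums[k] = (sums[k][0] + x, sums[k][1] + y)
                centers[k] = (sums[k][0] / counts[k], sums[k][1] / counts[k])
                labels.append(k)
                continue
        # start a new class centered at this point
        centers.append((x, y))
        sums.append((x, y))
        counts.append(1)
        labels.append(len(centers) - 1)
    return labels, counts
```

The class with the largest count then yields the preliminarily matched pairs F(i, j).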
Step 4: Run the local rigid registration unit 104. For each preliminarily matched salient region pair F(i, j) from Step 3, determine the initial translation parameters tx, ty from the centers of the regions Ri and Rj, take the rotation angle θij as the initial rotation parameter, and use the normalized correlation coefficient as the region similarity measure to perform local rigid registration. The normalized correlation coefficient is defined as follows:
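The definition appears only as an image in the original document; the standard normalized correlation coefficient consistent with the quantities explained below is:

```latex
NCC = \frac{\sum_{i=1}^{N}\left(F_i(x)-\bar{F}\right)\left(M_i(x)-\bar{M}\right)}
           {\sqrt{\sum_{i=1}^{N}\left(F_i(x)-\bar{F}\right)^{2}}\,
            \sqrt{\sum_{i=1}^{N}\left(M_i(x)-\bar{M}\right)^{2}}}
```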
where N is the number of pixels contained in the overlapping area of the reference and floating images, Fi(x) and Mi(x) are the i-th pixel values of the reference and floating images, and F̄ and M̄ are the mean pixel values over the reference and floating image regions, respectively.
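As a sketch, this similarity measure can be computed over the overlapping pixels with NumPy (the flattened-array interface is our assumption, not the patent's):

```python
import numpy as np

def ncc(f, m):
    """Normalized correlation coefficient between two same-sized
    pixel arrays f (reference) and m (floating)."""
    f = np.asarray(f, dtype=float).ravel()
    m = np.asarray(m, dtype=float).ravel()
    df = f - f.mean()          # zero-mean reference pixels
    dm = m - m.mean()          # zero-mean floating pixels
    denom = np.sqrt((df ** 2).sum() * (dm ** 2).sum())
    if denom == 0:
        return 0.0             # constant region: correlation undefined
    return float((df * dm).sum() / denom)
```

The value ranges over [-1, 1], with 1 indicating a perfect linear match between the two regions.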
Step 5: Run the global quadratic polynomial transformation registration unit 105. After the local rigid registration of Step 4, some region registration results may be erroneous and must be discarded before the global quadratic polynomial transformation is applied. To this end, we take the locally registered regions and their corresponding global rigid registration parameters, apply the cluster analysis described in sub-step 5 of Step 3 with a finer intra-class distance threshold (related to the image size; 20 in our experiments), cluster in the global rigid transformation parameter space, and select the class with the most members as the precisely matched region pairs. The center points of the precisely matched regions then serve as control points for the global quadratic polynomial transformation registration, achieving precise registration of the two images. The mathematical model of the global quadratic polynomial transformation registration is as follows:
X_C = A·B,
where X_D = [x_D, y_D]^T is a point in the floating image, X_C = [x_C, y_C]^T is the corresponding point in the reference image, B is the vector of quadratic terms of X_D, and A is the quadratic polynomial transformation matrix. A can be solved by the following method, where K is the number of control points, R is the matrix of reference control-point coordinates, and D is the matrix of quadratic-term vectors of the K floating control points:
A = RD^T(DD^T)^(-1).
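A can equally be obtained by an ordinary least-squares solve, which avoids forming (DD^T)^(-1) explicitly. The sketch below assumes, as is standard for a quadratic polynomial model, that each column of D holds the terms [x_D², y_D², x_D·y_D, x_D, y_D, 1] of one control point and each column of R holds the matched reference coordinates; the function names and term ordering are our assumptions, not the patent's:

```python
import numpy as np

def fit_quadratic_transform(float_pts, ref_pts):
    """Fit the 2x6 quadratic polynomial matrix A mapping floating-image
    control points to reference-image control points, A = R D^T (D D^T)^-1."""
    xd, yd = np.asarray(float_pts, dtype=float).T   # K floating-image points
    # 6 x K design matrix of quadratic terms, one column per control point
    D = np.stack([xd**2, yd**2, xd * yd, xd, yd, np.ones_like(xd)])
    R = np.asarray(ref_pts, dtype=float).T          # 2 x K reference coords
    # least squares on D^T A^T = R^T is numerically safer than the
    # explicit normal-equation inverse
    A = np.linalg.lstsq(D.T, R.T, rcond=None)[0].T
    return A                                         # 2 x 6

def apply_quadratic_transform(A, pt):
    """Map a floating-image point through the fitted transform."""
    x, y = pt
    b = np.array([x**2, y**2, x * y, x, y, 1.0])
    return A @ b
```

With at least six well-spread control points the fit is determined; more points overconstrain the system and average out localization error.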
Results:
To verify the method of the present invention, we selected 20 image pairs as experimental samples, including 5 pairs of low-quality images. The experimental results show that the algorithm completes registration of typical images in about 4 s; low-quality images take longer, but precise registration is still completed in about 10 s. For 1548×1260 images, the registration accuracy reaches 2 pixels. Specific registration results are shown in Fig. 5 and Fig. 6. The experiments show that our method is fast, accurate, and robust, and has great application value.
The above describes only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any transformation or replacement conceivable to a person familiar with the technology, within the technical scope disclosed by the present invention, shall be covered by the present invention. Therefore, the protection scope of the present invention shall be determined by the scope of the claims.
Publications (2)
Publication Number | Publication Date |
---|---|
CN101763633A (en) | 2010-06-30 |
CN101763633B CN101763633B (en) | 2011-11-09 |