
CN103679193A - FREAK-based high-speed high-density packaging component rapid location method - Google Patents

FREAK-based high-speed high-density packaging component rapid location method

Info

Publication number
CN103679193A
Authority
CN
China
Prior art date
Legal status
Pending
Application number
CN201310562520.7A
Other languages
Chinese (zh)
Inventor
高红霞
吴丽璇
陈安
胡跃明
Current Assignee
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Application filed by South China University of Technology (SCUT)
Priority: CN201310562520.7A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a FREAK-based rapid positioning method for high-speed, high-density packaged components. The method works as follows: the Hessian matrix from the SURF registration method is used to detect keypoint locations; the dominant orientation of each feature point is determined from the neighborhood information of the keypoint; sampling-point comparison pairs are trained and selected according to the distribution of the retinal model, and the FREAK feature vector is built from these pairs; nearest-neighbor matching is then performed using the Hamming distance as the similarity measure between FREAK feature vectors; finally, an affine transformation equation is constructed from the matched feature-point pairs and solved by least squares to obtain the spatial transformation model parameters, namely the translation m in the x direction, the translation n in the y direction, and the rotation angle β. Compared with the prior art, the invention greatly reduces the computational complexity and storage cost of feature description and matching, and achieves high-speed, high-precision sub-pixel positioning.

Description

A FREAK-based rapid positioning method for high-speed, high-density packaged components

Technical Field

The invention relates to the field of recognition and positioning in precision electronic assembly, and in particular to a FREAK-based rapid positioning method for high-speed, high-density packaged components.

Background Art

Surface mount technology (SMT) originated in the United States in the 1960s, when it was mainly used in military electronics. In the 1970s, driven by the Japanese electronics industry's vigorous development of consumer products, SMT's unmatched manufacturing advantages gradually became apparent and were rapidly adopted across the electronics industry. Since the 1980s, with the rapid growth of consumer electronics and broad recognition of the strategic importance of the electronic equipment industry, SMT, known as the "fourth placement revolution", has received unprecedented attention and development. Today SMT affects product quality in communications, home appliances, computing, networking, automation, aviation, aerospace, navigation, and other fields, and its associated technologies and equipment are an important indicator of a country's electronic information manufacturing capability.

Digital image registration is a fundamental task in image processing. It refers to geometrically aligning two images of the same scene acquired at different times, from different viewpoints, or with different cameras, i.e., the process of evaluating the similarity of two or more images to identify corresponding points. Image registration is a basic problem in image processing and its applications, with important uses in aerial image stitching, 3D imaging, machine vision and pattern recognition, remote sensing data processing, and medical image analysis. Since the visual processing task in SMT coincides with the registration task, image registration is an important component of an SMT visual inspection system, providing the necessary preprocessing for subsequent inspection. Feature-based registration has become the most widely used approach because it does not depend directly on gray levels and offers good robustness, strong resistance to interference, low computational cost, and high speed. In general, the registration framework comprises four steps: feature detection, feature matching, transformation model parameter estimation, and image resampling.

Since Lowe and Bay proposed the SIFT and SURF algorithms respectively, the pursuit of faster and more robust feature descriptors has become a prominent trend. An ideal descriptor combines high robustness, high distinctiveness, and low algorithmic complexity. To make feature description practical on smartphones and embedded devices, Alexandre Alahi presented a new keypoint descriptor, FREAK (Fast Retina Keypoint), at CVPR 2012. FREAK balances the three properties above with an emphasis on computation speed, highlighting real-time performance while retaining good robustness and distinctiveness. FREAK focuses on feature description; keypoint localization can rely on existing methods such as the Hessian matrix, DoH detection, or Harris corner detection. Applying the FREAK descriptor to recognition and positioning in precision electronic assembly is therefore of considerable research interest.

Summary of the Invention

The main purpose of the present invention is to overcome the shortcomings of the prior art and provide a FREAK-based rapid positioning method for high-speed, high-density packaged components. The method introduces the FREAK binary descriptor on top of SURF keypoint detection, greatly increasing the speed of image registration, achieving high-speed, high-precision visual positioning of components, and offering strong robustness.

The purpose of the invention is achieved through the following technical solution. A FREAK-based rapid positioning method for high-speed, high-density packaged components comprises the following steps:

(1) Keypoint localization: input the image to be registered and the template image, and use the Hessian matrix to detect keypoint locations.

(2) Feature description: determine the dominant orientation of each feature point from the neighborhood information of the keypoint; according to the distribution of the retinal model, train and select sampling-point comparison pairs, and build the FREAK feature vector from these pairs.

(3) Feature matching: using the FREAK feature vectors, perform nearest-neighbor matching with the Hamming distance (i.e., an XOR operation) as the similarity measure; construct an affine transformation equation from the matched feature-point pairs and solve it by least squares to obtain the spatial transformation model parameters, namely the translation m in the x direction, the translation n in the y direction, and the rotation angle β.

Specifically, the keypoint localization of step (1) proceeds as follows:

(1-1) Input the image to be registered I(x,y) and the template image f(x,y), and generate an integral image for each.

(1-2) Construct the fast Hessian matrix and build the scale space from it; then obtain the three-dimensional scale-space response map from the integral images of step (1-1).

(1-3) Apply threshold segmentation to the three-dimensional scale-space response map, keeping only pixels with strong responses; then use non-maximum suppression to find candidate feature points; finally, interpolate the feature points over neighboring pixels with a three-dimensional quadratic fitting function to obtain the keypoint positions.

Specifically, the integral image in step (1-1) is generated as follows. For a given image Q, let (x,y) be a point of Q; the value of the integral image at (x,y) is the sum of the pixel values over the rectangular region spanned by the origin of Q and the pixel (x,y). With the integral image, the sum of all pixel values within any rectangular region reduces to 4 image accesses and 3 additions/subtractions. Whatever the area of the rectangle, this sum is computed in constant time, an advantage that is pronounced for large regions. The SURF registration method exploits this property so that the cost of convolving the image with box filters of varying size remains nearly constant.
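The integral-image trick described above can be sketched in a few lines of NumPy (a sketch only; the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def integral_image(img):
    # Pad with a zero row and column so that ii[y, x] holds the sum of
    # img[0:y, 0:x]; built with two cumulative sums.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    # Sum of img[y0:y1, x0:x1]: 4 lookups and 3 additions/subtractions,
    # regardless of the size of the rectangle.
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

Constant-time box sums are what keep SURF's box-filter convolutions equally cheap at every scale.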

Specifically, the fast Hessian matrix in step (1-2) is constructed as follows:

For a point X(x,y) in a given image, the Hessian matrix H(X,σ) of X at scale σ is defined as:

$$H(X,\sigma)=\begin{bmatrix}L_{xx}(X,\sigma) & L_{xy}(X,\sigma)\\ L_{xy}(X,\sigma) & L_{yy}(X,\sigma)\end{bmatrix};$$

where $L_{xx}(X,\sigma)$ is the convolution of the image at point X with the Gaussian second-order partial derivative $\frac{\partial^2 g(\sigma)}{\partial x^2}$, $L_{xy}(X,\sigma)$ is the convolution with $\frac{\partial^2 g(\sigma)}{\partial x\,\partial y}$, and $L_{yy}(X,\sigma)$ is the convolution with $\frac{\partial^2 g(\sigma)}{\partial y^2}$;

Let $D_{xx}$, $D_{yy}$, and $D_{xy}$ be the responses of the box filters convolved with the image in the x, y, and xy directions respectively; the Hessian determinant is then approximated by:

$$\det(H_{\mathrm{approx}})=D_{xx}D_{yy}-(0.9D_{xy})^2.$$
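Given the three box-filter response maps, the determinant approximation is a single vectorized expression; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def hessian_det_approx(Dxx, Dyy, Dxy):
    # det(H_approx) = Dxx*Dyy - (0.9*Dxy)^2; the 0.9 weight compensates
    # for replacing Gaussian second derivatives with box filters.
    return Dxx * Dyy - (0.9 * Dxy) ** 2
```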

Preferably, the feature description of step (2) proceeds as follows:

(2-1) Smooth and denoise each sampling point in the keypoint neighborhood with a Gaussian kernel chosen according to the distribution of the retinal model; the kernel size grows exponentially with the distance between the sampling point and the center.

(2-2) Determine the dominant orientation of the feature point: select receptive fields that are symmetric about the center to estimate a gradient, and take this gradient as the dominant orientation. Let the point set G be the set of comparison pairs used to compute the gradient; the dominant orientation O is computed as:

$$O=\frac{1}{M}\sum_{P_O\in G}\bigl(I(P_O^{\gamma_1})-I(P_O^{\gamma_2})\bigr)\,\frac{P_O^{\gamma_1}-P_O^{\gamma_2}}{\lVert P_O^{\gamma_1}-P_O^{\gamma_2}\rVert};$$

where M is the number of pairs in G, $P_O^{\gamma_1}$ and $P_O^{\gamma_2}$ are the two-dimensional image coordinates of the centers of the compared receptive fields, $I(P_O^{\gamma_1})$ and $I(P_O^{\gamma_2})$ are the corresponding gray values, and $\gamma_1$ and $\gamma_2$ denote the two receptive fields being compared;

(2-3) Rotate the feature-point neighborhood according to the dominant orientation and the distribution of the retinal model, sample the rotated neighborhood with the retinal model, and generate the FREAK binary feature descriptor F as:

$$F=\sum_{0\le\alpha<N}2^{\alpha}\,T(P_{\alpha});$$

where $P_\alpha$ denotes a pair of sampled receptive fields and N is the descriptor length, i.e., the number of receptive-field pairs; if the total number of receptive fields is M, then $N=\frac{M(M-1)}{2}$, and α is the bit position (binary left shift) in the binary descriptor;

$$T(P_{\alpha})=\begin{cases}1, & I(P_{\alpha}^{\gamma_1})-I(P_{\alpha}^{\gamma_2})>0\\ 0, & \text{otherwise},\end{cases}$$

where T compares the image information of the two members of the pair $P_\alpha$; the image information used is either the sum of gray values or the regional mean, and $\gamma_1$ and $\gamma_2$ denote the two receptive fields being compared;
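The descriptor construction amounts to packing one comparison bit per pair into an integer; a minimal sketch (here `intensities[i]` stands for the smoothed mean intensity of receptive field i; the names are illustrative):

```python
def freak_descriptor(intensities, pairs):
    # F = sum over alpha of 2^alpha * T(P_alpha), with T = 1 when the
    # first receptive field of the pair is brighter than the second.
    F = 0
    for alpha, (i, j) in enumerate(pairs):
        if intensities[i] - intensities[j] > 0:
            F |= 1 << alpha  # equivalent to adding 2**alpha
    return F
```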

(2-4) Select comparison pairs with high variance and low correlation from the existing receptive-field pairs. The steps are:

(2-4-1) Create a matrix D in which each row is the descriptor of one sample point.

(2-4-2) Compute the mean of each column; the difference between the mean and 0.5 indicates the variance of the column (a mean close to 0.5 corresponds to high variance for a binary feature).

(2-4-3) Sort the columns by this measure in ascending order.

(2-4-4) Keep the column with the smallest such measure (i.e., the highest variance), then iteratively select from the remaining columns those with low correlation to the kept columns, until the FREAK descriptor reaches the required dimensionality; this yields the FREAK feature vector.
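Steps (2-4-1) through (2-4-4) can be sketched as follows. D is a 0/1 matrix with one descriptor per row; columns whose mean is nearest 0.5 have the highest variance, and weakly correlated columns are then kept greedily (the 0.8 correlation threshold is an illustrative choice, not from the patent):

```python
import numpy as np

def select_pairs(D, n_keep, max_corr=0.8):
    # Rank columns by |mean - 0.5| ascending: the best column has its mean
    # closest to 0.5 (maximal variance for a 0/1 variable).
    order = np.argsort(np.abs(D.mean(axis=0) - 0.5))
    kept = [int(order[0])]
    for col in order[1:]:
        if len(kept) == n_keep:
            break
        # Keep a column only if it is weakly correlated with every column
        # kept so far.
        if all(abs(np.corrcoef(D[:, col], D[:, k])[0, 1]) < max_corr
               for k in kept):
            kept.append(int(col))
    return kept
```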

Preferably, the feature matching of step (3) proceeds as follows:

(3-1) Use the Hamming distance as the similarity measure between feature vectors and perform nearest-neighbor matching: first compare the descriptor's leading 16 bytes, which encode the coarse (blurred) information; if this matching distance is below the set threshold, the pair is accepted as a matched feature-point pair and the method proceeds to step (3-2).

(3-2) Substitute each matched feature-point pair (x<sub>1</sub>, y<sub>1</sub>) and (x<sub>2</sub>, y<sub>2</sub>) into the following affine transformation formula:

$$\begin{bmatrix}x_2\\ y_2\end{bmatrix}=\begin{bmatrix}m''\\ n''\end{bmatrix}+s\begin{bmatrix}\cos\beta'' & -\sin\beta''\\ \sin\beta'' & \cos\beta''\end{bmatrix}\begin{bmatrix}x_1\\ y_1\end{bmatrix};$$

Solve by least squares to obtain the transformation parameters (m″, n″, β″, s); average the parameters (m″, n″, β″) over all matched feature-point pairs to obtain the coarse transformation (m, n, β) between I(x,y) and f(x,y).

Compared with the prior art, the present invention has the following advantages and beneficial effects:

1. The invention uses the FREAK descriptor to characterize various new surface-mount components, making the detection method highly robust to noise and to chip translation and rotation.

2. The FREAK descriptor describes image features quickly; built on Hessian-matrix keypoint localization, it greatly reduces the computational complexity and storage cost of feature description and matching and achieves high-speed, high-precision sub-pixel positioning, which is of practical importance for visual inspection and positioning.

Brief Description of the Drawings

Figure 1 is a flowchart of the method of the invention.

Figure 2 is a schematic diagram of the integral image.

Figure 3 is a schematic diagram of the FREAK sampling model based on the distribution of retinal receptive fields.

Figure 4 is a schematic diagram of the sampling points selected for computing the rotation angle.

Detailed Description of the Embodiments

The invention is described in further detail below with reference to the embodiment and the drawings, but embodiments of the invention are not limited thereto.

Example 1

As shown in Figure 1, this embodiment of the FREAK-based rapid positioning method for high-speed, high-density packaged components comprises the following steps:

S1 Keypoint localization: input the image to be registered I(x,y) and the template image f(x,y), and use the Hessian matrix to detect blob locations in I(x,y) and f(x,y) respectively, as follows:

S1.1 Generate integral images from I(x,y) and f(x,y) respectively:

For a given image such as I(x,y), let (x,y) be a point; the integral-image value $I_{\Sigma}$ at that point is the sum of the pixel values over the rectangular region spanned by the origin of I(x,y) and the pixel (x,y). With the integral image, the sum of all pixel values within any rectangular region reduces to 4 image accesses and 3 additions/subtractions. The integral image is illustrated in Figure 2. For the rectangular region shown in Figure 2, whose vertices have integral-image values A, B, C, and D, the sum of the pixel values in the region is:

$$I_{\Sigma}=A+D-(C+B).$$

Clearly, whatever the area of the rectangle, this sum is computed in constant time, an advantage that is pronounced for large regions. The SURF registration method exploits this property so that the cost of convolving the image with box filters of varying size remains nearly constant.

S1.2 Detect feature points in I(x,y) and f(x,y) respectively, as follows:

S1.2.1 Construct the fast Hessian matrix. For a point X(x,y) in a given image, the Hessian matrix H(X,σ) of X at scale σ is defined as:

$$H(X,\sigma)=\begin{bmatrix}L_{xx}(X,\sigma) & L_{xy}(X,\sigma)\\ L_{xy}(X,\sigma) & L_{yy}(X,\sigma)\end{bmatrix}\qquad(1)$$

where $L_{xx}(X,\sigma)$ is the convolution of the image at point X with the Gaussian second-order partial derivative $\frac{\partial^2 g(\sigma)}{\partial x^2}$, $L_{xy}(X,\sigma)$ is the convolution with $\frac{\partial^2 g(\sigma)}{\partial x\,\partial y}$, and $L_{yy}(X,\sigma)$ is the convolution with $\frac{\partial^2 g(\sigma)}{\partial y^2}$;

Let $D_{xx}$, $D_{yy}$, and $D_{xy}$ be the responses of the box filters convolved with the image in the x, y, and xy directions; the Hessian determinant is then approximated by:

$$\det(H_{\mathrm{approx}})=D_{xx}D_{yy}-(0.9D_{xy})^2\qquad(2)$$

S1.2.2 Build the scale space from the fast Hessian matrix and use the integral images of step S1.1 to obtain the three-dimensional scale-space response map. The scale space is built with an image pyramid: the SURF registration method convolves the original image with box filters of progressively larger size. Because the convolutions are computed with the integral image, box filters of different sizes cost the same, which improves the efficiency of the algorithm.

S1.2.3 Localize feature points precisely: first, apply threshold segmentation to the three-dimensional scale-space response map, keeping only pixels with strong responses; then use non-maximum suppression to find candidate feature points; finally, interpolate the feature points over neighboring pixels with a three-dimensional quadratic fitting function, giving them sub-pixel and sub-scale accuracy.
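The thresholding and non-maximum suppression of step S1.2.3 can be sketched over a (scale, y, x) response stack (a plain-loop sketch for clarity; the sub-pixel quadratic refinement is omitted and all names are illustrative):

```python
import numpy as np

def find_candidates(response, threshold):
    # Keep only strong responses, then accept a point as a candidate
    # feature if it is the maximum of its 3x3x3 (scale, y, x) neighborhood.
    candidates = []
    S, H, W = response.shape
    for s in range(1, S - 1):
        for y in range(1, H - 1):
            for x in range(1, W - 1):
                v = response[s, y, x]
                if v < threshold:
                    continue
                if v >= response[s-1:s+2, y-1:y+2, x-1:x+2].max():
                    candidates.append((s, y, x))
    return candidates
```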

S2 Feature description: gather feature information from the keypoint neighborhood and generate a FREAK binary descriptor string, as follows:

S2.1 Smooth and denoise each sampling point in the keypoint neighborhood with a Gaussian kernel chosen according to the distribution of the retinal model. As shown in Figure 3, the kernel size grows exponentially with the distance between the sampling point and the center.

S2.2 Determine the dominant orientation of the feature point: to estimate the rotation of the target, the FREAK algorithm selects the 45 point pairs shown in Figure 4 to compute the angle. Let the point set G be the set of comparison pairs used to compute the gradient. The dominant orientation O is computed as:

$$O=\frac{1}{M}\sum_{P_O\in G}\bigl(I(P_O^{\gamma_1})-I(P_O^{\gamma_2})\bigr)\,\frac{P_O^{\gamma_1}-P_O^{\gamma_2}}{\lVert P_O^{\gamma_1}-P_O^{\gamma_2}\rVert}\qquad(3)$$

where M is the number of pairs in G, $P_O^{\gamma_1}$ and $P_O^{\gamma_2}$ are the two-dimensional image coordinates of the centers of the compared receptive fields, $I(P_O^{\gamma_1})$ and $I(P_O^{\gamma_2})$ are the corresponding gray values, and $\gamma_1$ and $\gamma_2$ denote the two receptive fields being compared.

S2.3 Rotate the feature-point neighborhood according to the dominant orientation, sample the rotated neighborhood with the retinal model, and generate the FREAK binary feature descriptor F as:

$$F=\sum_{0\le\alpha<N}2^{\alpha}\,T(P_{\alpha});$$

where $P_\alpha$ denotes a pair of sampled receptive fields whose spatial distribution is shown in Figure 3; N is the descriptor length, i.e., the number of receptive-field pairs; if the total number of receptive fields is M, then $N=\frac{M(M-1)}{2}$, and α is the bit position (binary left shift) in the binary descriptor;

$$T(P_{\alpha})=\begin{cases}1, & I(P_{\alpha}^{\gamma_1})-I(P_{\alpha}^{\gamma_2})>0\\ 0, & \text{otherwise},\end{cases}$$

where T compares the image information of the two members of the pair $P_\alpha$; the image information used is either the sum of gray values or the regional mean, and this embodiment uses the regional mean. $\gamma_1$ and $\gamma_2$ denote the two receptive fields being compared.

S2.4 Learn comparison pairs with high variance and low correlation. The steps are: (1) create a matrix D in which each row is the descriptor of one sample point; (2) compute the mean of each column, where the distance of the mean from 0.5 indicates the column's variance; (3) sort the columns by this measure in ascending order; (4) keep the first column (the highest-variance one) and iteratively select from the remaining columns those with low correlation to the kept columns, until the FREAK descriptor reaches the required dimensionality.

S3 Feature matching: match the feature points to obtain the spatial transformation (m, n, β) between I(x,y) and f(x,y), where m and n are the translations in the x and y directions and β is the rotation angle, as follows:

S3.1 Since a FREAK feature vector is a binary string, the Hamming distance (an XOR operation) can replace the traditional Euclidean distance when matching feature vectors.

During matching, the FREAK algorithm first compares the descriptor's leading 16 bytes, which encode the coarse (blurred) information. Only if this matching distance is below the set threshold are the remaining features compared. This search strategy quickly rejects up to 90% of irrelevant match candidates.
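The cascaded Hamming search can be sketched with byte strings: XOR each byte pair and count set bits, screening on the first 16 bytes before computing the full distance (the descriptor length and threshold below are illustrative, not from the patent):

```python
def hamming(a, b):
    # Hamming distance between equal-length byte strings: XOR, then popcount.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def cascade_match(query, candidates, coarse_threshold):
    # Stage 1 screens on the first 16 bytes (the coarse information);
    # stage 2 computes the full distance only for survivors.
    best_idx, best_dist = -1, None
    for idx, cand in enumerate(candidates):
        if hamming(query[:16], cand[:16]) >= coarse_threshold:
            continue  # cheap early rejection
        d = hamming(query, cand)
        if best_dist is None or d < best_dist:
            best_idx, best_dist = idx, d
    return best_idx, best_dist
```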

S3.2 Substitute each matched feature-point pair (x<sub>1</sub>, y<sub>1</sub>) and (x<sub>2</sub>, y<sub>2</sub>) into the affine transformation formula

$$\begin{bmatrix}x_2\\ y_2\end{bmatrix}=\begin{bmatrix}m''\\ n''\end{bmatrix}+s\begin{bmatrix}\cos\beta'' & -\sin\beta''\\ \sin\beta'' & \cos\beta''\end{bmatrix}\begin{bmatrix}x_1\\ y_1\end{bmatrix}\qquad(4)$$

and solve by least squares to obtain the transformation parameters (m″, n″, β″, s) of equation (4); average the parameters (m″, n″, β″) over all matched feature-point pairs to obtain the coarse transformation (m, n, β) between I(x,y) and f(x,y).
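Equation (4) becomes linear in (m″, n″, a, b) after substituting a = s·cos β″ and b = s·sin β″, so the parameters for a set of matches come out of one least-squares solve; a sketch (the function name is illustrative):

```python
import math
import numpy as np

def estimate_transform(src, dst):
    # Model: x2 = m + a*x1 - b*y1,  y2 = n + b*x1 + a*y1,
    # with a = s*cos(beta), b = s*sin(beta). Stack two rows per match and
    # solve the overdetermined linear system by least squares.
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    K = len(src)
    A = np.zeros((2 * K, 4))
    rhs = np.empty(2 * K)
    A[0::2] = np.column_stack([np.ones(K), np.zeros(K), src[:, 0], -src[:, 1]])
    A[1::2] = np.column_stack([np.zeros(K), np.ones(K), src[:, 1], src[:, 0]])
    rhs[0::2], rhs[1::2] = dst[:, 0], dst[:, 1]
    m, n, a, b = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return m, n, math.atan2(b, a), math.hypot(a, b)  # (m, n, beta, s)
```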

The above embodiment is a preferred implementation of the invention, but implementations of the invention are not limited to it; any other change, modification, substitution, combination, or simplification that does not depart from the spirit and principles of the invention is an equivalent replacement and falls within the scope of protection of the invention.

Claims (7)

1. A FREAK-based rapid positioning method for high-speed, high-density packaged components, characterized in that it comprises the following steps:
(1) keypoint localization: inputting the image to be registered and the template image, and using the Hessian matrix to detect keypoint locations;
(2) feature description: determining the dominant orientation of each feature point from the neighborhood information of the keypoint; training and selecting sampling-point comparison pairs according to the distribution of the retinal model, and building the FREAK feature vector from these pairs;
(3) feature matching: using the FREAK feature vectors, performing nearest-neighbor matching with the Hamming distance as the similarity measure; constructing an affine transformation equation from the matched feature-point pairs and solving it by least squares to obtain the spatial transformation model parameters, namely the translation m in the x direction, the translation n in the y direction, and the rotation angle β.
2. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 1, characterized in that the key point localization of step (1) proceeds as follows:
(1-1) input the image to be registered I(x, y) and the template image f(x, y), and form an integral image for each;
(1-2) construct the fast Hessian matrix and build the scale space from it; then obtain the three-dimensional scale-space response map from the integral images of step (1-1);
(1-3) apply a threshold to the three-dimensional scale-space response map, retaining only the pixels with a strong response; then apply non-maximum suppression to find the candidate feature points; finally, interpolate each feature point over its neighboring pixels with a three-dimensional quadratic fitting function to obtain the key point positions.
3. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 2, characterized in that the integral image in step (1-1) is formed as follows: for a given image Q and a point (x, y) in Q, the value of the integral image at (x, y) is the sum of the pixel values of all pixels in the rectangular region spanned by the origin of Q and the pixel (x, y).
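As an illustrative sketch (not part of the claims), the integral image of claim 3, and the constant-time rectangle sum it enables, can be written as:

```python
import numpy as np

def integral_image(img):
    """Integral image: the value at (x, y) is the sum of all pixels in
    the rectangle spanned by the image origin and (x, y)."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum over rows r0..r1 and columns c0..c1 (inclusive) using at
    most four lookups into the integral image ii."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total
```

This constant-time rectangle sum is what makes the box-filter responses of claim 4 cheap to evaluate at any scale.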
4. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 2, characterized in that the fast Hessian matrix in step (1-2) is built as follows:
for a point X = (x, y) in the given image, the Hessian matrix H(X, σ) of X at scale σ is defined as
\[
H(X,\sigma)=\begin{pmatrix}L_{xx}(X,\sigma)&L_{xy}(X,\sigma)\\L_{xy}(X,\sigma)&L_{yy}(X,\sigma)\end{pmatrix};
\]
where L_xx(X, σ) is the convolution of the image at the point X with the Gaussian second-order partial derivative ∂²g(σ)/∂x², L_xy(X, σ) is the convolution of the image at X with ∂²g(σ)/∂x∂y, and L_yy(X, σ) is the convolution of the image at X with ∂²g(σ)/∂y²;
let D_xx, D_yy, and D_xy be the results of convolving the image with the box filters in the x, y, and xy directions, respectively; the approximation of the Hessian determinant is then
\[
\det(H_{\mathrm{approx}})=D_{xx}D_{yy}-(0.9\,D_{xy})^2.
\]
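A minimal sketch of the determinant approximation above, applied per pixel to the box-filter response maps; the 0.9 weight is the correction for approximating Gaussian second derivatives with box filters, as in SURF. The function name is illustrative.

```python
import numpy as np

def hessian_det_approx(Dxx, Dyy, Dxy):
    """Per-pixel approximate Hessian determinant:
        det(H_approx) = Dxx*Dyy - (0.9*Dxy)**2
    where Dxx, Dyy, Dxy are box-filter responses in the x, y, and xy
    directions (scalars or arrays of equal shape)."""
    Dxx, Dyy, Dxy = (np.asarray(a, dtype=float) for a in (Dxx, Dyy, Dxy))
    return Dxx * Dyy - (0.9 * Dxy) ** 2
```

Thresholding this map and applying non-maximum suppression, as in claim 2, yields the candidate feature points.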
5. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 1, characterized in that the feature description of step (2) proceeds as follows:
(2-1) smooth and denoise each sampled point in the key point neighborhood;
(2-2) determine the principal direction of the feature point: select receptive fields that are symmetric about the center to estimate the gradient, and take this gradient as the principal direction of the feature point; letting the point set G be the set of comparison pairs used to compute the gradient, the principal direction O of the feature point is computed as
\[
O=\frac{1}{M}\sum_{P_o\in G}\left(I(P_o^{\gamma_1})-I(P_o^{\gamma_2})\right)\frac{P_o^{\gamma_1}-P_o^{\gamma_2}}{\left\lVert P_o^{\gamma_1}-P_o^{\gamma_2}\right\rVert};
\]
where M is the number of pairs in the point set G; P_o^{γ1} and P_o^{γ2} are the two-dimensional image coordinates of the centers of the compared receptive fields; I(P_o^{γ1}) and I(P_o^{γ2}) are the corresponding gray values; and γ1 and γ2 denote the two receptive fields whose intensities are compared;
(2-3) rotate the feature point neighborhood according to the principal direction of the feature point and the distribution of the retinal model, sample the rotated neighborhood with the retinal model, and generate the FREAK binary feature descriptor F according to the following formula:
\[
F=\sum_{0\le\alpha<N}2^{\alpha}\,T(P_\alpha);
\]
where P_α denotes a pair of sampling receptive fields; N is the descriptor length, i.e. the number of receptive-field pairs (if the total number of receptive fields is M, then N = M(M−1)/2); α is the left-shift value of the binary description; and
\[
T(P_\alpha)=\begin{cases}1,&I(P_\alpha^{\gamma_1})-I(P_\alpha^{\gamma_2})>0\\0,&\text{otherwise,}\end{cases}
\]
where the image information I used for the comparison pair P_α is the sum of image gray values or the regional average value, and γ1 and γ2 denote the two receptive fields whose intensities are compared;
(2-4) obtain, from the existing receptive-field pairs, the comparison pairs with high variance and low correlation, by the following steps:
(2-4-1) create a matrix D in which each row is the descriptor of one sampled point;
(2-4-2) compute the mean of each column; the difference between the mean and 0.5 represents the variance of that column;
(2-4-3) sort the columns by this difference in ascending order;
(2-4-4) retain the column with the smallest difference (i.e. the highest variance), then iteratively select from the remaining columns those with low correlation to the retained columns, until the FREAK feature descriptor reaches the predetermined dimension; the FREAK feature vector is thereby obtained.
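The selection of steps (2-4-1) through (2-4-4) can be sketched as follows. This is an illustrative sketch, not part of the claims: the correlation threshold value and the function name are assumptions. Note that for a 0/1 column, a mean near 0.5 means maximal variance, which is why the column with the smallest |mean − 0.5| is kept first.

```python
import numpy as np

def select_pairs(D, n_pairs, corr_thresh=0.2):
    """Greedy selection of high-variance, decorrelated binary tests.

    D: (n_keypoints, n_candidate_pairs) 0/1 matrix, one row per
    sampled keypoint (step 2-4-1). Columns are ranked by the distance
    of their mean from 0.5 (steps 2-4-2 and 2-4-3), then added
    greedily while their correlation with every already-kept column
    stays below corr_thresh (step 2-4-4). Returns up to n_pairs
    column indices.
    """
    D = np.asarray(D, dtype=float)
    order = np.argsort(np.abs(D.mean(axis=0) - 0.5))  # highest variance first
    kept = [order[0]]
    for j in order[1:]:
        if len(kept) == n_pairs:
            break
        corrs = [abs(np.corrcoef(D[:, j], D[:, k])[0, 1]) for k in kept]
        if max(corrs) < corr_thresh:
            kept.append(j)
    return kept
```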
6. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 5, characterized in that in step (2-1) each sampled point in the key point neighborhood is smoothed and denoised with a different Gaussian kernel according to the distribution of the retinal model, the size of the Gaussian kernel function growing exponentially with the distance between the sampled point and the center point.
7. The FREAK-based rapid positioning method for high-speed, high-density packaged components according to claim 1, characterized in that the feature matching of step (3) proceeds as follows:
(3-1) perform nearest-neighbor matching using the Hamming distance as the similarity measure: first compare the descriptive features of the leading 16 bytes, which represent the coarse (blurred) information; if the matching distance is less than the set threshold, the matched feature-point pair is obtained and step (3-2) is entered;
(3-2) substitute each matched feature-point pair (x1, y1) and (x2, y2) into the following affine transformation formula:
\[
\begin{pmatrix} x_2 \\ y_2 \end{pmatrix}
= \begin{pmatrix} m'' \\ n'' \end{pmatrix}
+ s \begin{pmatrix} \cos\beta'' & -\sin\beta'' \\ \sin\beta'' & \cos\beta'' \end{pmatrix}
\begin{pmatrix} x_1 \\ y_1 \end{pmatrix};
\]
solve by the least-squares method to obtain the transformation parameters (m″, n″, β″, s); average the transformation parameters (m″, n″, β″) over all matched feature-point pairs to obtain the rough transformation relation (m, n, β) between I(x, y) and f(x, y).
CN201310562520.7A 2013-11-12 2013-11-12 FREAK-based high-speed high-density packaging component rapid location method Pending CN103679193A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310562520.7A CN103679193A (en) 2013-11-12 2013-11-12 FREAK-based high-speed high-density packaging component rapid location method

Publications (1)

Publication Number Publication Date
CN103679193A true CN103679193A (en) 2014-03-26

Family

ID=50316681

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310562520.7A Pending CN103679193A (en) 2013-11-12 2013-11-12 FREAK-based high-speed high-density packaging component rapid location method

Country Status (1)

Country Link
CN (1) CN103679193A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080013852A1 (en) * 2003-05-22 2008-01-17 Ge Medical Systems Global Technology Co., Llc. Systems and Methods for Optimized Region Growing Algorithm for Scale Space Analysis
CN102880866A (en) * 2012-09-29 2013-01-16 宁波大学 Method for extracting face features

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Alexandre Alahi et al., "FREAK: Fast Retina Keypoint", 2012 IEEE Conference on Computer Vision and Pattern Recognition *
Mai Qian, "Research on registration-based positioning algorithms for novel surface-mounted components", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108027248A (en) * 2015-09-04 2018-05-11 克朗设备公司 The industrial vehicle of positioning and navigation with feature based
CN106128037A (en) * 2016-07-05 2016-11-16 董超超 A kind of monitoring early-warning device for natural disaster
CN106980852A (en) * 2017-03-22 2017-07-25 嘉兴闻达信息科技有限公司 Based on Corner Detection and the medicine identifying system matched and its recognition methods
CN107256545A (en) * 2017-05-09 2017-10-17 华侨大学 A kind of broken hole flaw detection method of large circle machine
CN107369170A (en) * 2017-07-04 2017-11-21 云南师范大学 Image registration treating method and apparatus
CN109509166A (en) * 2017-09-15 2019-03-22 凌云光技术集团有限责任公司 Printed circuit board image detection method and device
CN108335306A (en) * 2018-02-28 2018-07-27 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
CN108335306B (en) * 2018-02-28 2021-05-18 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium
US20220284694A1 (en) * 2018-04-05 2022-09-08 Imagination Technologies Limited Sampling for feature detection in image analysis
US12131564B2 (en) * 2018-04-05 2024-10-29 Imagination Technologies Limited Sampling for feature detection in image analysis
CN109829489A (en) * 2019-01-18 2019-05-31 刘凯欣 A kind of cultural relic fragments recombination method and device based on multilayer feature
CN112386282A (en) * 2020-11-13 2021-02-23 声泰特(成都)科技有限公司 Ultrasonic automatic volume scanning imaging method and system
CN112386282B (en) * 2020-11-13 2022-08-26 声泰特(成都)科技有限公司 Ultrasonic automatic volume scanning imaging method and system
CN112464909A (en) * 2020-12-18 2021-03-09 杭州电子科技大学 Iris feature extraction method based on FREAK description
US12169959B2 (en) 2022-03-11 2024-12-17 Apple Inc. Filtering of keypoint descriptors based on orientation angle

Similar Documents

Publication Publication Date Title
CN103679193A (en) FREAK-based high-speed high-density packaging component rapid location method
CN103310453B (en) A kind of fast image registration method based on subimage Corner Feature
CN105427298B (en) Remote sensing image registration method based on anisotropic gradient metric space
CN103236064B (en) A kind of some cloud autoegistration method based on normal vector
CN102800097B (en) The visible ray of multi-feature multi-level and infrared image high registration accuracy method
CN103400384B (en) The wide-angle image matching process of calmodulin binding domain CaM coupling and some coupling
CN106504276A (en) The combinations matches cost algorithms of non local Stereo Matching Algorithm and parallax joint filling algorithm
CN102800098B (en) Multi-characteristic multi-level visible light full-color and multi-spectrum high-precision registering method
CN107301664A (en) Improvement sectional perspective matching process based on similarity measure function
CN102661708B (en) High-density packaged element positioning method based on speeded up robust features (SURFs)
CN108401565B (en) Remote sensing image registration method based on improved KAZE (Kaze-zero-average spatial interpolation) feature and Pseudo-RANSAC (Pseudo-random sample consensus) algorithm
CN102819839B (en) High-precision registration method for multi-characteristic and multilevel infrared and hyperspectral images
CN107909018B (en) A Robust Multimodal Remote Sensing Image Matching Method and System
CN102169581A (en) Feature vector-based fast and high-precision robustness matching method
CN106340010B (en) A Corner Detection Method Based on Second-Order Contour Difference
CN108154066B (en) A 3D Object Recognition Method Based on Curvature Feature Recurrent Neural Network
CN116452644A (en) Three-dimensional point cloud registration method and device based on feature descriptors and storage medium
CN110738695B (en) A Method for Eliminating Mismatched Image Feature Points Based on Local Transformation Model
CN107180436A (en) A kind of improved KAZE image matching algorithms
CN106056122A (en) KAZE feature point-based image region copying and pasting tampering detection method
CN111199558A (en) Image matching method based on deep learning
CN111814895B (en) Salient object detection method based on absolute and relative depth induced network
CN106529548A (en) Sub-pixel level multi-scale Harris corner detection algorithm
CN106651756B (en) An Image Registration Method Based on SIFT and Verification Mechanism
CN101894369B (en) A Real-time Method for Computing Camera Focal Length from Image Sequence

Legal Events

Date Code Title Description
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20140326)