CN101493891A - Characteristic extracting and describing method with mirror plate overturning invariability based on SIFT - Google Patents
- Publication number: CN101493891A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
Abstract
The invention belongs to the technical field of image processing and relates to a SIFT-based feature extraction and description method with mirror-flip invariance, comprising the following steps: (1) convolving the input image with a Gaussian kernel; (2) applying difference-of-Gaussian processing to the image and detecting its extrema; (3) screening the feature points; (4) accurately localizing the feature-point positions; (5) determining the orientation parameter of every feature point; (6) summing the gradient magnitudes on the two sides of the dominant orientation; (7) organizing the pixel cells inside the Gaussian-weighted window, then encoding and normalizing them to generate the image description data. The invention increases the robustness of feature extraction and description against the mirror-imaging problem and broadens the application domain of computer vision.
Description
Technical Field
The invention belongs to the technical field of image processing and relates to an image feature extraction method.
Background
Computer technology is developing rapidly, and the application fields of computer vision and image retrieval keep widening, which also underscores their importance. Popular topics such as 3D reconstruction, object recognition, camera calibration, and binocular robot navigation are all built on computer vision, so solving the open problems of computer vision, or improving its imperfect parts, in a reasonable and effective way can give the computer field and even the scientific community a huge push. Computer vision is built on the idea of letting a computer simulate human (mammalian) vision to reach a certain degree of intelligence; together with image retrieval, it requires extracting and analyzing image features, so the definition of image features and the extraction scheme play a pivotal role.

Many solutions exist today. The common ones are gradient-based feature extraction and description methods, including the Harris corner detector [1], SIFT [2], SURF [3], HOG [4], and GLOH [5]. Besides gradient-based methods there are others, such as contour-based extraction; since the MIFT method proposed by the present invention is gradient-based, the other methods are not described further. The Harris corner detector extracts feature points that are invariant to rotation and illumination at the image's own scale. In fact it does not extract only corners, as its name suggests, but all feature points with significant gradients in multiple directions. However, the Harris corner detector is rather limited, because it is very sensitive to changes of image scale. To remove or weaken the influence of scale changes, Lowe proposed SIFT (Scale-Invariant Feature Transform), which solves the problems caused by scaling while also guaranteeing rotation invariance and even tolerating, to some extent, illumination changes, affine transformation, and occlusion. SURF is, simply put, an accelerated version of SIFT. Both adopt a support-region strategy: a region is designated around each feature point, and the pixels in that region jointly determine the feature point's description. The difference is that SIFT weights the pixels in the region according to their contribution to the feature point, whereas SURF uses an equal-weight strategy based on the integral image. HOG combined with an SVM (support vector machine) provides a human detection method based on gradient information. GLOH is another variant of SIFT; it organizes the support region with a circular layout in order to enhance the robustness and distinctiveness of the features. The initial GLOH descriptor has 272 dimensions, but a PCA (principal component analysis) step reduces them to the same 128 dimensions as SIFT, improving the efficiency of matching without losing key information. Although all the methods above handle rotation, scale change, and even illumination change and affine deformation well, almost none of them considers mirror imaging, which is very common in real life: reflections on water, images in mirrors, symmetric objects observed from different viewpoints, and so on.
References:

[1] C. Harris and M. J. Stephens, "A combined corner and edge detector," Alvey Vision Conference, vol. 20, pp. 147-152, 1988.

[2] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, pp. 91-110, 2004.

[3] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," European Conference on Computer Vision, pp. 404-417, 2006.

[4] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," International Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 886-893, 2005.

[5] K. Mikolajczyk and C. Schmid, "A performance evaluation of local descriptors," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp. 1615-1630, 2005.
Summary of the Invention
The purpose of the present invention is to solve the failure of the SIFT feature extraction method under mirror flipping while keeping all of SIFT's advantages and performance. In other words, it provides a feature extraction and description method that keeps the same descriptor form before and after flipping, i.e. one with mirror-flip invariance. To this end, the present invention adopts the following technical scheme:
A SIFT-based feature extraction and description method with mirror-flip invariance, comprising the following steps:
Step 1: Convolve the input image I(x, y) with a Gaussian kernel, L(x, y, σ) = G(x, y, σ) * I(x, y), to obtain the multi-scale representation L(x, y, σ), where G(x, y, σ) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²));
Step 2: Apply difference-of-Gaussian processing to the multi-scale representation L(x, y, σ) according to D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ), and detect the extrema of L(x, y, σ);
Step 3: Screen the feature points using a threshold test and the Hessian-matrix test;
Step 4: Accurately localize the feature-point positions by fitting a three-dimensional quadratic function;
Step 5: Determine the orientation parameter of every feature point from the gradient direction θ(x, y) and magnitude m(x, y) of the pixels in its support neighborhood, where m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²) and θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)));
Step 6: Sum the gradient magnitudes on the two sides of the dominant orientation, computing m_r = Σ_{i=1..N_bin/2−1} L_((n_d+i) % N_bin) and m_l = Σ_{i=1..N_bin/2−1} L_((n_d−i) % N_bin), where N_bin is the number of orientation bins, n_d the index of the dominant orientation, L_i the gradient magnitude accumulated in bin i, and % the modulo operation;
Step 7: If m_r > m_l, organize the pixel cells in the Gaussian-weighted window from top to bottom and from right to left, and encode the gradients in each cell clockwise starting from the gradient angle at 0 degrees relative to the dominant orientation; otherwise organize the cells from top to bottom and from left to right, and encode the gradients in each cell counterclockwise starting from the gradient angle at 0 degrees relative to the dominant orientation; then normalize to generate the image description data.
The invention increases the robustness of feature extraction and description against the mirror-imaging problem and broadens the application domain of computer vision. None of the current feature extraction and description methods in computer vision considers or handles mirror imaging: although methods such as the Harris corner detector, SIFT, SURF, and GLOH stay stable under rotation, scale, and illumination changes, they are helpless in the mirror-imaging case. The present invention targets exactly this situation; while successfully handling mirror imaging, it performs comparably to SIFT in the non-mirrored case. The comparison between the proposed method, called MIFT here, and SIFT under mirror imaging is shown in Figure 1: with the same threshold of 0.60, MIFT matches 258 feature-point pairs while SIFT matches 12. Figure 2 compares the matching results on several more complex images.
Brief Description of the Drawings
Figure 1: Matching results under mirror imaging. Figure 1(a): MIFT matching result; Figure 1(b): SIFT matching result.

Figure 2: Comparison of MIFT and SIFT matching results. (a) MIFT matching result, non-mirrored case. (b) SIFT matching result, non-mirrored case. (c) MIFT matching result, mirrored case. (d) SIFT matching result, mirrored case.
Figure 3: Schematic representation of an image at three adjacent scales.
Figure 4: Analysis of the feature-descriptor organization before and after image flipping. (a) A feature point and its support region on the unflipped image. (b) The same feature point and its support region on the flipped image. (c) The 8 gradients of the 14th cell in (b). (d) The 8 gradients of the 14th cell in (a). (e) The feature descriptor of both SIFT and MIFT in case (a). (f) The SIFT feature descriptor in case (b). (g) The MIFT feature descriptor in case (b).
Figure 5: Gradient information of a feature point. In the figure, n_d is the index of the dominant orientation and % denotes the modulo operation.
Figure 6: Flow chart of feature detection and feature description.

Figure 7: Flow chart of feature matching.
Detailed Description of the Embodiments
The present invention first applies scale changes to the input image via Gaussian convolution. In the resulting series of images at different scales, an extremum of the gray value is sought at every pixel. However, not all extrema meet the criteria for feature points; because feature points need a certain distinctiveness and robustness, suitable thresholds are set on the difference of Gaussians (DoG) and on the Hessian matrix to screen out candidates with low contrast and candidates with edge responses, respectively. The extrema that survive these two screening steps are the desired feature points; for these extrema, their precise coordinates and scale information are obtained by fitting a three-dimensional quadratic function. The coordinates, scale, and other information of the feature points are retained to provide usable information for the later matching stage.
The part above can be regarded as the feature-point detector: it is mainly responsible for finding the feature points of the image. The next question is how to organize the information of the feature points into a usable feature description that supports the higher-level application, feature matching. The feature-point description part consists of assigning orientation parameters from gradient statistics and constructing the feature descriptor.
For the mirror-imaging problem, images fall into four classes: the original image, the horizontally flipped image (mirror), the vertically flipped image (upside-down), and the fully flipped image (flipped both horizontally and vertically). It is easy to verify that the fully flipped image equals the original image with its coordinate system rotated by 180 degrees, and the vertically flipped and horizontally flipped images stand in the same relation to each other as the original and fully flipped images. Because the orientation-parameter assignment performed before description makes the descriptor rotation invariant, the four cases reduce to two: the original image and the horizontally flipped image. Between these two there is a fixed relationship that can be divided into two levels. Every SIFT feature point is influenced by the pixels of a region; this strategy reduces the influence of noise and improves the robustness of the feature point. In the region belonging to the same feature point, only the column order of the region's pixel cells is reversed between the original and the horizontally flipped image, while the row order stays the same; this is the higher of the two levels. The other level is the relatively microscopic cell: in each small pixel cell, according to the correspondence between the two images, the 8 gradient directions satisfy θ_hor = (π − θ_ori) mod 2π, where the subscript ori denotes the original image and hor denotes the horizontally flipped image. The encoding strategy fully exploits this two-level relationship, yielding a mirror-flip-invariant feature descriptor.
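The two-level correspondence described above can be sketched in a few lines of Python. This is an illustrative sketch, not the patent's implementation: the function names are invented here, and the angle relation assumes the standard mirroring of a gradient direction about the vertical axis under a horizontal flip.

```python
import numpy as np

def flip_cell_grid(cells):
    """Higher level: under horizontal flipping, the 4x4 grid of descriptor
    cells keeps its row order but reverses its column order."""
    return cells[:, ::-1]

def flip_gradient_angle(theta):
    """Lower level: a gradient direction theta maps to its mirror about the
    vertical axis, (pi - theta) mod 2*pi (assumed form of the relation)."""
    return (np.pi - theta) % (2 * np.pi)
```

For example, a rightward gradient (theta = 0) maps to a leftward one (theta = pi), while a vertical gradient (theta = pi/2) is unchanged, which matches the intuition of a mirror reflection.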
The present invention is further described below in conjunction with the accompanying drawings and embodiments.
Step 1: Multi-scale space representation of the input image
Koenderink and Lindeberg proved that the Gaussian kernel is the only linear convolution kernel that can realize scale transformation. The two-dimensional Gaussian function has the form G(x, y, σ) = (1/(2πσ²)) · exp(−(x² + y²)/(2σ²)), and convolving it with the input image yields the scale-space representation L(x, y, σ) = G(x, y, σ) * I(x, y).
Step 2: Detecting scale-space extrema
In the multi-scale images obtained in step 1, extrema with the pixel as the observation unit can be found by computing and comparing over the entire scale space. Besides comparing against a certain number of pixels at adjacent scales, neighboring pixels within the same two-dimensional image are also compared, so that the extrema are detected comprehensively. The present invention uses the DoG (difference-of-Gaussian) operator to approximate the LoG (Laplacian-of-Gaussian) when detecting extrema; DoG is slightly less accurate than LoG, but it is faster to compute. The DoG operator is defined as:

D(x, y, σ) = (G(x, y, kσ) − G(x, y, σ)) * I(x, y) = L(x, y, kσ) − L(x, y, σ).
Figure 3 shows an image represented at three adjacent scales. The black cross is the pixel currently being examined, and the gray dots are all the pixels it must be compared against, 2×9 + 8 = 26 points in total.
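As a rough sketch of this step (not the patent's implementation; `scipy.ndimage.gaussian_filter` stands in for the Gaussian convolution, and the constants are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_pyramid(image, sigma=1.6, k=2 ** 0.5, levels=4):
    """Build L(x, y, sigma) at successive scales and return their differences,
    i.e. D(x, y, sigma) = L(x, y, k*sigma) - L(x, y, sigma)."""
    blurred = [gaussian_filter(image.astype(float), sigma * k ** i)
               for i in range(levels)]
    return [blurred[i + 1] - blurred[i] for i in range(levels - 1)]

def is_extremum(dog, s, y, x):
    """Compare a pixel against its 2*9 + 8 = 26 neighbours across three
    adjacent DoG scales, as illustrated in Figure 3."""
    patch = np.stack([dog[s - 1][y - 1:y + 2, x - 1:x + 2],
                      dog[s][y - 1:y + 2, x - 1:x + 2],
                      dog[s + 1][y - 1:y + 2, x - 1:x + 2]])
    centre = dog[s][y, x]
    others = np.delete(patch.ravel(), 13)  # index 13 is the centre pixel
    return bool((centre > others).all() or (centre < others).all())
```

A point is kept as a candidate only if it is strictly larger or strictly smaller than all 26 neighbours.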
Step 3: Screening feature points from the extrema

The extrema obtained in the two steps above form the candidate set of feature points, and the feature points are screened from the candidates in this set; that is, not every extremum satisfies the requirements of a feature point. The set still contains low-contrast points and points with edge responses, whose distinctiveness and stability are not prominent enough to serve as image features, so two different strategies are used to eliminate these two kinds of points. First, while computing the difference of Gaussians (DoG), an appropriate threshold is set to effectively eliminate low-contrast extrema. Second, since the DoG itself produces edge responses, the Hessian-matrix method is used to filter out points with edge responses (saddle points).
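The Hessian edge-response test can be sketched as follows, borrowing the principal-curvature-ratio criterion tr(H)²/det(H) < (r+1)²/r from Lowe's SIFT; the threshold r = 10 is an illustrative choice, not a value given by the patent.

```python
import numpy as np

def passes_edge_test(dog, y, x, r=10.0):
    """Reject edge-like extrema using the 2x2 Hessian of the DoG image:
    keep the point only if tr(H)^2 / det(H) < (r + 1)^2 / r."""
    dxx = dog[y, x + 1] + dog[y, x - 1] - 2.0 * dog[y, x]
    dyy = dog[y + 1, x] + dog[y - 1, x] - 2.0 * dog[y, x]
    dxy = 0.25 * (dog[y + 1, x + 1] - dog[y + 1, x - 1]
                  - dog[y - 1, x + 1] + dog[y - 1, x - 1])
    tr = dxx + dyy
    det = dxx * dyy - dxy * dxy
    if det <= 0:  # principal curvatures of opposite sign: saddle point
        return False
    return bool(tr * tr / det < (r + 1.0) ** 2 / r)
```

An isotropic blob has two comparable curvatures and passes; a ridge (edge) has one curvature near zero and is rejected.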
Step 4: Accurately localizing the feature-point positions

After the operations above the feature points are determined, but the scale transformations and the finite size of the pixel unit may introduce some deviation into the feature points' coordinates and scale information. To keep the feature-point information accurate, a three-dimensional quadratic function is fitted to the data, yielding more precise coordinate and scale information.
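A minimal sketch of the quadratic refinement, assuming the usual central-difference gradient and Hessian of the DoG over (scale, y, x) and the sub-pixel offset x̂ = −H⁻¹∇D; the function name and argument layout are illustrative.

```python
import numpy as np

def refine_offset(d_prev, d_cur, d_next, y, x):
    """Fit a 3-D quadratic to the DoG values around (scale, y, x) and return
    the sub-pixel offset -H^{-1} grad(D) as (d_scale, d_y, d_x)."""
    # First derivatives by central differences over (scale, y, x).
    g = np.array([
        0.5 * (d_next[y, x] - d_prev[y, x]),
        0.5 * (d_cur[y + 1, x] - d_cur[y - 1, x]),
        0.5 * (d_cur[y, x + 1] - d_cur[y, x - 1]),
    ])
    # Second derivatives (Hessian) by central differences.
    dss = d_next[y, x] + d_prev[y, x] - 2 * d_cur[y, x]
    dyy = d_cur[y + 1, x] + d_cur[y - 1, x] - 2 * d_cur[y, x]
    dxx = d_cur[y, x + 1] + d_cur[y, x - 1] - 2 * d_cur[y, x]
    dsy = 0.25 * (d_next[y + 1, x] - d_next[y - 1, x]
                  - d_prev[y + 1, x] + d_prev[y - 1, x])
    dsx = 0.25 * (d_next[y, x + 1] - d_next[y, x - 1]
                  - d_prev[y, x + 1] + d_prev[y, x - 1])
    dyx = 0.25 * (d_cur[y + 1, x + 1] - d_cur[y + 1, x - 1]
                  - d_cur[y - 1, x + 1] + d_cur[y - 1, x - 1])
    H = np.array([[dss, dsy, dsx], [dsy, dyy, dyx], [dsx, dyx, dxx]])
    return -np.linalg.solve(H, g)
```

A peak exactly on the sample grid yields a zero offset; a peak shifted a quarter pixel in x yields an offset of 0.25 in the x component.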
Step 5: Determining the orientation parameter

To make the feature points rotation invariant, an orientation parameter is assigned to every feature point from the gradient directions and magnitudes of the pixels in its support neighborhood. The gradient magnitude m(x, y) and direction angle θ(x, y) are computed from differences between pixels:

m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²),
θ(x, y) = tan⁻¹((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y))).

The information is then collected in a histogram covering 0-360 degrees with 10-degree bins, combined with a Gaussian-weighted window centered on the feature point, accumulating the gradient magnitude and direction of every pixel in the neighborhood. The strongest of the 36 directions in the histogram is the dominant orientation. Because of noise or deformation the image may change or distort slightly, which can bias the dominant-orientation parameter of the feature points; to mitigate or avoid the influence of these deviations on the orientation parameter, the present invention uses auxiliary orientations, just as SIFT does. An auxiliary orientation is any direction whose strength exceeds 80% of the dominant orientation's strength, and there may be several. In fact, when descriptors are generated each auxiliary orientation is treated as equally important as the dominant orientation, and a separate descriptor is built for it.
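A sketch of this orientation-histogram step under the stated conventions (36 bins of 10 degrees, auxiliary orientations at 80% of the peak); the nearest-bin assignment and function name are assumptions made for illustration.

```python
import numpy as np

def orientation_peaks(magnitudes, angles_deg, weights, n_bins=36, ratio=0.8):
    """Accumulate a 36-bin (10-degree) orientation histogram of Gaussian-
    weighted gradient magnitudes, then return the dominant orientation plus
    any auxiliary orientations above `ratio` of the peak strength."""
    hist = np.zeros(n_bins)
    bins = (angles_deg.astype(int) % 360) // (360 // n_bins)
    np.add.at(hist, bins, magnitudes * weights)  # unbuffered accumulation
    peak = hist.max()
    return [b * (360 // n_bins) for b in range(n_bins) if hist[b] >= ratio * peak]
```

A secondary direction just below 80% of the peak is dropped; one above it becomes an auxiliary orientation with its own descriptor.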
Step 6: Encoding the feature-point descriptor

The feature descriptor used by the present invention is represented by a vector containing the gradient information of all pixels in the Gaussian-weighted window. The vector carries 4×4×8 = 128 dimensions, a dimensionality Lowe showed to be successful; MIFT likewise uses a 128-dimensional vector for the descriptor, although this size is not mandatory. Taking Figure 4 as an example, the two levels are analyzed separately: (a) shows a feature point and its support region in the unflipped image, and (b) shows the same feature point and its support region in the flipped image; in both cases the orientation parameter has already been assigned.
First consider the more macroscopic level of the descriptor: the order of the 16 cells. After the dominant orientation is fixed, SIFT organizes the 16 cells in a fixed, column-major order (row-major would also work), as shown in Figure 4; the resulting SIFT descriptor vector is shown in (e). In the flipped image, case (b), the row order of the 16 cells is unchanged but the column order is reversed, so the SIFT descriptor vector in case (b) is organized as in (f). It is beyond doubt that SIFT is stable and robust against rotation, scaling, illumination, and affine transformation, but it is equally undeniable that SIFT is powerless against mirror imaging. The present invention therefore proposes an encoding method that yields a descriptor of identical form before and after flipping. With column-major encoding there are only two possible orders, right-to-left and its reverse. Intuitively, the left- and right-pointing gradient magnitudes (dash-dotted lines in the figure) could serve as the basis for choosing between the two orders, but using only the left- and right-pointing gradients is sensitive to noise, so the invention instead sums all gradient magnitudes pointing to the same side. As shown in Figure 5, this is expressed mathematically as:
m_r = Σ_{i=1..N_bin/2−1} L_((n_d+i) % N_bin),  m_l = Σ_{i=1..N_bin/2−1} L_((n_d−i) % N_bin),

where N_bin is the total number of orientation bins (here N_bin = 36), n_d is the index of the dominant orientation, L_i is the gradient magnitude accumulated in bin i, and % denotes the modulo operation. m_r and m_l are the sums of the lower-right dashed arrows and the upper-left dash-dotted arrows in Figure 5, respectively. Accordingly, the encoding strategy changes from the original fixed order to one decided by comparing m_r and m_l. In theory, the proposed method produces the same descriptor before and after flipping, as shown in Figure 4(g). Similar to the dominant/auxiliary-orientation principle, to reduce the influence of noise and of changing illumination conditions and to strengthen the robustness of MIFT, if min{m_r, m_l} > τ·max{m_r, m_l} (where τ is a threshold, set here to 0.70), a second descriptor is generated as well.
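A sketch of the m_r/m_l decision, assuming the bin-indexing convention written out above; the function name and the returned values are illustrative, not the patent's notation.

```python
import numpy as np

def encoding_direction(hist, n_d, tau=0.70):
    """Sum the gradient magnitudes on each side of the dominant orientation
    n_d in a 36-bin histogram and decide the cell-scanning direction.
    Returns 'right' or 'left', plus True when the two sums are so close
    (min > tau * max) that a second descriptor should also be generated."""
    n_bins = len(hist)
    m_r = sum(hist[(n_d + i) % n_bins] for i in range(1, n_bins // 2))
    m_l = sum(hist[(n_d - i) % n_bins] for i in range(1, n_bins // 2))
    ambiguous = bool(min(m_r, m_l) > tau * max(m_r, m_l))
    return ('right' if m_r > m_l else 'left'), ambiguous
```

When m_r and m_l are nearly equal, the decision is fragile under noise, which is exactly the case where the second descriptor hedges the bet.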
The second level is more microscopic: the relation between the direction gradients inside each cell. Figures 4(c) and (d) correspond to the same part of the image after and before flipping, and the relation between their gradients is as described above. Using this relation together with the first-level analysis, the final descriptor is generated as follows: compute and compare m_r and m_l, and let the larger one set the encoding direction. Referring to Figure 4, in case (a) m_r < m_l, so the 16 cells are encoded top to bottom and left to right, and the 8 gradient directions in each cell are encoded counterclockwise starting from A, as shown in (d); in case (b) m_r > m_l, so the 16 cells are encoded top to bottom and right to left, and the 8 gradient directions in each cell are encoded clockwise starting from A, as shown in (c). The descriptors obtained in the two cases are identical, which means this description can handle mirror-imaging problems. Finally, to increase the descriptor's stability under changing illumination conditions, a normalization step is indispensable. The flow of feature detection and description is shown in Figure 6.
在对输入图像的处理、特征点的检测、特征描述之后,特征匹配的部分是必不可少的,同时也需要精心的设计以达到尽可能多的匹配正确的特征点对,减少错误匹配的目的。SIFT所采用的匹配方法是通过图像2中当前待匹配特征点描述向量与图像1中所有的特征点描述向量进行内积运算而得到一组数值,这些数值由小到大的排序,排在第一位的也就是数值最小的结果所对应的一组特征点是所需判定是否匹配的对象。倘若最小的数值与第二小的数值之间的比值低于某阈值,那么最小数值所对应的两个特征点被认为是匹配的,否则,没有匹配。这种方法与一般思路(即设置全局阈值的方法)相比,具有一定稳定性和合理性,因为图像的各种变形和噪音都会对此产生影响,那么一个固定指标是不能有效合理的解释匹配与否的问题。After the processing of the input image, the detection of feature points, and the feature description, the part of feature matching is essential, and it also needs to be carefully designed to match as many correct feature point pairs as possible and reduce the purpose of wrong matching. . The matching method adopted by SIFT is to obtain a set of values by performing an inner product operation on the current feature point description vector to be matched in image 2 and all the feature point description vectors in image 1. These values are sorted from small to large, ranking first A group of feature points corresponding to the result with the smallest value is the object to be judged whether to match. If the ratio between the smallest value and the second smallest value is lower than a certain threshold, then the two feature points corresponding to the smallest value are considered to match, otherwise, there is no match. Compared with the general idea (that is, the method of setting the global threshold), this method has certain stability and rationality, because various deformations and noises of the image will affect this, so a fixed index cannot effectively and reasonably explain the matching question of whether or not.
Similarly, MIFT adopts a matching strategy like SIFT's, but improves upon it; the improvement reduces or even avoids wrongly discarding point pairs that should match. The inner product of two unit vectors gives the cosine of the angle between them: the smaller that angle, the closer the two vectors and the higher the matching degree. Because a single feature point may generate multiple descriptors, owing to secondary orientations and to near-equality of mr and ml, the likelihood of several highly similar descriptors increases. To reduce the effect of this on matching, the invention improves SIFT's matching method by checking the descriptors' auxiliary information: if the two feature point pairs corresponding to the two compared values share the same coordinates, scale, and other attributes, the pair with the larger value is skipped and the comparison moves on to the next value, until the auxiliary information of the two pairs differs. If the ratio of the two values is then below a threshold, the pair corresponding to the smallest value is declared a match; otherwise there is no match. The feature matching flow is shown in Figure 7.
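The duplicate-skipping step can be sketched like this. A minimal illustration under stated assumptions: `candidates` is a list of `(distance, keypoint_meta)` pairs already sorted ascending by distance, where `keypoint_meta` (e.g. an `(x, y, scale)` tuple) identifies the physical keypoint that produced a descriptor; the names and the 0.8 ratio are illustrative, not from the patent text.

```python
def mift_match(candidates, ratio=0.8):
    """Improved ratio test (sketch): skip runner-up candidates that
    are duplicate descriptors of the same physical keypoint as the
    best candidate, then apply the ratio test against the first
    candidate from a genuinely different keypoint."""
    best_d, best_meta = candidates[0]
    for d, meta in candidates[1:]:
        if meta != best_meta:              # first distinct keypoint found
            if best_d < ratio * d:
                return best_meta           # accept the match
            return None                    # ratio test failed
    return None                            # every candidate was a duplicate
```

Without the skip, a second descriptor of the same keypoint (e.g. one generated for a secondary orientation) would sit in second place with a distance nearly equal to the best, the ratio test would fail, and a correct match would be wrongly discarded.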
Claims (1)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2009100679878A CN101493891B (en) | 2009-02-27 | 2009-02-27 | Characteristic extracting and describing method with mirror plate overturning invariability based on SIFT |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101493891A true CN101493891A (en) | 2009-07-29 |
CN101493891B CN101493891B (en) | 2011-08-31 |
Family
ID=40924483
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101794395A (en) * | 2010-03-11 | 2010-08-04 | 合肥金诺数码科技股份有限公司 | Image matching positioning method based on Sift algorithm |
CN101937506A (en) * | 2010-05-06 | 2011-01-05 | 复旦大学 | Near Copy Video Detection Method |
CN102004921A (en) * | 2010-11-24 | 2011-04-06 | 上海电机学院 | Target identification method based on image characteristic analysis |
CN102043960A (en) * | 2010-12-03 | 2011-05-04 | 杭州淘淘搜科技有限公司 | Image grey scale and gradient combining improved sift characteristic extracting method |
CN102074015A (en) * | 2011-02-24 | 2011-05-25 | 哈尔滨工业大学 | Two-dimensional image sequence based three-dimensional reconstruction method of target |
CN102375984A (en) * | 2010-08-06 | 2012-03-14 | 夏普株式会社 | Characteristic quantity calculating device, image connecting device, image retrieving device and characteristic quantity calculating method |
CN102054269B (en) * | 2009-10-27 | 2012-09-05 | 华为技术有限公司 | Method and device for detecting feature point of image |
CN102663762A (en) * | 2012-04-25 | 2012-09-12 | 天津大学 | Segmentation method of symmetrical organs in medical image |
CN103034860A (en) * | 2012-12-14 | 2013-04-10 | 南京思创信息技术有限公司 | Scale-invariant feature transform (SIFT) based illegal building detection method |
CN103208000A (en) * | 2012-12-28 | 2013-07-17 | 青岛科技大学 | Method for extracting characteristic points based on fast searching of local extrema |
US8576098B2 (en) | 2011-03-07 | 2013-11-05 | Industrial Technology Research Institute | Device and method for compressing feature descriptor |
CN106778771A (en) * | 2016-11-22 | 2017-05-31 | 上海师范大学 | A kind of new two-value SIFT descriptions and its image matching method |
CN107169458A (en) * | 2017-05-18 | 2017-09-15 | 深圳云天励飞技术有限公司 | Data processing method, device and storage medium |
CN108520265A (en) * | 2012-07-09 | 2018-09-11 | 西斯维尔科技有限公司 | Method for converting image descriptor and associated picture processing equipment |
CN109047026A (en) * | 2018-08-02 | 2018-12-21 | 重庆科技学院 | A kind of ore screening system and method |
CN110969145A (en) * | 2019-12-19 | 2020-04-07 | 珠海大横琴科技发展有限公司 | Remote sensing image matching optimization method and device, electronic equipment and storage medium |
CN111309956A (en) * | 2017-02-13 | 2020-06-19 | 哈尔滨理工大学 | Image retrieval-oriented extraction method |
CN111429394A (en) * | 2019-01-08 | 2020-07-17 | 阿里巴巴集团控股有限公司 | Image-based detection method and device, electronic equipment and storage medium |
Legal Events
Code | Title | Description
---|---|---
C06 | Publication |
PB01 | Publication |
C10 | Entry into substantive examination |
SE01 | Entry into force of request for substantive examination |
C14 | Grant of patent or utility model |
GR01 | Patent grant |
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20110831; Termination date: 20210227