
CN104318548A - Rapid image registration implementation method based on space sparsity and SIFT feature extraction - Google Patents


Info

Publication number
CN104318548A
Authority
CN
China
Prior art keywords
image
matched
reference image
matrix
feature
Prior art date
Legal status
Granted
Application number
CN201410531222.6A
Other languages
Chinese (zh)
Other versions
CN104318548B (en)
Inventor
钟桦
焦李成
王海明
王爽
侯彪
田小林
熊涛
刘红英
Current Assignee
Xidian University
Original Assignee
Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201410531222.6A
Publication of CN104318548A
Application granted
Publication of CN104318548B
Expired - Fee Related
Anticipated expiration


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/35: Determination of transform parameters for the alignment of images using statistical methods
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/10044: Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a fast image registration method based on spatial sparsity and SIFT feature extraction, which mainly overcomes the feature mismatching caused by the unstable feature points that the classic SIFT feature extraction algorithm often extracts in smooth and textured regions. The steps are: 1) extract the sparse regions of the reference image and the image to be matched; 2) extract SIFT feature points from the sparse regions of the two images; 3) coarsely match the SIFT feature point sets extracted from the reference image and the image to be matched; 4) filter false matches out of the coarse matching result with a random sample consensus estimation algorithm; 5) using the final matching point pairs of the reference image and the image to be matched, obtain the affine transformation parameters through affine transformation and so register the two SAR images. The invention improves registration efficiency while preserving the accuracy of the classic SIFT feature registration algorithm, and can be used for the registration of SAR images.

Description

A Fast Image Registration Method Based on Spatial Sparsity and SIFT Feature Extraction

Technical Field

The invention belongs to the technical field of image processing and relates to image registration, in particular to a fast image registration method based on spatial sparsity and SIFT feature extraction. It can be used for the preliminary registration step in SAR image change detection, fusion, mosaicking, and similar tasks.

Background Art

Synthetic aperture radar (SAR) imagery offers all-weather, day-and-night operation, high resolution, and strong penetration, so it is widely applied to target recognition, change detection, and maritime surveillance, and has become one of the most representative means of Earth observation. Before SAR images can be mosaicked, fused, or compared for change detection, images of the same area acquired from different viewpoints at the same time, or by the same sensor at different times, must be spatially registered to remove the translation, rotation, scaling, and local deformation introduced by differences in acquisition time, viewing angle, environment, and sensor imaging mechanism.

As a basic task in remote sensing image processing, registration is the process of geometrically aligning two or more images acquired at different times, from different viewpoints, or by different sensors. For SAR images, the speckle noise formed during coherent imaging distorts the gray-level distribution; in addition, differences in imaging conditions, acquisition time, season, and scene content give the images complex and unstable gray-level statistics. As a result, algorithms and techniques that work well on optical remote sensing images are repeatedly frustrated when applied to SAR images.

A large number of registration methods have been proposed. According to the feature space used, they generally fall into two categories: intensity-based methods and feature-based methods. Intensity-based methods use the gray levels of the entire image directly: a similarity measure between pixels, such as cross-correlation, a Fourier-domain measure, or mutual information, quantifies how well the surface reflectance of the overlapping parts of the two images matches, and the translation, rotation, and scaling parameters are those that maximize the similarity. When applied to SAR images, intensity-based methods are easily disturbed by the gray-level differences and noise introduced during SAR imaging. Feature-based methods detect salient, prominent structures in the images, describe these feature points, establish the transformation between the images by matching the descriptions, and finally perform the registration; they can handle images with large geometric distortion and gray-level differences. Commonly used image features include feature points (corners, high-curvature points, inflection points, etc.), straight line segments, edges, contours, closed regions, and structural features. A feature-based method first extracts feature points, edges, and so on from the images to be registered, then searches for corresponding feature pairs or feature sets, and from them derives the transformation parameters. Edge-based registration extracts edges with edge detection operators and matches them, but the extracted edges are often too fragmented to reflect the structural content of the image, which hurts the accuracy and speed of the subsequent matching. Point features, by contrast, are easy to extract, fast, accurate, and stable, and are therefore more widely used than line or region features.

Feature-based image registration is more resistant to noise and deformation and therefore more robust. One of the most commonly used feature-based methods is registration based on SIFT feature extraction. Lowe summarized the main steps of SIFT as: (1) scale-space extremum detection; (2) keypoint localization; (3) orientation assignment; (4) keypoint description. SIFT is strongly invariant to rotation and scale at small rotation angles and, after normalization, is also robust to illumination changes. However, SIFT describes each feature point with a 128-dimensional vector, which raises the computational cost and time complexity when there are many feature points, and finding the best match requires the features to be well separated. When the SIFT feature extraction algorithm is applied to SAR images, many feature points are extracted not only in sparsely structured regions such as straight lines, polylines, and intersections, but also in smooth and textured regions whose features are weak; the feature points in those regions are surrounded by many similar points, so matching them is time-consuming and prone to false matches.

Summary of the Invention

The purpose of the present invention is to overcome the deficiencies of the prior art by proposing a SAR image registration method that combines spatial sparse structure with SIFT feature extraction, so that SIFT feature points are extracted only in the sparsely structured regions of the SAR images, reducing the computational cost and running time of the algorithm while lowering the mismatch rate and improving registration efficiency.

The technical idea for realizing the object of the present invention is: for the input reference image and image to be matched, obtain the sparse region of each image; extract SIFT feature points from those sparse regions; use the nearest-neighbor distance ratio (NNDR) as the similarity measure to obtain coarse matching pairs; remove false matches with random sample consensus (RANSAC); and finally solve for the affine transformation parameters, thereby registering the reference image and the image to be matched.

The technical scheme of the present invention is a fast image registration method based on spatial sparsity and SIFT feature extraction, comprising the following steps (an illustrative sketch of the pipeline is given after the list):

(1) Input two multi-temporal SAR images of the same area acquired at different times, recorded as the reference image and the image to be matched. Compute a sparsity value for each pixel of both images and take the regions containing the pixels of highest sparsity as the sparse regions, denoted Mask_R and Mask_S respectively;

(2) Extract SIFT feature points from the sparse region Mask_R of the reference image and the sparse region Mask_S of the image to be matched, giving the SIFT feature point sets {R_m} and {S_n} respectively;

(3) Match the SIFT feature point sets {R_m} and {S_n} extracted from the reference image and the image to be matched to obtain coarse matching pairs {(pq)_i}; the coarse matching point sets in the reference image and the image to be matched are {p_i} and {q_i}, where i is the index of a matching pair;

(4) Remove false matches from the coarse result with a random sample consensus estimation algorithm; the retained feature point pairs are denoted {(PQ)_i}, with retained points {P_i} in the reference image and {Q_i} in the image to be matched, where i is the index of a matching pair;

(5) Using the matching point sets {P_i} and {Q_i} of the reference image and the image to be matched, obtain the affine transformation parameters through affine transformation, and finally register the two images according to those parameters.
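By way of illustration only, a minimal Python sketch of this five-step pipeline is given below. It assumes OpenCV and NumPy; compute_sparse_mask is a hypothetical placeholder for step (1) (detailed later in the text), and OpenCV's stock SIFT with a region mask stands in for the sparsity-restricted extraction of step (2).

```python
# A minimal sketch of the five-step pipeline, assuming OpenCV + NumPy.
# compute_sparse_mask() is a hypothetical helper for step (1); it must
# return a uint8 mask of the same size as the image.
import cv2
import numpy as np

def register(reference, to_match):
    mask_r = compute_sparse_mask(reference)   # step (1): Mask_R
    mask_s = compute_sparse_mask(to_match)    # step (1): Mask_S

    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(reference, mask_r)  # step (2)
    kp_s, des_s = sift.detectAndCompute(to_match, mask_s)

    # step (3): coarse matching with the nearest-neighbor distance ratio
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des_r, des_s, k=2)
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]

    p = np.float32([kp_r[m.queryIdx].pt for m in good])
    q = np.float32([kp_s[m.trainIdx].pt for m in good])

    # steps (4)-(5): RANSAC rejects outliers while fitting the affine model
    M, inliers = cv2.estimateAffine2D(q, p, method=cv2.RANSAC)
    h, w = reference.shape[:2]
    return cv2.warpAffine(to_match, M, (w, h))
```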

Step (1) above comprises the following steps:

1) Apply a two-level wavelet decomposition (db1 wavelet basis) to the reference image R and the image to be matched S, obtaining the detail component images of the reference image and the image to be matched, denoted R_db1 and S_db1 respectively;

2) Compute a coefficient of variation for every pixel of the two detail component images: take an m*m neighborhood centered on the current pixel and compute the variance there to obtain the pixel's coefficient of variation CV. Set a threshold T_cv that is linear in the coefficient of variation of the SAR image, T_cv = β·CV, where β is the linear coefficient. Thresholding with T_cv gives the binarized coefficient-of-variation matrix CV_mask:

$$CV\_mask = \begin{cases} 1, & CV > T_{cv} \\ 0, & CV \le T_{cv} \end{cases}$$

The binarized coefficient-of-variation matrices of the reference image and the image to be matched are computed in this way and denoted CV_mask_R and CV_mask_S respectively;

3) Thin the coefficient-of-variation matrix of the reference image to obtain the thinned matrix T_R, and likewise thin the coefficient-of-variation matrix of the image to be matched to obtain the thinned matrix T_S;

4) Compute the sparsity matrices of the reference image and the image to be matched;

5) Take the 20% of pixels with the largest sparsity values in the sparsity matrix as sparse points; dilating these sparse points yields the sparse region.

Step 4) above comprises the following steps:

(a) Compute the magnitude and direction of the gradient at each pixel of the thinned matrix;

(b) Analyze the gradient direction of each pixel and collect the direction features of the current pixel, obtaining the direction feature matrices of the reference image and the image to be matched, denoted D_R and D_S respectively;

(c) From the thinned matrix and the direction feature matrix, the sparsity of the reference image and the image to be matched can be computed; the sparsity is defined as:

$$S = \frac{1}{N} \sum_{(i,j)\in\Omega} D(i,j)\cdot T(i,j)^2$$

where (i,j) is the index of the current pixel, Ω is the r×r neighborhood centered on the current pixel, N is the total number of pixels in Ω, D is the direction feature matrix, and T is the thinned matrix. From this the sparsity matrices S_R and S_S of the reference image and the image to be matched are obtained.

Step (b) above comprises the following steps:

a) Construct a 16-direction template of size (2r+1)*(2r+1), where r is the radius of the direction template; each sub-direction of the template corresponds to an angle range indexed by i∈[1,16];

b) Use the 16-direction template to analyze the direction features of each thinned edge point in the thinned matrix;

c) Normalize the direction feature matrices of the reference image and the image to be matched to obtain the direction feature matrices D_R and D_S.

Beneficial effects of the invention: the invention uses the SAR image coefficient of variation to distinguish the smooth regions, textured regions, and sparsely structured regions of the image, excludes the smooth and textured regions whose features are weak and prone to false matches, and extracts SIFT feature points for matching only in the sparse regions around polyline inflection points, intersections, and other high-sparsity pixels; registration is then achieved by solving for the affine transformation parameters. Compared with the prior art, the invention has the following advantages:

1. The invention uses the classic SIFT operator. When the Gaussian scale space is built during SIFT feature extraction, the Gaussian blurring also reduces SAR speckle noise, which improves the distinguishability of a pixel within its neighborhood and reduces false matches to a certain extent.

2. The invention designs a sparsity definition that accurately identifies sparse regions with distinctive features such as polylines, inflection points, and intersections, while excluding non-sparse regions of low sparsity and weak features such as smooth and textured regions.

3. The invention extracts SIFT feature points only from the sparse regions of the reference image and the image to be matched, excluding smooth regions, textured regions, and other non-sparse regions of low sparsity and weak features; this lowers the probability of false matches and makes the algorithm faster.

4. The invention is simple to implement and clearly structured, requires only a small overlapping area between the reference image and the image to be matched, and the SIFT operator retains high registration accuracy under rotation and scale changes or when the scene contains changed regions.

Description of the Drawings

Fig. 1 is the flow chart of the present invention;

Fig. 2 is the sub-flow chart for computing the sparse regions in the present invention;

Fig. 3 shows the two SAR images input to the experiment: Fig. 3(a) is the reference image and Fig. 3(b) is the image to be matched;

Fig. 4 shows the sparse regions Mask_A and Mask_B of the reference image and the image to be matched;

Fig. 5 shows the distribution of feature points extracted from the reference image and the image to be matched by the classic SIFT algorithm;

Fig. 6 shows the distribution of SIFT feature points extracted from the sparse regions Mask_A and Mask_B of the reference image and the image to be matched;

Fig. 7 shows the registration result of the reference image and the image to be matched obtained from the affine transformation parameters, displayed with the checkerboard method.

Detailed Description

Referring to Fig. 1, the specific implementation steps of the present invention are as follows:

Step 1. Input two multi-temporal SAR images R and S of the same area acquired at different times. For convenience of description, image R is called the reference image and image S the image to be matched. Compute the sparse regions Mask_R and Mask_S of the two images. Referring to Fig. 2, the specific steps are:

(1a) Apply a two-level wavelet decomposition (db1 wavelet basis) to the reference image R and the image to be matched S, obtaining the detail components, denoted R_db1 and S_db1 respectively;

(1b) Compute the coefficient of variation CV for every pixel of the reference detail component R_db1 and the detail component S_db1 of the image to be matched. CV is a measure of the dispersion of a probability distribution, defined as the ratio of the standard deviation to the mean:

$$CV = \frac{\sigma_x}{\mu_x}$$

where σ_x is the standard deviation and μ_x the mean of the r×r neighborhood centered on the current pixel. Taking the current pixel as the center, the variance is then computed over an m×m neighborhood, giving that pixel's coefficient of variation CV; in this example r=3 and m=5. This yields the coefficient-of-variation matrix CV_R of the reference detail component R_db1 and, likewise, CV_S of the detail component S_db1 of the image to be matched. A threshold T_cv is then set. It is related to the coefficient of variation CV of the SAR image (for an intensity SAR image CV = 1/√L and for an amplitude SAR image CV ≈ 0.523/√L, where L is the number of looks of the SAR image) and is generally taken, empirically, to be linear in the coefficient of variation, i.e. T_cv = β·CV; in the present invention the coefficient constant β is 0.55. The binarized coefficient-of-variation matrix CV_mask is then obtained:

$$CV\_mask = \begin{cases} 1, & CV > T_{cv} \\ 0, & CV \le T_{cv} \end{cases}$$

This gives the binarized coefficient-of-variation matrices of the reference image and the image to be matched, denoted CV_mask_R and CV_mask_S respectively.
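As an illustration, a minimal NumPy sketch of step (1b) follows. The window size and β use the values stated above; the use of uniform_filter for the local moments and the estimation of the global CV as the mean of the local map are assumptions of the sketch, not part of the patent.

```python
# Sketch of (1b): per-pixel coefficient of variation on a detail
# component, thresholded with T_cv = beta * CV. Assumes SciPy/NumPy.
import numpy as np
from scipy.ndimage import uniform_filter

def cv_mask(detail, m=5, beta=0.55):
    """Binarized coefficient-of-variation matrix of a detail component."""
    mean = uniform_filter(detail, m)                 # local mean  mu_x
    mean_sq = uniform_filter(detail ** 2, m)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))  # local sigma_x
    cv = std / (np.abs(mean) + 1e-12)                # CV = sigma_x / mu_x
    t_cv = beta * cv.mean()                          # T_cv = beta * CV (assumed)
    return (cv > t_cv).astype(np.uint8)              # CV_mask
```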

(1c) Thin the reference matrix CV_mask_R to obtain the thinned matrix T_R, and thin CV_mask_S of the image to be matched to obtain the thinned matrix T_S.

(1d) Compute the sparsity matrices of the reference image and the image to be matched. The specific steps are as follows:

1d1) Compute the gradient magnitude and direction at each pixel of the thinned matrix:

$$m(x,y) = \sqrt{(d_{x+1,y} - d_{x-1,y})^2 + (d_{x,y+1} - d_{x,y-1})^2}$$

$$k(x,y) = \tan^{-1}\!\left(\frac{d_{x+1,y} - d_{x-1,y}}{d_{x,y+1} - d_{x,y-1}}\right)$$

where (x,y) is the index of the current pixel in the thinned matrix, d is the gray value of a pixel in the thinned matrix, m(x,y) is the gradient magnitude, and k(x,y) is the gradient direction.

1d2) Construct a 16-direction template of size 15×15; each sub-direction of the template corresponds to an angle range indexed by i∈[1,16], N is the total number of pixels along each sub-direction of the template, and r is the radius of the direction template.

1d3) Use the 16-direction template to analyze the direction features of each thinned edge point. In the thinned matrix, take the 15×15 neighborhood centered on the current thinned edge point, analyze the gradient direction of every pixel along each sub-direction, and count the number of pixels whose gradient direction agrees with that sub-direction of the template, denoted s_i. This gives the normalized direction feature vector of the current thinned edge point:

$$hist = \left[\frac{s_1}{\sum_i s_i}\;\cdots\;\frac{s_j}{\sum_i s_i}\;\cdots\;\frac{s_{16}}{\sum_i s_i}\right]$$

1d4) Compute the direction feature matrices of the reference image and the image to be matched. Set a threshold T_hist, where N is the total number of pixels along each sub-direction of the template and r is the radius of the direction template; in the present invention the threshold coefficient β is taken as 1.8. Analyze the direction features of each thinned edge point: if h_i ≥ T_hist, that direction counts as one of the principal directions. Taking the number of principal directions of each edge point as its direction feature gives the direction feature matrices D_R and D_S of the reference image and the image to be matched.

1d5) Compute the sparsity of the reference image and the image to be matched; the sparsity matrix is defined as:

$$S = \frac{1}{N} \sum_{(i,j)\in\Omega} D(i,j)\cdot T(i,j)^2$$

where (i,j) is the index of the current pixel, Ω is the r×r neighborhood centered on the current pixel, N is the total number of pixels in Ω, D is the direction feature matrix, and T is the thinned matrix. This gives the sparsity matrices S_R and S_S of the reference image and the image to be matched.

(1e) Take the 20% of pixels with the largest sparsity values in the sparsity matrix as sparse points; dilating these sparse points yields the sparse region.
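A compact sketch of steps (1d5)-(1e) under the definitions above; the 7×7 dilation structuring element and the way D and T are supplied are assumptions of the sketch.

```python
# Sketch of (1d5)-(1e): sparsity S = (1/N) * sum over an r x r window
# of D(i,j) * T(i,j)^2, then keep the top 20% of pixels and dilate them.
import numpy as np
from scipy.ndimage import uniform_filter, binary_dilation

def sparse_region(D, T, r=3, keep=0.20):
    """D: direction feature matrix, T: thinned matrix (same shape)."""
    # uniform_filter computes the local mean, i.e. (1/N) * local sum
    S = uniform_filter(D * T ** 2, size=r)
    thresh = np.quantile(S, 1.0 - keep)      # top 20% by sparsity value
    points = S >= thresh
    # dilate the sparse points into a region; 7x7 square is an assumption
    return binary_dilation(points, structure=np.ones((7, 7)))
```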

Step 2. Extract SIFT feature points from the sparse regions of the reference image R and the image to be matched S. The specific steps of feature point extraction are as follows:

(2a) Construct the Gaussian scale space. The scale space of a two-dimensional image at different scales is obtained by convolving the original image with a Gaussian kernel:

$$L(x,y,\sigma) = G(x,y,\sigma) * I(x,y)$$

where L(x,y,σ) is the scale space of the image, I(x,y) is the original image, (x,y) is the index of the current pixel of the input image, * denotes convolution, G(x,y,σ) is the variable-scale Gaussian kernel, and σ is the scale-space factor. The kernel is defined as:

$$G(x,y,\sigma) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x^2+y^2}{2\sigma^2}}$$

The scale space is sampled discretely: Gaussian kernels at different multiples of σ are convolved with downsampled images, and the resulting series of Gaussian images is called the Gaussian pyramid. The pyramid is divided into O octaves, each containing S+3 scale levels; in the present invention O=4 and S=2.

First compute each image of the first octave of the Gaussian scale space: the input image I_0 is convolved with Gaussian kernels of different sizes (σ_i = k^i·σ) to give the output images I_i, i = 0, 1, 2, 3, 4, which form the first octave.

Each level of the second octave is 1/4 the size of the corresponding first-octave image; its input image is obtained by downsampling the level-S image of the first octave at a sampling rate of 2. For example, when S=2, the input image I′_0 of the second octave is obtained by downsampling I_1 of the first octave; it is then convolved with Gaussian kernels of different sizes (σ_i = k^i·σ) to give the outputs I′_i, i = 0, 1, 2, 3, 4, forming the second octave. The third and fourth octaves are obtained in the same way, which completes the Gaussian scale space.

(2b) Construct the difference-of-Gaussian (DoG) scale space. Subtracting the scale-space functions of adjacent levels of the Gaussian pyramid gives the difference-of-Gaussian pyramid, denoted D(x,y,σ):

$$D(x,y,\sigma) = \big(G(x,y,k\sigma) - G(x,y,\sigma)\big) * I(x,y) = L(x,y,k\sigma) - L(x,y,\sigma)$$

where (x,y) is the current pixel of the input image and I(x,y) is the image to be processed; that is, the difference-of-Gaussian scale space D(x,y,σ) is obtained by subtracting two adjacent Gaussian scale-space levels whose scales differ by the constant factor k.
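A sketch of (2a)-(2b) with O=4 octaves and S=2 levels follows; cv2.GaussianBlur and the scale step k = 2^(1/S) are standard choices assumed here.

```python
# Sketch of (2a)-(2b): Gaussian pyramid (O octaves, S+3 levels each)
# and its difference-of-Gaussian pyramid. Assumes OpenCV + NumPy.
import cv2
import numpy as np

def gaussian_and_dog_pyramids(image, O=4, S=2, sigma=1.6):
    k = 2.0 ** (1.0 / S)                   # constant scale factor k
    gauss, dog = [], []
    base = image.astype(np.float32)
    for _ in range(O):
        octave = [cv2.GaussianBlur(base, (0, 0), sigma * k ** i)
                  for i in range(S + 3)]   # S+3 levels per octave
        gauss.append(octave)
        # DoG: subtract adjacent levels of the same octave
        dog.append([octave[i + 1] - octave[i] for i in range(S + 2)])
        # next octave starts from level S, downsampled by a factor of 2
        base = octave[S][::2, ::2]
    return gauss, dog
```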

(2c) Extremum detection. Compare each pixel of the DoG scale space with its 26 neighbors: 8 in the same level, 9 in the level above, and 9 in the level below. If the current pixel is a local maximum or minimum, it is marked as a keypoint.
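A direct sketch of the 26-neighbor test of (2c); looping over every pixel as below is written for clarity, not speed.

```python
# Sketch of (2c): a pixel is a keypoint candidate when it equals the
# maximum or minimum of the 3x3x3 DoG block around it (26 neighbors).
import numpy as np

def local_extrema(below, current, above):
    keys = []
    h, w = current.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            cube = np.stack([below[y-1:y+2, x-1:x+2],
                             current[y-1:y+2, x-1:x+2],
                             above[y-1:y+2, x-1:x+2]])
            v = current[y, x]
            if v == cube.max() or v == cube.min():
                keys.append((x, y))
    return keys
```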

(2d) Precise keypoint localization. Since edges and noise strongly affect the DoG scale space, the keypoints detected in the previous step must be screened further to localize scale-invariant feature points precisely, and low-contrast points and edge responses must be filtered out to improve the robustness and stability of feature matching. To determine the scale and position of a feature point precisely, a three-dimensional quadratic function is fitted to the detected local extremum. At a local extremum (x_0, y_0, σ), the Taylor expansion of the scale-space function D(x,y,σ) is:

$$D(X) = D(X_0) + \frac{\partial D^T}{\partial X}(X - X_0) + \frac{1}{2}(X - X_0)^T \frac{\partial^2 D}{\partial X^2}(X - X_0)$$

where X = (x, y, σ)^T denotes the position and scale of the feature point and X_0 the position and scale of the candidate feature point. The first- and second-order partial derivatives are computed as follows:

$$\frac{\partial D}{\partial X} = \begin{pmatrix} \frac{\partial D}{\partial x} \\ \frac{\partial D}{\partial y} \\ \frac{\partial D}{\partial \sigma} \end{pmatrix}, \qquad \frac{\partial^2 D}{\partial X^2} = \begin{pmatrix} \frac{\partial^2 D}{\partial x^2} & \frac{\partial^2 D}{\partial x\,\partial y} & \frac{\partial^2 D}{\partial x\,\partial \sigma} \\ \frac{\partial^2 D}{\partial y\,\partial x} & \frac{\partial^2 D}{\partial y^2} & \frac{\partial^2 D}{\partial y\,\partial \sigma} \\ \frac{\partial^2 D}{\partial \sigma\,\partial x} & \frac{\partial^2 D}{\partial \sigma\,\partial y} & \frac{\partial^2 D}{\partial \sigma^2} \end{pmatrix}$$

Differentiating this fitted function and setting the first derivative to zero gives the extremum X̂ of X:

$$\hat{X} = -\left(\frac{\partial^2 D}{\partial X^2}\right)^{-1} \frac{\partial D}{\partial X}$$

Substituting X̂ into the scale-space function D(x,y,σ) gives:

$$D(\hat{X}) = D(X_0) + \frac{1}{2}\frac{\partial D^T}{\partial X}\hat{X}$$

If the absolute value of D(X̂) is below a threshold, the contrast of the point is low and the point is discarded.

An important step in obtaining stable, precise feature points is to filter out edge-response feature points. The criterion is: at the edge crossings and peaks of the difference-of-Gaussian function D(x,y,σ), a feature point lying on an image edge has a large principal curvature across the edge but a small principal curvature in the perpendicular direction. The principal curvatures can be obtained from the 2×2 Hessian matrix H:

$$H = \begin{pmatrix} D_{xx} & D_{xy} \\ D_{yx} & D_{yy} \end{pmatrix}$$

Since the eigenvalues of H are proportional to the principal curvatures of D, the eigenvalues themselves are not computed; only their ratio is needed. Let α be the largest eigenvalue, β the smallest, and γ = α/β; then:

$$ratio = \frac{Tr(H)^2}{Det(H)} = \frac{(\alpha+\beta)^2}{\alpha\beta} = \frac{(\gamma+1)^2}{\gamma}$$

The sum of the main diagonal elements of H is Tr(H) = D_xx + D_yy = α + β; the determinant of H is Det(H) = D_xx·D_yy - (D_xy)^2 = αβ.

Therefore we only need to check that:

$$\frac{Tr(H)^2}{Det(H)} < \frac{(\gamma+1)^2}{\gamma}$$

where γ = 10.
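A sketch of the contrast and edge-response tests of (2d), with γ=10 as above; the 0.03 contrast threshold is Lowe's conventional value and is an assumption of this sketch.

```python
# Sketch of (2d): reject low-contrast points and edge responses.
# d_hat is D(X^) from the Taylor fit; dxx, dyy, dxy are second
# derivatives of the DoG level at the keypoint.
def keep_keypoint(d_hat, dxx, dyy, dxy, gamma=10.0, contrast=0.03):
    if abs(d_hat) < contrast:                    # low contrast: discard
        return False
    tr = dxx + dyy                               # Tr(H) = alpha + beta
    det = dxx * dyy - dxy * dxy                  # Det(H) = alpha * beta
    if det <= 0:                                 # curvatures of opposite sign
        return False
    return tr * tr / det < (gamma + 1) ** 2 / gamma
```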

(2e) Generate the feature descriptor. To make the feature points rotation-invariant, the gradient distribution of the pixels around each feature point is used to assign its principal orientation. The gradient magnitude and direction of each pixel are computed as:

$$m(x,y) = \sqrt{(L(x+1,y) - L(x-1,y))^2 + (L(x,y+1) - L(x,y-1))^2}$$

$$\theta(x,y) = \arctan\!\left(\frac{L(x,y+1) - L(x,y-1)}{L(x+1,y) - L(x-1,y)}\right)$$

where m(x,y) is the gradient magnitude of the feature point at (x,y), θ(x,y) its direction, and L(x,y) the scale of the feature point (x,y) in the pyramid. In the neighborhood centered on the current feature point, a gradient orientation histogram accumulates the gradient directions of the neighboring pixels. The orientation range [0°, 360°) is divided into bins of 10°, giving 36 bins in total. The peak of the histogram represents the dominant neighborhood gradient direction at the feature point and is taken as its principal orientation. Any peak reaching 80% of the main peak defines an auxiliary orientation, so each feature point has one principal orientation and possibly several auxiliary orientations.
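A sketch of the 36-bin orientation assignment just described; the neighborhood radius is an assumption of the sketch.

```python
# Sketch of (2e), orientation assignment: build a 36-bin histogram of
# gradient directions around a keypoint and return the peak directions.
import numpy as np

def principal_orientations(m, theta_deg, x, y, radius=8):
    """m, theta_deg: gradient magnitude/direction maps; (x, y): keypoint."""
    hist = np.zeros(36)
    patch_m = m[y - radius:y + radius + 1, x - radius:x + radius + 1]
    patch_t = theta_deg[y - radius:y + radius + 1, x - radius:x + radius + 1]
    for mag, ang in zip(patch_m.ravel(), patch_t.ravel()):
        hist[int(ang % 360) // 10] += mag      # 10 degrees per bin
    peak = hist.max()
    # principal orientation plus any bin reaching 80% of the main peak
    return [b * 10 for b in range(36) if hist[b] >= 0.8 * peak]
```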

After the principal orientation is determined, the feature descriptor can be generated. A 16×16 window is taken around the feature point, an 8-direction gradient orientation histogram is computed on each 4×4 image sub-block, and the accumulated value of each gradient direction is recorded. Gradient information is collected for each 4×4 sub-block; the 16×16 neighborhood yields 4×4 sub-blocks, so the descriptor has 4×4×8 = 128 dimensions in total. The descriptor is the concatenation of the gradient orientation histograms of all sub-blocks, forming the final 128-dimensional SIFT feature vector.

Step 3. Match the SIFT feature points of the reference image and the image to be matched, then filter false matching pairs out of the matching result with random sample consensus (RANSAC). The specific steps are as follows:

(3a) Match the SIFT feature points extracted from the sparse regions of the reference image and the image to be matched, using the nearest-neighbor distance ratio (NNDR) criterion. Suppose A is a feature point in the reference image with feature vector D_A. In the image to be registered, let B be the nearest neighbor of A (smallest Euclidean distance) and C the second-nearest neighbor (second-smallest Euclidean distance), with feature vectors D_B and D_C. The NNDR criterion is:

$$\frac{\|D_A - D_B\|}{\|D_A - D_C\|} < T_{dis}$$

where T_dis is the distance ratio threshold, D_A is the 128-dimensional descriptor of the current feature point A in the reference image, D_B the 128-dimensional descriptor of the most similar feature point B in the image to be matched, and D_C the 128-dimensional descriptor of the second most similar feature point C. When the ratio of the nearest-neighbor Euclidean distance to the second-nearest-neighbor Euclidean distance is below the threshold T_dis, the pair of feature points corresponding to the nearest-neighbor distance is accepted as a matching pair. The larger the threshold, the looser the matching requirement and the higher the probability of false matches; the smaller the threshold, the stricter the requirement and the lower the probability of false matches, at the cost of discarding some correct matches. The threshold is therefore set according to the kind of images being registered. After matching, the matched point sets in the reference image and the image to be matched are denoted {p_i} and {q_i}, giving the coarse matching pairs {(pq)_i}, where i is the index of a matching pair.
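A minimal NumPy sketch of the NNDR test of (3a); a brute-force distance computation is used for clarity.

```python
# Sketch of (3a): brute-force NNDR matching between two descriptor sets.
import numpy as np

def nndr_match(des_r, des_s, t_dis=0.75):
    """des_r, des_s: (n,128) and (m,128) descriptor arrays."""
    matches = []
    for i, d in enumerate(des_r):
        dists = np.linalg.norm(des_s - d, axis=1)   # Euclidean distances
        b, c = np.argsort(dists)[:2]                # nearest, second nearest
        if dists[b] < t_dis * dists[c]:             # ||D_A-D_B|| < T_dis*||D_A-D_C||
            matches.append((i, b))                  # coarse pair (p_i, q_i)
    return matches
```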

(3b) Because of clutter, noise, and similar effects, the SIFT matches obtained in the previous step usually contain many false point pairs, which would introduce errors into the geometric correction of the image. Random sample consensus (RANSAC) is used to remove the false pairs from the coarse matching result. After removal, the matched point sets in the reference image and the image to be matched are denoted {P(x_i, y_i)} and {Q(x′_i, y′_i)} respectively, where i is the index of a matching pair.
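A sketch of (3b) using OpenCV's RANSAC-based affine estimator; cv2.estimateAffine2D is a standard OpenCV call and stands in here for the random sample consensus step described in the text.

```python
# Sketch of (3b): RANSAC outlier rejection while fitting a 2x3 affine
# model between the coarse point sets {p_i} and {q_i}.
import cv2
import numpy as np

def ransac_filter(p, q, reproj_thresh=3.0):
    """p, q: (n,2) float32 arrays of coarsely matched coordinates."""
    M, inliers = cv2.estimateAffine2D(q, p, method=cv2.RANSAC,
                                      ransacReprojThreshold=reproj_thresh)
    inliers = inliers.ravel().astype(bool)
    return p[inliers], q[inliers], M      # {P_i}, {Q_i} and the 2x3 model
```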

Step 4. Using the matching point sets {P(x_i, y_i)} and {Q(x′_i, y′_i)} of the reference image and the image to be matched, obtain precise affine transformation parameters from the affine transformation function. The affine transformation model is:

$$\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}\begin{pmatrix} x' \\ y' \end{pmatrix} + \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$$

The transformation matrix M is:

$$M = \begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{pmatrix}$$

where a_1, a_2, b_1, b_2 are the rotation variables, c_1 is the horizontal offset, and c_2 is the vertical offset. Finally, the images are registered according to the solved affine transformation matrix.
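A sketch of step 4, solving the six affine parameters by least squares over the retained pairs; np.linalg.lstsq is the assumed solver.

```python
# Sketch of step 4: solve x = a1*x' + b1*y' + c1, y = a2*x' + b2*y' + c2
# in the least-squares sense from the retained matching pairs, then warp.
import cv2
import numpy as np

def solve_affine(P, Q):
    """P: (n,2) reference points, Q: (n,2) points in the image to match."""
    A = np.hstack([Q, np.ones((len(Q), 1))])        # rows [x', y', 1]
    coef, *_ = np.linalg.lstsq(A, P, rcond=None)    # (3,2): columns are
    return coef.T                                   # (a1,b1,c1), (a2,b2,c2)

# usage: warped = cv2.warpAffine(to_match, solve_affine(P, Q),
#                                (reference.shape[1], reference.shape[0]))
```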

The effect of the present invention can be further verified by the following experiments.

1. Experimental conditions and content

Hardware platform: Intel(R) Pentium(R) CPU, 2.4 GHz;

Software platform: Windows XP Professional, MATLAB R2010;

The input images used in the experiment are shown in Fig. 3; Fig. 3(a) and Fig. 3(b) are SAR images.

Experiment content:

Simulation 1: under the above experimental conditions, extract SIFT feature points with the classic SIFT algorithm and with the fast registration algorithm of the present invention, then match the feature points extracted by the two methods under different distance ratio thresholds T_dis according to the nearest-neighbor distance ratio (NNDR) criterion.

Simulation 2: on the basis of the coarse matching results, remove false matches with the random sample consensus algorithm (RANSAC), use the surviving matching pairs to solve for the affine transformation parameters, and finally register the images.

If the distance ratio threshold T_dis is set too small, the coarse matching result contains too few correct pairs to satisfy the affine transformation; if T_dis is set too large, correct pairs make up too small a fraction of all matching pairs and the random sample consensus (RANSAC) algorithm loses robustness. The experiments therefore use the following values of T_dis: 0.70, 0.75, 0.80, 0.85.

2. Experimental results

The sparse regions obtained by the present invention are shown in Fig. 4: Fig. 4(a) is the sparse region of the reference image and Fig. 4(b) that of the image to be matched. The sparse regions are mostly concentrated at polyline inflection points, intersections, and similar areas, which have high sparsity and distinctive features.

Fig. 5 shows the feature points extracted from the reference image and the image to be matched by the classic SIFT feature extraction method; white dots mark extracted feature points, Fig. 5(a) being the SIFT feature point distribution of the reference image and Fig. 5(b) that of the image to be matched. The classic SIFT method detects many feature points in high-sparsity areas such as line targets and intersections, but it also detects some points in smooth and textured areas with weak features; the points there are surrounded by many similar points, which easily produces false matches.

Fig. 6 shows the feature points extracted by the fast image registration method based on spatial sparsity and SIFT feature extraction; white dots mark extracted feature points, Fig. 6(a) being the distribution for the reference image and Fig. 6(b) that for the image to be matched. The feature points extracted by the proposed method lie only at polyline inflection points and intersections of high sparsity and distinctive features; the points in these areas are sparse and stable and rarely produce false matches.

The feature points extracted by the two methods were coarsely matched with the feature matching method based on the Euclidean distance ratio, and RANSAC was then applied to remove false matches from the coarse results. The coarse matching and false-match removal results of the two methods are listed in Table 1, with the matching accuracy rate as the index of matching quality. The matching accuracy rate is defined as the proportion of correct matching pairs among all matching pairs.

Table 1. Feature point matching results of the two methods

Table 1 shows that under the different NNDR thresholds T_dis = 0.70, 0.75, 0.80, and 0.85, the proportion of correct pairs among all matching pairs is higher for the present invention than for the classic SIFT algorithm, and the advantage is more pronounced when the false match rate of the coarse result is high. The matching accuracy statistics show that under the different Euclidean distance ratio thresholds, the matching accuracy of the present invention is higher than that of the classic SIFT algorithm.

The affine transformation parameters are obtained through affine transformation; Table 2 lists the parameters obtained from the final matching result at the threshold T_dis = 0.75:

Table 2. Affine transformation parameter results

Table 2 shows that the affine transformation parameters obtained by the method of the present invention differ little from those of the SIFT algorithm, while the time efficiency improves greatly: the running time drops from 81.04 seconds to 20.06 seconds, about a quarter of the original, which greatly raises registration efficiency. With the threshold T_dis = 0.75, the registration result of the two images is shown in Fig. 7, displayed with the checkerboard method. Because the present invention extracts SIFT feature points only from sparse regions with distinctive features such as polylines, inflection points, and intersections, and excludes smooth and textured regions of low sparsity, weak features, and frequent false matches, it improves registration efficiency while lowering the false match rate.

The present invention designs a new fast image registration method based on spatial sparsity and SIFT feature extraction: the sparsity matrix of the image is analyzed first, and SIFT feature points are extracted from the sparse regions with distinctive features, improving the efficiency of the algorithm while preserving registration accuracy.

In summary, the present invention uses the SAR image coefficient of variation to distinguish smooth, textured, and sparsely structured regions, excludes the smooth and textured regions whose features are weak and prone to false matches, extracts SIFT feature points for matching only in the sparse regions around polyline inflection points, intersections, and other high-sparsity pixels, and achieves registration by solving for the affine transformation parameters. Compared with the prior art, the invention has the following advantages:

1. The invention uses the classic SIFT operator. When the Gaussian scale space is built during SIFT feature extraction, the Gaussian blurring also reduces SAR speckle noise, which improves the distinguishability of a pixel within its neighborhood and reduces false matches to a certain extent.

2. The invention designs a sparsity definition that accurately identifies sparse regions with distinctive features such as polylines, inflection points, and intersections, while excluding non-sparse regions of low sparsity and weak features such as smooth and textured regions.

3. The invention extracts SIFT feature points only from the sparse regions of the reference image and the image to be matched, excluding smooth regions, textured regions, and other non-sparse regions of low sparsity and weak features; this lowers the probability of false matches and makes the algorithm faster.

4. The present invention is simple to implement and clearly structured, requires only a small overlapping area between the reference image and the image to be matched, and the SIFT operator retains high registration accuracy under rotation and scale changes or when the images contain changed regions.
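As a rough way to see where the speed-up behind advantage 3 comes from, the toy comparison below counts SIFT keypoints with and without the sparse-region mask; fewer keypoints mean fewer descriptors to compute and a much smaller matching problem. It relies on the same assumptions as the earlier pipeline sketch: ref is the reference image and mask_ref is a hypothetical 8-bit mask that is non-zero inside the sparse region.

```python
import cv2

sift = cv2.SIFT_create()
kp_full = sift.detect(ref, None)        # classic SIFT: whole image
kp_sparse = sift.detect(ref, mask_ref)  # this method: sparse regions only
print(len(kp_full), "keypoints on the full image vs",
      len(kp_sparse), "inside the sparse region")
```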

Parts not described in detail in this embodiment are well-known common techniques in the field and are not described here one by one. The above examples merely illustrate the present invention and do not limit its scope of protection; all designs identical or similar to the present invention fall within the scope of protection of the present invention.

Claims (4)

1. A fast image registration implementation method based on spatial sparsity and SIFT feature extraction, characterized by comprising the following steps:
(1) inputting two multi-temporal SAR images of the same area acquired at different times, denoted the reference image and the image to be matched respectively; computing the sparsity of every pixel of the reference image and the image to be matched, and taking the regions where the high-sparsity pixels lie as the sparse regions, denoted MaskR and MaskS respectively;
(2) extracting SIFT feature points from the sparse region MaskR of the reference image and the sparse region MaskS of the image to be matched, obtaining the SIFT feature point sets of the reference image and the image to be matched, denoted {Rm} and {Sn} respectively;
(3) performing feature matching on the SIFT feature point sets {Rm} and {Sn} extracted from the reference image and the image to be matched to obtain the coarse matching feature pairs {(pq)i}; the coarse matching point sets of the reference image and the image to be matched are {pi} and {qi} respectively, where i is the index of a matching pair;
(4) removing mismatched pairs from the coarse matching result with a random sample consensus estimation algorithm; the set of feature point pairs retained in the reference image and the image to be matched after removing the mismatched pairs is denoted {(PQ)i}, and the feature points retained in the reference image and the image to be matched are {Pi} and {Qi} respectively, where i is the index of a matching pair;
(5) obtaining the affine transformation parameters from the matching point sets {Pi} and {Qi} of the reference image and the image to be matched through affine transformation, and finally registering the two images according to the affine transformation parameters.

2. The fast image registration implementation method based on spatial sparsity and SIFT feature extraction according to claim 1, characterized in that said step (1) comprises the following steps:
1) applying a two-level wavelet decomposition, with wavelet basis db1, to the reference image R and the image to be matched S respectively, to obtain the detail component images of the reference image and the image to be matched, denoted Rdb1 and Sdb1 respectively;
2) computing a variance coefficient for each pixel of the two detail component images, the variance being taken over an m×m neighborhood centered on the current pixel, which gives the variance coefficient CV of the current pixel; setting a threshold Tcv that is linear in the variance coefficient of the SAR image, Tcv = β·CV, where β is the linear coefficient; the binarized variance coefficient matrix CV_mask is obtained from the threshold Tcv as

CV_mask = 1, if CV > Tcv
CV_mask = 0, if CV ≤ Tcv

and the binarized variance coefficient matrices of the reference image and the image to be matched are computed, denoted CV_maskR and CV_maskS respectively;
3) thinning the variance coefficient matrix of the reference image to obtain the thinned matrix TR, and likewise thinning the variance coefficient matrix of the image to be matched to obtain the thinned matrix TS;
4) computing the sparsity matrices of the reference image and the image to be matched;
5) selecting the top 20% of pixels with the largest sparsity values in the sparsity matrix as sparse points, and dilating the sparse points to obtain the sparse regions.

3. The fast image registration implementation method based on spatial sparsity and SIFT feature extraction according to claim 2, characterized in that said step 4) comprises the following steps:
(a) computing the magnitude and direction of the gradient at each pixel of the thinned matrix;
(b) analyzing the gradient direction of each pixel and accumulating the direction features of the current pixel to obtain the direction feature matrices of the reference image and the image to be matched, denoted DR and DS respectively;
(c) computing the sparsity of the reference image and the image to be matched from the thinned matrix and the direction feature matrix, the sparsity being defined as

S = (1/N) · Σ_{(i,j)∈Ω} D(i,j) · T(i,j)²

where (i,j) is the index of the current pixel, Ω is the r×r neighborhood centered on the current pixel, N is the total number of pixels in the neighborhood Ω, D is the direction feature matrix, and T is the thinned matrix; this yields the sparsity matrices SR and SS of the reference image and the image to be matched.

4. The fast image registration implementation method based on spatial sparsity and SIFT feature extraction according to claim 3, characterized in that said step (b) comprises the following steps:
a) constructing a 16-direction template of size (2r+1)×(2r+1), each sub-direction of the template corresponding to its own angle range, i∈[1,16], where r is the radius of the direction template;
b) analyzing the direction feature of each thinned edge point in the thinned matrix with the 16-direction template;
c) normalizing the direction feature matrices of the reference image and the image to be matched to obtain the direction feature matrices DR and DS of the reference image and the image to be matched.
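The sparsity analysis of claims 2-4 can be pieced together as in the sketch below. It is a minimal sketch under stated assumptions, not the patented implementation: it assumes PyWavelets for the db1 detail decomposition, cv2.ximgproc.thinning from opencv-contrib for the thinning step, and a simplified 16-bin gradient-direction count in place of the (2r+1)×(2r+1) direction template; the parameter values (m, β, r, dilation size) are illustrative, not taken from the patent.

```python
import cv2
import numpy as np
import pywt

def sparse_region_mask(img, m=7, beta=1.0, r=4, keep=0.20):
    # 1) two-level db1 wavelet decomposition; keep only the detail component
    coeffs = pywt.wavedec2(img.astype(np.float64), "db1", level=2)
    coeffs[0] = np.zeros_like(coeffs[0])              # drop the approximation
    detail = pywt.waverec2(coeffs, "db1")[: img.shape[0], : img.shape[1]]
    # 2) per-pixel variance coefficient over an m x m neighborhood, binarized
    #    with T_cv = beta * CV (beta is the linear coefficient of claim 2)
    mean = cv2.blur(detail, (m, m))
    mean_sq = cv2.blur(detail * detail, (m, m))
    cv_map = np.sqrt(np.maximum(mean_sq - mean * mean, 0)) / (np.abs(mean) + 1e-12)
    cv_mask = ((cv_map > beta * cv_map.mean()) * 255).astype(np.uint8)
    # 3) thinning CV_mask gives the thinned edge matrix T
    T = (cv2.ximgproc.thinning(cv_mask) > 0).astype(np.float64)
    # (b) direction feature D: gradient orientations quantized into 16 bins;
    #     count the distinct edge directions present in each neighborhood and
    #     normalize by 16 (a simplification of the 16-direction template)
    gx = cv2.Sobel(detail, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(detail, cv2.CV_64F, 0, 1)
    bins = np.floor((np.arctan2(gy, gx) + np.pi) / (2 * np.pi / 16)).astype(int) % 16
    k = 2 * r + 1
    D = np.zeros_like(T)
    for b in range(16):
        hit = ((bins == b) & (T > 0)).astype(np.float64)
        D += cv2.boxFilter(hit, -1, (k, k), normalize=False) > 0
    D /= 16.0
    # 4) sparsity S = (1/N) * sum over the neighborhood of D(i,j) * T(i,j)^2
    S = cv2.boxFilter(D * T * T, -1, (k, k), normalize=True)
    # 5) keep the top 20% sparsity pixels and dilate them into the region mask
    thr = np.quantile(S[S > 0], 1.0 - keep) if np.any(S > 0) else np.inf
    return cv2.dilate(((S >= thr) * 255).astype(np.uint8), np.ones((15, 15), np.uint8))
```

Running this once on the reference image and once on the image to be matched would produce the masks consumed by the register_pair sketch given earlier.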
CN201410531222.6A 2014-10-10 2014-10-10 Rapid image registration implementation method based on space sparsity and SIFT feature extraction Expired - Fee Related CN104318548B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410531222.6A CN104318548B (en) 2014-10-10 2014-10-10 Rapid image registration implementation method based on space sparsity and SIFT feature extraction

Publications (2)

Publication Number Publication Date
CN104318548A true CN104318548A (en) 2015-01-28
CN104318548B CN104318548B (en) 2017-02-15

Family

ID=52373774

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410531222.6A Expired - Fee Related CN104318548B (en) 2014-10-10 2014-10-10 Rapid image registration implementation method based on space sparsity and SIFT feature extraction

Country Status (1)

Country Link
CN (1) CN104318548B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110222781A1 (en) * 2010-03-15 2011-09-15 U.S. Government As Represented By The Secretary Of The Army Method and system for image registration and change detection
CN103020647A (en) * 2013-01-08 2013-04-03 西安电子科技大学 Image classification method based on hierarchical SIFT (scale-invariant feature transform) features and sparse coding
CN103413119A (en) * 2013-07-24 2013-11-27 中山大学 Single sample face recognition method based on face sparse descriptors
CN103544711A (en) * 2013-11-08 2014-01-29 国家测绘地理信息局卫星测绘应用中心 Automatic registering method of remote-sensing image
CN103984966A (en) * 2014-05-29 2014-08-13 西安电子科技大学 SAR image target recognition method based on sparse representation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邓朝省 et al.: "基于局部SIFT特征点的双阈值配准算法" [Dual-threshold registration algorithm based on local SIFT feature points], 《计算机工程与应用》 [Computer Engineering and Applications] *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732546B (en) * 2015-04-02 2017-06-16 西安电子科技大学 The non-rigid SAR image registration method of region similitude and local space constraint
CN104732546A (en) * 2015-04-02 2015-06-24 西安电子科技大学 Non-rigid SAR image registration method based on region similarity and local spatial constraint
CN104966294A (en) * 2015-06-15 2015-10-07 清华大学 Polarimetric SAR image matching method and apparatus based on orientation angle inversion
CN105139412A (en) * 2015-09-25 2015-12-09 深圳大学 Hyperspectral image corner detection method and system
CN105139412B (en) * 2015-09-25 2018-04-24 深圳大学 A kind of high spectrum image angular-point detection method and system
CN105279522A (en) * 2015-09-30 2016-01-27 华南理工大学 Scene object real-time registering method based on SIFT
CN105427308A (en) * 2015-11-20 2016-03-23 中国地质大学(武汉) Sparse and dense characteristic matching combined image registration method
CN105427308B (en) * 2015-11-20 2017-03-15 中国地质大学(武汉) A kind of sparse and dense characteristic mates the method for registering images for combining
CN105809693A (en) * 2016-03-10 2016-07-27 西安电子科技大学 SAR image registration method based on deep neural networks
CN105809693B (en) * 2016-03-10 2018-11-16 西安电子科技大学 SAR image registration method based on deep neural network
CN106023187A (en) * 2016-05-17 2016-10-12 西北工业大学 Image registration method based on SIFT feature and angle relative distance
CN106550229A (en) * 2016-10-18 2017-03-29 安徽协创物联网技术有限公司 A kind of parallel panorama camera array multi-view image bearing calibration
CN109003293A (en) * 2017-06-07 2018-12-14 北京航空航天大学 Inhibit the SAR image registration method of model based on anisotropy spot
CN108304766B (en) * 2017-12-12 2019-01-04 交通运输部规划研究院 A method of dangerous material stockyard is screened based on high-definition remote sensing
CN108304766A (en) * 2017-12-12 2018-07-20 交通运输部规划研究院 A method of based on high-definition remote sensing screening dangerous material stockyard
CN108121972A (en) * 2017-12-25 2018-06-05 北京航空航天大学 A kind of target identification method under the conditions of partial occlusion
WO2019184719A1 (en) * 2018-03-29 2019-10-03 青岛海信移动通信技术股份有限公司 Photographing method and apparatus
CN111460864A (en) * 2019-01-22 2020-07-28 天津大学青岛海洋技术研究院 Animal disease detection method based on image recognition
CN111460864B (en) * 2019-01-22 2023-10-17 天津大学青岛海洋技术研究院 Animal disease detection method based on image recognition
CN110009670A (en) * 2019-03-28 2019-07-12 上海交通大学 Heterologous image registration method based on FAST feature extraction and PIIFD feature description
CN110097585A (en) * 2019-04-29 2019-08-06 中国水利水电科学研究院 A kind of SAR image matching method and system based on SIFT algorithm
CN110097015A (en) * 2019-05-08 2019-08-06 杭州视在科技有限公司 One kind deviating automatic identifying method based on the matched ball machine presetting bit of dense characteristic point
CN110097015B (en) * 2019-05-08 2020-05-26 杭州视在科技有限公司 Automatic identification method for deviation of preset position of dome camera based on dense feature point matching
CN110659637A (en) * 2019-09-24 2020-01-07 国网河北省电力有限公司电力科学研究院 Electric energy meter number and label automatic identification method combining deep neural network and SIFT features
CN110992413A (en) * 2019-12-13 2020-04-10 中国人民解放军火箭军工程大学 High-precision rapid registration method for airborne remote sensing image
CN111325722A (en) * 2020-02-17 2020-06-23 江苏诚印科技有限公司 Stamp image accurate identification method, stamp image identification processing method and stamp image identification system
CN111325722B (en) * 2020-02-17 2024-02-20 江苏诚印科技有限公司 Seal image accurate identification method and system and seal image identification processing method
WO2021248270A1 (en) * 2020-06-08 2021-12-16 上海交通大学 Heterogeneous image registration method and system
US12067728B2 (en) 2020-06-08 2024-08-20 Shanghai Jiaotong University Heterogeneous image registration method and system
CN112150520A (en) * 2020-08-18 2020-12-29 徐州华讯科技有限公司 Image registration method based on feature points
CN112669360A (en) * 2020-11-30 2021-04-16 西安电子科技大学 Multi-source image registration method based on non-closed multi-dimensional contour feature sequence
CN112669360B (en) * 2020-11-30 2023-03-10 西安电子科技大学 Multi-source image registration method based on non-closed multi-dimensional contour feature sequence
CN114998630A (en) * 2022-07-19 2022-09-02 北京科技大学 A Coarse-to-fine Earth-to-Air Image Registration Method
CN115272153A (en) * 2022-08-12 2022-11-01 中国人民解放军战略支援部队信息工程大学 An image matching enhancement method based on feature sparse region detection
CN116109682A (en) * 2022-09-23 2023-05-12 中国航空无线电电子研究所 An Image Registration Method Based on Image Diffusion Feature
CN116109915A (en) * 2023-04-17 2023-05-12 济宁能源发展集团有限公司 Intelligent recognition method for container door state
CN116797636A (en) * 2023-06-05 2023-09-22 北京数慧时空信息技术有限公司 Remote sensing image registration method based on sparse region extraction strategy
CN117830301A (en) * 2024-03-04 2024-04-05 青岛正大正电力环保设备有限公司 Slag dragging region detection method based on infrared and visible light fusion characteristics
CN117830301B (en) * 2024-03-04 2024-05-14 青岛正大正电力环保设备有限公司 Slag dragging region detection method based on infrared and visible light fusion characteristics
CN118761917A (en) * 2024-07-17 2024-10-11 山东中微星辰电子科技有限公司 Dual-light fusion method of visible light and thermal imaging images based on multi-scale features
CN119183028A (en) * 2024-11-25 2024-12-24 贵州省公路建设养护集团有限公司 Image data acquisition method, device and system applied to bridge engineering
CN119183028B (en) * 2024-11-25 2025-05-06 贵州省公路建设养护集团有限公司 Image data acquisition method, device and system applied to bridge engineering

Also Published As

Publication number Publication date
CN104318548B (en) 2017-02-15

Similar Documents

Publication Publication Date Title
CN104318548B (en) Rapid image registration implementation method based on space sparsity and SIFT feature extraction
CN107563438B (en) A Fast and Robust Multimodal Remote Sensing Image Matching Method and System
CN105335973B (en) Apply to the visual processing method of strip machining production line
CN104867126B (en) Based on point to constraint and the diameter radar image method for registering for changing region of network of triangle
CN103727930B (en) A kind of laser range finder based on edge matching and camera relative pose scaling method
CN107301661A (en) High-resolution remote sensing image method for registering based on edge point feature
CN103020945A (en) Remote sensing image registration method of multi-source sensor
CN103295232B (en) Based on the SAR image registration method in straight line and region
CN102654902A (en) Contour vector feature-based embedded real-time image matching method
CN104376564B (en) Method based on anisotropic Gaussian directional derivative wave filter extraction image thick edge
CN105046271A (en) MELF (Metal Electrode Leadless Face) component positioning and detecting method based on match template
CN112396643A (en) Multi-mode high-resolution image registration method with scale-invariant features and geometric features fused
CN101334263A (en) Method for locating the center of a circular target
CN108346162A (en) Remote sensing image registration method based on structural information and space constraint
CN105631872B (en) Remote sensing image registration method based on multi-characteristic points
CN106981077A (en) Infrared image and visible light image registration method based on DCE and LSS
CN107564006B (en) Circular target detection method utilizing Hough transformation
CN107452030A (en) Method for registering images based on contour detecting and characteristic matching
CN104899888A (en) A Method of Image Subpixel Edge Detection Based on Legendre Moments
CN105654423A (en) Area-based remote sensing image registration method
CN103136525A (en) High-precision positioning method for special-shaped extended target by utilizing generalized Hough transformation
CN103839262A (en) SAR image registration method based on straight lines and FFT
CN101833763B (en) Method for detecting reflection image on water surface
CN114494371A (en) Optical image and SAR image registration method based on multi-scale phase consistency
CN110222661A (en) It is a kind of for motion estimate and the feature extracting method of tracking

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170215

CF01 Termination of patent right due to non-payment of annual fee