
CN113689403B - Feature description system based on azimuth distance between features - Google Patents

Feature description system based on azimuth distance between features

Info

Publication number
CN113689403B
CN113689403B
Authority
CN
China
Prior art keywords
feature
main
point
points
image
Prior art date
Legal status
Active
Application number
CN202110974875.1A
Other languages
Chinese (zh)
Other versions
CN113689403A (en)
Inventor
陶淑苹
冯钦评
刘春雨
曲宏松
徐伟
Current Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Original Assignee
Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority date
Filing date
Publication date
Application filed by Changchun Institute of Optics Fine Mechanics and Physics of CAS
Priority to CN202110974875.1A
Publication of CN113689403A
Application granted
Publication of CN113689403B
Legal status: Active
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/0002 — Inspection of images, e.g. flaw detection
    • G06T 5/00 — Image enhancement or restoration
    • G06T 5/40 — Image enhancement or restoration using histogram techniques
    • G06T 5/70 — Denoising; Smoothing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

A feature description system based on the inter-feature azimuth distance, relating to the field of image feature extraction. All feature points of an image are detected by a feature detector and screened by a feature screening unit; an inter-feature relationship calculation unit then calculates the azimuth and distance between each primary feature point and all other secondary feature points, and from these the relationship strength. Finally, the feature descriptor generation unit generates the feature description, i.e. a feature description vector. The method quantitatively describes features using the azimuth-distance information between them, so features can be described without scale space or gray-gradient information while retaining a degree of discriminability and scale invariance. The method has low computational complexity, is easy to implement in hardware, helps accelerate applications such as feature matching and image registration, and offers potential for fast, real-time applications.

Description

Feature description system based on the azimuth distance between features

Technical field

The invention relates to the field of image feature extraction, and in particular to a feature description system based on the azimuth distance between features.

Background art

Feature extraction detects point-like features, corners and similar features in an image and quantitatively describes them with an appropriate method, in preparation for subsequent processing such as image registration, image stitching and image fusion. Feature extraction is therefore a key step in many image applications. It consists of two stages, feature detection and feature description. Of the two, feature description has the higher computational complexity, because it must make full use of the gray-level variation of the pixels around each detected feature point to determine orientation, scale and other information. This limits many feature-based image processing methods in real-time applications.

Summary of the invention

To address the high computational complexity and poor real-time performance of feature description in the prior art, the present invention proposes a feature description system based on the azimuth distance between features.

A feature description system based on the azimuth distance between features, comprising a Gaussian smoothing unit, a feature detector, a feature screening unit, an inter-feature relationship calculation unit and a feature descriptor generation unit;

The feature screening unit comprises a secondary feature screening unit and a primary feature screening unit;

The feature descriptor generation unit comprises a direction estimation unit and a direction-intensity histogram statistics unit;

The Gaussian smoothing unit smooths the input image I(x, y), after which feature detection is performed by the feature detector;

The feature detector detects the feature point set F^(I) of the image and at the same time obtains the position Loc(f_i^(I)) = (x_i, y_i) of each feature point f_i^(I) in the image and the response intensity Mag(f_i^(I)) of the feature point f_i^(I); the feature point set is expressed as:

F^(I) = { f_i^(I) | i ∈ [1, 2, ..., N_f] }

where i is the ordinal number of the detected feature point f_i^(I), and the feature point set of the image contains N_f feature points;

The secondary feature screening unit performs secondary feature point screening on the feature point set F^(I) obtained by the feature detector, specifically as follows:

A modulated feature response intensity Mag_m is introduced, expressed by the following formula:

where M and N are the width and height of the image; all feature points are sorted in descending order of modulated feature response intensity, the first half of the feature points are selected as secondary feature points, and the sequence numbers are then reassigned, giving f_i^(I,SF); the secondary feature points form the secondary feature point set F^(I,SF), expressed as:

F^(I,SF) = { f_i^(I,SF) | i ∈ [1, 2, ..., N_f/2] }

The secondary feature point set F^(I,SF) is screened by the primary feature screening unit to determine the primary feature points, as follows:

Initialize the primary feature point set F^(I,PF) of the image as the empty set, i.e. F^(I,PF) = ∅;

For each secondary feature point f_i^(I,SF), define a circular domain D(f_i^(I,SF)) of radius R, expressed as:

D(f_i^(I,SF)) = { (x, y) | (x − x_i)² + (y − y_i)² < R² }

If no other secondary feature point f_j^(I,SF) in this circular domain satisfies Mag(f_j^(I,SF)) > Mag(f_i^(I,SF)), then f_i^(I,SF) is taken as a primary feature point, i.e. f_i^(I,SF) ∈ F^(I,PF);

Reassign ordinal numbers to all primary feature points in the primary feature point set F^(I,PF), giving the primary feature points f_i^(I,PF);

The primary feature point set then satisfies F^(I,PF) ⊆ F^(I,SF) ⊆ F^(I);

The inter-feature relationship calculation unit calculates the relative azimuth and distance between each primary feature point f_i^(I,PF) and every other secondary feature point f_j^(I,SF);

The direction estimation unit determines the main direction of each primary feature point, takes this main direction as the reference direction, and maps the relative azimuths of all secondary feature points with respect to that primary feature point clockwise into the range 0 to 2π;

The main direction Ori(f_i^(I,PF)) is determined by the relative azimuth of the secondary feature point having the strongest relationship with that primary feature point, namely:

where the selected secondary feature point is the one whose relationship strength with f_i^(I,PF) is largest:

The remapped relative azimuth is denoted Azim_rm(f_j^(I,SF) | f_i^(I,PF));

The direction-intensity histogram statistics unit performs statistics on the main direction Ori(f_i^(I,PF)) and the remapped relative azimuths Azim_rm(f_j^(I,SF) | f_i^(I,PF)) to generate the feature description vector, as follows:

For each primary feature point, with that point as the center and starting from its main direction, the image is divided into n sector-shaped regions, representing n direction intervals;

Then, within each direction interval, the sum of the relationship strengths of all secondary feature points relative to that primary feature point is calculated; this sum is the intensity of the direction interval;

After the intensities of all direction intervals have been obtained, they form the feature description vector V(f_i^(I,PF) | F^(I,SF)) of f_i^(I,PF), expressed as:

V(f_i^(I,PF) | F^(I,SF)) = [ O_1(f_i^(I,PF) | F^(I,SF)), …, O_n(f_i^(I,PF) | F^(I,SF)) ]

This is repeated until the feature description vectors of all primary feature points have been obtained.

Beneficial effects of the present invention:

In the present invention, the secondary feature screening unit determines the secondary feature points by combining the response intensity of the detected feature points with their position coordinates in the image, which improves the repeatability of the system; the primary feature screening unit then determines the primary feature points from among the secondary feature points, which improves the robustness of the system to changes in viewing angle.

When describing features, the system of the present invention uses the azimuth-distance information between features to quantify and describe them. This method can describe features without using scale space or gray-gradient information, while maintaining a certain degree of discriminability as well as scale invariance. The method has low computational complexity, is easy to implement in hardware, helps accelerate applications such as feature matching and image registration, and offers potential for fast, real-time applications.

Brief description of the drawings

Figure 1 is a functional block diagram of the feature description system based on the azimuth distance between features according to the present invention;

Figure 2 is a schematic diagram of the determination of primary feature points;

Figure 3 is a schematic diagram of the calculation of azimuth and distance between features;

Figure 4 is a histogram of relative azimuth versus relationship strength between features;

Figure 5 is a histogram of remapped azimuth versus relationship strength between features;

Figure 6 is the azimuth-interval histogram.

Detailed description of the embodiments

This embodiment is described with reference to Figures 1 to 6. The feature description system based on the azimuth distance between features comprises a Gaussian smoothing unit, a feature detector, a feature screening unit, an inter-feature relationship calculation unit and a feature descriptor generation unit; the feature screening unit comprises a secondary feature screening unit and a primary feature screening unit, and the feature descriptor generation unit comprises a direction estimation unit and a direction-intensity histogram statistics unit;

First, an image I(x, y) is acquired from an external device (for example, a camera or a CMOS detector). The Gaussian smoothing unit smooths the acquired image to reduce the uncertainty that noise introduces into subsequent feature detection and to improve the repeatability of the subsequent feature screening and feature description, yielding a smoothed image.

The feature detector performs feature detection on the smoothed image and at the same time obtains the position of each feature point in the image and its response intensity. Any feature detection method may be used, for example FAST, SIFT or SURF, but FAST is generally chosen for its computational performance. After all features have been obtained, a feature set of the image I(x, y) is formed, containing all feature points f_i^(I) together with their position coordinates Loc(f_i^(I)) = (x_i, y_i) and response intensities Mag(f_i^(I)); the detected features form the feature set

F^(I) = { f_i^(I) | i ∈ [1, 2, ..., N_f] }    (1)

where i is the ordinal number of the detected feature point; that is, the feature point set of the image contains N_f feature points.
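As an illustration of this stage (not part of the patent text), the following minimal sketch uses OpenCV's Gaussian blur and FAST detector to obtain the positions Loc(f_i^(I)) and response intensities Mag(f_i^(I)); the function name smooth_and_detect and the parameter values sigma and fast_threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def smooth_and_detect(image_gray, sigma=1.5, fast_threshold=20):
    """Gaussian smoothing followed by FAST corner detection.

    Returns the feature positions Loc(f_i) and response intensities Mag(f_i).
    sigma and fast_threshold are illustrative values, not taken from the patent.
    """
    # Gaussian smoothing unit: suppress noise before detection
    smoothed = cv2.GaussianBlur(image_gray, ksize=(0, 0), sigmaX=sigma)

    # Feature detector: FAST corners (SIFT, SURF, etc. could be substituted)
    fast = cv2.FastFeatureDetector_create(threshold=fast_threshold)
    keypoints = fast.detect(smoothed, None)

    locations = np.array([kp.pt for kp in keypoints], dtype=np.float32)         # (x_i, y_i)
    magnitudes = np.array([kp.response for kp in keypoints], dtype=np.float32)  # Mag(f_i)
    return locations, magnitudes
```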

To improve the repeatability of the system, the secondary feature screening unit screens the detected feature points using both their response intensity and their image coordinates. A feature point near the center of the image can be described by the azimuth-distance information of other feature points on all sides, whereas a feature point near an edge or corner can only be described by other feature points over half or a quarter of the azimuth range, so feature points near the center can be described more accurately. Therefore, the response intensities of features near the image center are given larger weights, and those of features near the edges or corners are given smaller weights. A modulated feature response intensity Mag_m is introduced, namely

where M and N are the width and height of the image. All feature points are sorted in descending order of modulated feature response intensity, the first half are selected as secondary feature points, and the sequence numbers are reassigned, giving f_i^(I,SF); these secondary feature points form the secondary feature point set F^(I,SF):

F^(I,SF) = { f_i^(I,SF) | i ∈ [1, 2, ..., N_f/2] }    (3)
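Because the exact expression for Mag_m is given only as a formula image in the original publication, the sketch below substitutes a simple linear center-weighting as a placeholder; only the sort-and-keep-half step comes directly from the text, and the function name screen_secondary_features is illustrative.

```python
import numpy as np

def screen_secondary_features(locations, magnitudes, width, height):
    """Secondary feature screening: rank features by a modulated response
    intensity Mag_m and keep the better half.

    NOTE: the patent gives Mag_m only as a formula image; the linear
    center-weighting below is an illustrative stand-in, not the patented formula.
    """
    cx, cy = width / 2.0, height / 2.0
    # Assumed modulation: weight decays with distance from the image center
    dist_to_center = np.hypot(locations[:, 0] - cx, locations[:, 1] - cy)
    weight = 1.0 - dist_to_center / np.hypot(cx, cy)
    mag_m = magnitudes * weight

    order = np.argsort(-mag_m)          # descending by modulated intensity
    keep = order[: len(order) // 2]     # first half become the secondary feature set
    return locations[keep], magnitudes[keep]
```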

Because the feature response intensity returned by the feature detector changes with image parallax, the stability of the feature description is affected. To mitigate this problem, the primary feature screening unit further determines primary feature points from among the secondary feature points, following the steps below:

a) Initialize the primary feature point set of the image as the empty set, i.e. F^(I,PF) = ∅;

b) For each secondary feature point f_i^(I,SF), define its circular domain D(f_i^(I,SF)) of radius R:

D(f_i^(I,SF)) = { (x, y) | (x − x_i)² + (y − y_i)² < R² }    (4)

where R is taken as (1/20)·min(M, N);

c) If no other secondary feature point f_j^(I,SF) in the circular domain satisfies Mag(f_j^(I,SF)) > Mag(f_i^(I,SF)), then f_i^(I,SF) is a primary feature point, i.e. f_i^(I,SF) ∈ F^(I,PF).

d) Reassign ordinal numbers to all primary feature points in F^(I,PF), giving f_i^(I,PF) (how the ordinal numbers are assigned does not matter).

As shown in Figure 2, suppose an image contains 14 secondary feature points A to N, where the number next to each label is its modulated response intensity. Given the circular-domain radius shown in the figure, the screening rule above determines A, B, C, F, J, K, M and N to be primary feature points;

It can be seen that these feature point sets satisfy F^(I,PF) ⊆ F^(I,SF) ⊆ F^(I).
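This step amounts to a non-maximum suppression over circular neighborhoods of radius R = min(M, N)/20; the sketch below is a direct, unoptimized rendering of steps a) to d), with an illustrative function name.

```python
import numpy as np

def screen_primary_features(sec_locations, sec_magnitudes, width, height):
    """Primary feature screening (steps a to d): a secondary feature point is
    kept as a primary feature point if no stronger secondary feature point
    lies inside its circular domain of radius R = min(M, N) / 20.
    Returns the indices of the primary feature points within the secondary set.
    """
    R = min(width, height) / 20.0
    primary_idx = []
    for i, (xi, yi) in enumerate(sec_locations):
        d2 = (sec_locations[:, 0] - xi) ** 2 + (sec_locations[:, 1] - yi) ** 2
        in_circle = d2 < R ** 2
        in_circle[i] = False  # exclude the point itself
        if not np.any(sec_magnitudes[in_circle] > sec_magnitudes[i]):
            primary_idx.append(i)
    return np.array(primary_idx, dtype=int)
```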

After all primary feature points have been determined, the inter-feature relationship calculation unit calculates the relative azimuth and distance between each primary feature point and every other secondary feature point: for a primary feature point f_i^(I,PF) and a secondary feature point f_j^(I,SF), the azimuth Azim(f_j^(I,SF) | f_i^(I,PF)) of f_j^(I,SF) relative to f_i^(I,PF) and the distance Dist(f_j^(I,SF) | f_i^(I,PF)) are given by:

Dist(f_j^(I,SF) | f_i^(I,PF)) = (x_i − x_j)² + (y_i − y_j)²    (6)

where (x_i, y_i) = Loc(f_i^(I,PF)) and (x_j, y_j) = Loc(f_j^(I,SF)). The relationship strength between f_j^(I,SF) and f_i^(I,PF) is defined as the reciprocal of their distance:

As shown in Figure 3, taking feature point A as an example, the remaining feature points are regarded as secondary feature points. After a direction is selected as the absolute direction, it is used as the reference to compute the relative azimuth θ_i and the distance between each of these points and point A, and the relationship strength is computed at the same time, as shown in Table 1.

Table 1
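The contents of Table 1 and the azimuth formula appear only as images in the original, so the sketch below assumes a conventional azimuth (angle from the x-axis, wrapped to [0, 2π)); the squared distance of Eq. (6) and the reciprocal-distance relationship strength follow the text.

```python
import numpy as np

def azimuth_distance_strength(primary_loc, sec_locations):
    """Relative azimuth, distance and relationship strength between one
    primary feature point and every secondary feature point.

    The azimuth convention (angle from the x-axis, wrapped to [0, 2*pi)) is an
    assumption; the patent's Azim formula appears only as an image.
    """
    dx = sec_locations[:, 0] - primary_loc[0]
    dy = sec_locations[:, 1] - primary_loc[1]

    azim = np.mod(np.arctan2(dy, dx), 2.0 * np.pi)   # assumed azimuth convention
    dist = dx ** 2 + dy ** 2                         # Eq. (6): squared distance
    with np.errstate(divide="ignore"):
        strength = np.where(dist > 0, 1.0 / dist, 0.0)  # reciprocal of the distance
    return azim, dist, strength
```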

After the relationships between every primary feature point and all other secondary feature points have been determined, the direction estimation unit determines the main direction of each primary feature point (Figure 3 also shows the main direction of one primary feature point). This direction is taken as the reference direction (zero direction), and the relative azimuths of all secondary feature points with respect to that primary feature point are mapped clockwise into the range 0 to 2π. The histogram in Figure 4 shows another example, covering the relationships between one primary feature point and all other secondary feature points.

The main direction Ori(f_i^(I,PF)) is determined by the relative azimuth of the secondary feature point having the strongest relationship with that primary feature point, namely

where the selected secondary feature point is the one whose relationship strength with f_i^(I,PF) is largest:

The remapped relative azimuth is denoted Azim_rm(f_j^(I,SF) | f_i^(I,PF)).

If the main direction in Figure 4 is taken as the reference direction and the relative azimuths of all secondary feature points with respect to that primary feature point are mapped clockwise into the range 0 to 2π, Figure 5 is obtained.
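A small sketch of the main-direction estimation and remapping; interpreting the clockwise mapping as the wrapped difference from the main direction is an assumption, since the corresponding formula is given only as an image.

```python
import numpy as np

def remap_azimuths(azim, strength):
    """Main-direction estimation and clockwise remapping.

    The main direction Ori is the azimuth of the secondary feature point with
    the strongest relationship; every azimuth is then re-expressed relative to
    it. Treating "clockwise" as the wrapped difference from the main direction
    is an assumption.
    """
    main_dir = azim[np.argmax(strength)]            # Ori(f_i)
    azim_rm = np.mod(main_dir - azim, 2.0 * np.pi)  # Azim_rm in [0, 2*pi)
    return main_dir, azim_rm
```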

On this basis, the direction-intensity histogram statistics unit compiles these data and generates the feature description vector. The specific method is as follows: for each primary feature point, with that point as the center and starting from its main direction, the image is divided into n sector-shaped regions (n is generally at least 10; the larger n is, the more discriminative the generated descriptor, but the greater the computational load during feature matching), representing n direction intervals. Then, within each direction interval, the sum of the relationship strengths of all secondary feature points relative to that primary feature point is calculated; this is called the intensity of the direction interval. For example, the sum of the relationship strengths of all secondary feature points in the p-th sector relative to the primary feature point is O_p(f_i^(I,PF) | F^(I,SF)):

where the indicator function used in the sum is defined by the following formula:

After the intensities of all direction intervals have been obtained, they form the feature description vector V(f_i^(I,PF) | F^(I,SF)) of f_i^(I,PF):

V(f_i^(I,PF) | F^(I,SF)) = [ O_1(f_i^(I,PF) | F^(I,SF)), …, O_n(f_i^(I,PF) | F^(I,SF)) ]    (12)

For example, in Figure 5 all directions are divided into n direction intervals starting from the main direction (here n = 10); the intensity of each azimuth interval is then computed, yielding the azimuth-interval histogram, which is the feature description vector of that feature, as shown in Figure 6.
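A sketch of the histogram accumulation that produces V = [O_1, ..., O_n]; uniform sector widths of 2π/n are assumed, and the function name is illustrative.

```python
import numpy as np

def describe_feature(azim_rm, strength, n_bins=10):
    """Direction-intensity histogram: split [0, 2*pi) into n equal sectors
    starting at the main direction and sum the relationship strengths in each
    sector, giving the description vector V = [O_1, ..., O_n].
    """
    bin_idx = np.floor(azim_rm / (2.0 * np.pi / n_bins)).astype(int)
    bin_idx = np.clip(bin_idx, 0, n_bins - 1)      # guard the 2*pi boundary
    descriptor = np.zeros(n_bins, dtype=np.float64)
    np.add.at(descriptor, bin_idx, strength)       # accumulate per direction interval
    return descriptor
```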

The above steps are repeated to obtain the feature description vectors of all primary feature points.
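For orientation only, the pieces sketched above can be wired together as follows; this assumes the helper functions from the earlier snippets are in scope, and the file name example.png is a placeholder.

```python
import cv2
import numpy as np

# Assumes smooth_and_detect, screen_secondary_features, screen_primary_features,
# azimuth_distance_strength, remap_azimuths and describe_feature (sketched above)
# are defined in the same module; "example.png" is a placeholder input image.
image = cv2.imread("example.png", cv2.IMREAD_GRAYSCALE)
height, width = image.shape

locs, mags = smooth_and_detect(image)                                # detection
sec_locs, sec_mags = screen_secondary_features(locs, mags, width, height)
primary_idx = screen_primary_features(sec_locs, sec_mags, width, height)

descriptors = []
for i in primary_idx:
    azim, dist, strength = azimuth_distance_strength(sec_locs[i], sec_locs)
    mask = np.arange(len(sec_locs)) != i                             # exclude the point itself
    main_dir, azim_rm = remap_azimuths(azim[mask], strength[mask])
    descriptors.append(describe_feature(azim_rm, strength[mask], n_bins=10))

descriptors = np.vstack(descriptors)                                 # one row per primary feature point
```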

Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (3)

1. A feature description system based on the inter-feature azimuth distance, characterized in that: the system comprises a Gaussian smoothing unit, a feature detector, a feature screening unit, an inter-feature relationship calculation unit and a feature descriptor generation unit;
the feature screening unit comprises a secondary feature screening unit and a primary feature screening unit;
the feature descriptor generation unit comprises a direction estimation unit and a direction-intensity histogram statistics unit;
the Gaussian smoothing unit performs smoothing processing on the input image I(x, y), after which feature detection is performed by the feature detector;
the feature detector detects the feature point set F^(I) of the image and at the same time obtains the position Loc(f_i^(I)) = (x_i, y_i) of each feature point f_i^(I) in the image and the response intensity Mag(f_i^(I)) of the feature point f_i^(I); the feature point set is expressed as:
F^(I) = { f_i^(I) | i ∈ [1, 2, ..., N_f] }
wherein i represents the ordinal number of the detected feature point f_i^(I), and the feature point set of the image comprises N_f feature points;
the secondary feature screening unit performs secondary feature point screening on the feature point set F^(I) obtained by the feature detector, specifically as follows:
a modulated feature response intensity Mag_m is introduced, expressed by the following formula:
wherein M and N represent the width and height of the image; all feature points are sorted in descending order of the modulated feature response intensity, the feature points of the first half are selected as secondary feature points, and the sequence numbers are then reassigned, namely f_i^(I,SF); the secondary feature points form a secondary feature point set F^(I,SF), expressed by the following formula:
F^(I,SF) = { f_i^(I,SF) | i ∈ [1, 2, ..., N_f/2] }
the secondary feature point set F^(I,SF) is screened by the primary feature screening unit to determine the primary feature points, the specific process being as follows:
initializing the primary feature point set F^(I,PF) of the image as an empty set, i.e. F^(I,PF) = ∅;
for each secondary feature point f_i^(I,SF), defining a circular domain D(f_i^(I,SF)) with radius R, expressed by the following formula:
D(f_i^(I,SF)) = { (x, y) | (x − x_i)² + (y − y_i)² < R² }
if no other secondary feature point f_j^(I,SF) in the circular domain satisfies Mag(f_j^(I,SF)) > Mag(f_i^(I,SF)), then f_i^(I,SF) is taken as a primary feature point, i.e. f_i^(I,SF) ∈ F^(I,PF);
reassigning ordinal numbers to all primary feature points in the primary feature point set F^(I,PF) to obtain the primary feature points f_i^(I,PF);
the primary feature point set satisfies the following equation:
calculating, by the inter-feature relationship calculation unit, the relative azimuth and distance between each primary feature point f_i^(I,PF) and the other secondary feature points f_j^(I,SF);
the direction estimation unit determines the main direction of each primary feature point, takes the main direction as a reference direction, and maps the relative azimuths of all secondary feature points with respect to that primary feature point clockwise into the range 0 to 2π;
wherein the main direction Ori(f_i^(I,PF)) is determined according to the relative azimuth of the secondary feature point having the strongest relationship with the primary feature point, namely:
wherein the selected secondary feature point satisfies:
the remapped relative azimuth is denoted Azim_rm(f_j^(I,SF) | f_i^(I,PF));
the direction-intensity histogram statistics unit performs statistics on the main direction Ori(f_i^(I,PF)) and the remapped relative azimuths Azim_rm(f_j^(I,SF) | f_i^(I,PF)) to generate a feature description vector; the specific method comprises the following steps:
for each primary feature point, taking that point as the center and starting from its main direction, dividing the image into n sector-shaped areas, respectively representing n direction intervals;
then calculating, within each direction interval, the sum of the relationship strengths of all secondary feature points relative to that primary feature point, which forms the intensity of the direction interval;
after the intensities of all direction intervals are obtained, the feature description vector V(f_i^(I,PF) | F^(I,SF)) of f_i^(I,PF) is composed, expressed by the following formula:
until the feature description vectors of all the primary feature points are obtained.
2. The feature description system based on the inter-feature azimuth distance according to claim 1, wherein the inter-feature relationship calculation unit calculates the relative azimuth and distance between each primary feature point f_i^(I,PF) and the other secondary feature points f_j^(I,SF) as follows:
given a primary feature point f_i^(I,PF) and another secondary feature point f_j^(I,SF), the azimuth Azim(f_j^(I,SF) | f_i^(I,PF)) of f_j^(I,SF) relative to f_i^(I,PF) and the distance Dist(f_j^(I,SF) | f_i^(I,PF)) are expressed by the following formulas:
wherein (x_i, y_i) = Loc(f_i^(I,PF)) and (x_j, y_j) = Loc(f_j^(I,SF)); and the relationship strength between f_j^(I,SF) and f_i^(I,PF) is defined as the reciprocal of their distance:
3. The feature description system based on the inter-feature azimuth distance according to claim 1, wherein the sum of the relationship strengths of all secondary feature points in the p-th sector area relative to the primary feature point is O_p(f_i^(I,PF) | F^(I,SF)), expressed by the following formula:
wherein the indicator function used in the sum is expressed by the following formula:
after the intensities of all direction intervals are obtained, the feature description vector V(f_i^(I,PF) | F^(I,SF)) of f_i^(I,PF) is composed, until the feature description vectors of all the primary feature points are obtained.
CN202110974875.1A 2021-08-24 2021-08-24 Feature description system based on azimuth distance between features Active CN113689403B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110974875.1A CN113689403B (en) 2021-08-24 2021-08-24 Feature description system based on azimuth distance between features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110974875.1A CN113689403B (en) 2021-08-24 2021-08-24 Feature description system based on azimuth distance between features

Publications (2)

Publication Number Publication Date
CN113689403A CN113689403A (en) 2021-11-23
CN113689403B true CN113689403B (en) 2023-09-19

Family

ID=78581910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110974875.1A Active CN113689403B (en) 2021-08-24 2021-08-24 Feature description system based on azimuth distance between features

Country Status (1)

Country Link
CN (1) CN113689403B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2011149303A2 (en) * 2010-05-27 2011-12-01 Samsung Electronics Co., Ltd. Image capturing and display apparatus and method
CN104680550A (en) * 2015-03-24 2015-06-03 江南大学 Method for detecting defect on surface of bearing by image feature points
CN107945221A (en) * 2017-12-08 2018-04-20 北京信息科技大学 A kind of three-dimensional scenic feature representation based on RGB D images and high-precision matching process
CN108664983A (en) * 2018-05-21 2018-10-16 天津科技大学 A kind of scale and the adaptive SURF characteristic point matching methods of characteristic strength
CN111738278A (en) * 2020-06-22 2020-10-02 黄河勘测规划设计研究院有限公司 Underwater multi-source acoustic image feature extraction method and system
CN112085772A (en) * 2020-08-24 2020-12-15 南京邮电大学 Remote sensing image registration method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008130907A1 (en) * 2007-04-17 2008-10-30 Mikos, Ltd. System and method for using three dimensional infrared imaging to identify individuals
CN107633526B (en) * 2017-09-04 2022-10-14 腾讯科技(深圳)有限公司 Image tracking point acquisition method and device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Selection and Fusion of Color Models for Image Feature Detection; Harro Stokman; IEEE Transactions on Pattern Analysis and Machine Intelligence; full text *
Wavelet SCM algorithm for autonomous cloud discrimination by remote sensing cameras; 陶淑苹; 测绘学报; full text *
Improved image matching algorithm based on deep convolutional networks; 雷鸣, 刘传才; 计算机系统应用 (No. 01); 168-174 *

Also Published As

Publication number Publication date
CN113689403A (en) 2021-11-23

Similar Documents

Publication Publication Date Title
CN106960451B (en) A method to increase the number of feature points in weak texture areas of images
CN111667506B (en) Motion estimation method based on ORB feature points
CN105184830B (en) A kind of symmetrical shaft detection localization method of symmetric graph picture
CN104820996B (en) A kind of method for tracking target of the adaptive piecemeal based on video
CN110443295A (en) Improved images match and error hiding reject algorithm
CN111444948B (en) Image feature extraction and matching method
CN108346162A (en) Remote sensing image registration method based on structural information and space constraint
CN113643334A (en) Different-source remote sensing image registration method based on structural similarity
CN104899888B (en) A Method of Image Subpixel Edge Detection Based on Legendre Moments
CN112085709B (en) Image comparison method and device
CN114331879A (en) Visible light and infrared image registration method for equalized second-order gradient histogram descriptor
CN108537832A (en) Method for registering images, image processing system based on local invariant gray feature
CN117870659A (en) Visual inertial integrated navigation algorithm based on dotted line characteristics
CN113689403B (en) Feature description system based on azimuth distance between features
CN114943891A (en) Prediction frame matching method based on feature descriptors
CN106485264A (en) Divided based on gradient sequence and the curve of mapping policy is described and matching process
Anggara et al. Integrated Colormap and ORB detector method for feature extraction approach in augmented reality
CN117291790A (en) A SAR image registration method, device, equipment and medium
Alam et al. A comparative analysis of feature extraction algorithms for augmented reality applications
CN115690458A (en) A Method for Generating Rotation-Invariant Multiscale Ring Feature Descriptors
CN115760996A (en) Corner screening method and device, computer equipment and computer-readable storage medium
CN105787487B (en) Similarity matching method for shearing tool pictures
CN111105381B (en) Dense point cloud smoothing method based on spherical model
Moon et al. Fast image-matching technique robust to rotation in spherical images
CN103902982B (en) Target tracking method based on soft distribution BoF

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant