CN114677518A - An image feature point detection method, system and computer storage medium - Google Patents
An image feature point detection method, system and computer storage medium
- Publication number
- CN114677518A CN114677518A CN202210417636.0A CN202210417636A CN114677518A CN 114677518 A CN114677518 A CN 114677518A CN 202210417636 A CN202210417636 A CN 202210417636A CN 114677518 A CN114677518 A CN 114677518A
- Authority
- CN
- China
- Prior art keywords
- image
- points
- processing
- feature
- feature point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000001514 detection method Methods 0.000 title claims description 26
- 238000001914 filtration Methods 0.000 claims description 26
- 238000012545 processing Methods 0.000 claims description 26
- 239000011159 matrix material Substances 0.000 claims description 20
- 238000000034 method Methods 0.000 claims description 17
- 230000003044 adaptive effect Effects 0.000 claims description 4
- 238000013507 mapping Methods 0.000 claims description 3
- 238000012216 screening Methods 0.000 claims 1
- 238000004364 calculation method Methods 0.000 description 11
- 230000000694 effects Effects 0.000 description 5
- 238000012986 modification Methods 0.000 description 5
- 230000004048 modification Effects 0.000 description 5
- 238000009499 grossing Methods 0.000 description 4
- 239000013598 vector Substances 0.000 description 3
- 238000010276 construction Methods 0.000 description 2
- 230000001186 cumulative effect Effects 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 238000012937 correction Methods 0.000 description 1
- 230000004927 fusion Effects 0.000 description 1
- 238000003384 imaging method Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20068—Projection on vertical or horizontal image axis
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
Description
Technical Field
The present invention relates to the technical field of image processing, and in particular to an image feature point detection method, system and computer storage medium.
Background Art
In long-range infrared and visible-light imaging registration and fusion systems, the target occupies only a very small proportion of the image and its energy is weak, which leads to low detection accuracy for target feature points in the image.
Summary of the Invention
Embodiments of the present invention provide an image feature point detection method, system and computer storage medium, to solve the prior-art problem that feature point detection accuracy is low because the target occupies a small proportion of the image and has weak energy.
In one aspect, an embodiment of the present invention provides an image feature point detection method, including:
performing image enhancement processing on an image to be detected to obtain an enhanced image;
performing guided filtering processing on the enhanced image to obtain a filtered image;
performing SURF feature point detection on the filtered image to obtain target feature points.
In another aspect, an embodiment of the present invention provides an image feature point detection system, including:
an image enhancement module, configured to perform image enhancement processing on an image to be detected to obtain an enhanced image;
an image filtering module, configured to perform guided filtering processing on the enhanced image to obtain a filtered image;
a feature point detection module, configured to perform SURF feature point detection on the filtered image to obtain target feature points.
In another aspect, an embodiment of the present invention provides a computer storage medium storing a plurality of computer instructions, where the computer instructions are used to cause a computer to execute the above method.
The image feature point detection method, system and computer storage medium of the present invention have the following advantages:
for feature point detection in low signal-to-noise-ratio images, the method effectively improves the signal-to-noise ratio while suppressing image noise, achieves fast and accurate feature point detection, and offers strong real-time performance, which favours engineering implementation.
Brief Description of the Drawings
To explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flowchart of an image feature point detection method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of determining the dominant orientation of a feature point provided by an embodiment of the present invention;
FIG. 3 is a schematic diagram of determining Haar wavelet features provided by an embodiment of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
FIG. 1 is a flowchart of an image feature point detection method provided by an embodiment of the present invention. An embodiment of the present invention provides an image feature point detection method, including:
S100. Perform image enhancement processing on the image to be detected to obtain an enhanced image.
Exemplarily, S100 specifically includes: S101, filtering the image to be detected to obtain a low-frequency image; S102, taking the difference between the image to be detected and the low-frequency image to obtain a high-frequency image; S103, performing adaptive mixed dimming processing on the low-frequency image to obtain a dimmed image; S104, superimposing the dimmed image and the high-frequency image to obtain the enhanced image.
In S101, a mean filter may be used to filter the image to be detected to obtain the low-frequency image, as sketched below. S103 may specifically include: S105, performing histogram statistics on the low-frequency image to obtain a corresponding histogram mapping output; S106, performing linear dimming processing on the low-frequency image to obtain a corresponding dimming output; S107, performing a weighted sum of the histogram mapping output and the dimming output to obtain the dimmed image.
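As an illustration of steps S101 and S102, the following sketch (in Python; the function name and window size are illustrative assumptions, not taken from the patent) splits the input into the low-frequency and high-frequency layers with a mean filter and a subtraction:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def decompose(image, win=15):
    """Split an image into a low-frequency (base) layer and a high-frequency (detail) layer.

    The window size `win` is an assumed example value; the patent leaves it unspecified.
    """
    img = image.astype(np.float64)
    low = uniform_filter(img, size=win)   # S101: mean-filtered low-frequency image
    high = img - low                      # S102: the difference is the high-frequency image
    return low, high
```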
Specifically, in S105, assume the gray levels of the image run from 0 to L, let $r_k$ denote the k-th gray level and $n_k$ the number of pixels with gray level $r_k$; traversing the whole image gives the counts of all gray levels, i.e. the histogram statistics of the image. The cumulative histogram is then computed from the histogram statistics as
$c(r_k)=\sum_{j=r_l}^{r_k} n_j$    (1)
where $n_j$ denotes the pixel count of gray level $r_j$, and $r_l$ and $r_h$ denote the minimum and the maximum occupied gray level, respectively.
The histogram mapping output is computed from the cumulative histogram of formula (1) as
$G_{\mathrm{hist}}(x,y)=\dfrac{c\big(g(x,y)\big)-c(r_l)}{c(r_h)-c(r_l)}\,(r_h-r_l)+r_l$    (2)
where $c(\cdot)$ is the cumulative histogram of formula (1) and $g(x,y)$ is the gray level at image coordinates $(x,y)$.
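A minimal sketch of S105 and formula (2), assuming the equalization-style normalization written above (the normalization constants are reconstructed from context, not quoted from the patent):

```python
import numpy as np

def histogram_mapping(low, levels=256):
    """Histogram statistics (formula 1) and histogram mapping output (formula 2)."""
    g = np.clip(np.round(low), 0, levels - 1).astype(np.int64)
    counts = np.bincount(g.ravel(), minlength=levels)     # n_k for every gray level
    occupied = np.nonzero(counts)[0]
    r_l, r_h = occupied[0], occupied[-1]
    c = np.cumsum(counts)                                  # cumulative histogram c(r_k)
    denom = max(c[r_h] - c[r_l], 1)                        # avoid division by zero
    mapped = (c - c[r_l]) / denom * (r_h - r_l) + r_l      # per-level mapping of formula (2)
    return mapped[g]
```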
In S106, the linear dimming output is computed as
$G_{\mathrm{lin}}(x,y)=\dfrac{g(x,y)-d_{\min}}{d_{\max}-d_{\min}}\,(h_{\max}-h_{\min})+h_{\min}$    (3)
where $d_{\min}$ and $d_{\max}$ denote the minimum and maximum values of the detail-layer image, $h_{\min}$ and $h_{\max}$ denote the minimum and maximum values of the histogram mapping output, and $g(x,y)$ is the gray level of the current pixel, i.e. the gray value at image coordinates $(x,y)$.
In S107, the final weighted-sum output is computed as
$G_{\mathrm{lf}}(x,y)=w_1\,G_{\mathrm{lin}}(x,y)+w_2\,G_{\mathrm{hist}}(x,y)$    (4)
where $w_1$ and $w_2$ are the weight values of linear dimming and histogram correction, respectively, and $w_1+w_2=1$.
In S104, the dimmed low-frequency image is superimposed on the high-frequency image, with dynamically configurable weight coefficients for the low-frequency and high-frequency layers, finally yielding the image with enhanced signal-to-noise ratio, i.e. the enhanced image. The output is computed as
$G_{\mathrm{out}}(x,y)=\alpha\,G_{\mathrm{lf}}(x,y)+\beta\,G_{\mathrm{hf}}(x,y)$    (5)
where $\alpha$ and $\beta$ are the configurable weights of the dimmed low-frequency layer $G_{\mathrm{lf}}$ and the high-frequency layer $G_{\mathrm{hf}}$.
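Putting S103 and S104 together, a sketch of formulas (3) to (5); the default weights are illustrative assumptions, since the patent only requires $w_1+w_2=1$ and configurable layer coefficients:

```python
import numpy as np

def enhance(low, high, hist_mapped, w_lin=0.5, w_hist=0.5, alpha=1.0, beta=1.5):
    """Adaptive mixed dimming (formulas 3-4) and layer recombination (formula 5)."""
    d_min, d_max = float(low.min()), float(low.max())
    h_min, h_max = float(hist_mapped.min()), float(hist_mapped.max())
    # Formula (3): linearly map the dimmed layer's range onto the histogram-output range.
    linear = (low - d_min) / max(d_max - d_min, 1e-12) * (h_max - h_min) + h_min
    # Formula (4): weighted sum of the linear-dimming and histogram-mapping outputs.
    dimmed = w_lin * linear + w_hist * hist_mapped
    # Formula (5): recombine the dimmed low-frequency layer with the high-frequency layer.
    return alpha * dimmed + beta * high
```

With the earlier sketches, a call sequence would be `low, high = decompose(img)` followed by `enhance(low, high, histogram_mapping(low))`.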
S110. Perform guided filtering processing on the enhanced image to obtain a filtered image.
Illustratively, guided filtering is an adaptive-weight filtering method that smooths the image while preserving edges. As a linear filter, the guided filter can be defined simply as
$Q_i=\sum_j W_{ij}(I)\,P_j$    (6)
where $I$ is the guidance image, $P$ is the input image to be filtered, $Q$ is the filtered output image, and $W_{ij}$ is the weight determined from the guidance image $I$.
The weight $W_{ij}$ can be expressed as
$W_{ij}(I)=\dfrac{1}{|\omega|^{2}}\sum_{k:(i,j)\in\omega_k}\left(1+\dfrac{(I_i-\mu_k)(I_j-\mu_k)}{\sigma_k^{2}+\epsilon}\right)$    (7)
where $\mu_k$ is the mean of the pixels in window $\omega_k$, $I_i$ and $I_j$ are the values of two neighboring pixels, $\sigma_k^{2}$ is the variance of the pixels in the window, and $\epsilon$ is a penalty value. The behaviour of the weight follows from this expression: on the two sides of an edge, $(I_i-\mu_k)$ and $(I_j-\mu_k)$ have opposite signs, otherwise they have the same sign. The weight in the opposite-sign case is far smaller than in the same-sign case, so pixels in flat regions receive large weights and are smoothed strongly, while pixels on the two sides of an edge receive small weights and are smoothed only weakly, which preserves the edges.
The penalty value $\epsilon$ also has a large influence on the filtering result: when $\epsilon$ is small, the filter behaves as described above; when $\epsilon$ is very large, the weight expression approximates a mean filter and the smoothing becomes more pronounced.
The adaptive-weight principle of guided filtering can also be understood from its local linear filtering model:
$Q_i=a_k I_i+b_k,\quad\forall i\in\omega_k$    (8)
$a_k=\dfrac{\frac{1}{|\omega|}\sum_{i\in\omega_k} I_i\,P_i-\mu_k\,\bar P_k}{\sigma_k^{2}+\epsilon}$    (9)
$b_k=\bar P_k-a_k\,\mu_k$    (10)
where the two coefficients $a_k$ and $b_k$ are jointly determined by the guidance image $I$ and the input image $P$, and their values decide the relative weight of gradient information and smoothing.
Looking at the formulas for $a$ and $b$: the numerator of $a$ is the covariance of $I$ and $P$ in the window, and its denominator is the variance of $I$ plus the penalty value $\epsilon$; the value of $b$ is the mean of $P$ minus $a$ times the mean of $I$. When $a$ is small, $b$ is approximately the mean $\bar P_k$ of the pixels in the window and the output approximates mean filtering; when $a$ is large, the output is dominated by $a_k I_i$ and gradient information is preserved.
The guided filtering computation proceeds as follows. First, compute the mean images of the guidance image $I$ and the input image $P$:
$\mathrm{mean}_I=f_{\mathrm{mean}}(I)$    (11)
$\mathrm{mean}_P=f_{\mathrm{mean}}(P)$    (12)
where $f_{\mathrm{mean}}(\cdot)$ denotes a mean (box) filter over the window $\omega$. Next, compute the mean images of $I\cdot I$ and $I\cdot P$:
$\mathrm{mean}_{II}=f_{\mathrm{mean}}(I\cdot I)$    (13)
$\mathrm{mean}_{IP}=f_{\mathrm{mean}}(I\cdot P)$    (14)
Then obtain the variance image of $I$ and the covariance image of $I$ and $P$:
$\mathrm{var}_I=\mathrm{mean}_{II}-\mathrm{mean}_I\cdot\mathrm{mean}_I$    (15)
$\mathrm{cov}_{IP}=\mathrm{mean}_{IP}-\mathrm{mean}_I\cdot\mathrm{mean}_P$    (16)
From the formulas for $a$ and $b$ above, the coefficient images are
$a=\dfrac{\mathrm{cov}_{IP}}{\mathrm{var}_I+\epsilon}$    (17)
$b=\mathrm{mean}_P-a\cdot\mathrm{mean}_I$    (18)
Finally, average $a$ and $b$ over the window and combine the averaged coefficients to obtain the final result:
$\mathrm{mean}_a=f_{\mathrm{mean}}(a)$    (19)
$\mathrm{mean}_b=f_{\mathrm{mean}}(b)$    (20)
$Q=\mathrm{mean}_a\cdot I+\mathrm{mean}_b$    (21)
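A minimal NumPy sketch of the box-filter formulation in formulas (11) to (21); the function name, window radius and $\epsilon$ value are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, P, radius=8, eps=1e-3):
    """Guided filtering per formulas (11)-(21): box-filter means, variance/covariance,
    per-window linear coefficients a and b, then averaged coefficients applied to I.
    """
    size = 2 * radius + 1
    I = I.astype(np.float64); P = P.astype(np.float64)
    mean_I = uniform_filter(I, size)           # (11)
    mean_P = uniform_filter(P, size)           # (12)
    mean_II = uniform_filter(I * I, size)      # (13)
    mean_IP = uniform_filter(I * P, size)      # (14)
    var_I = mean_II - mean_I * mean_I          # (15)
    cov_IP = mean_IP - mean_I * mean_P         # (16)
    a = cov_IP / (var_I + eps)                 # (17)
    b = mean_P - a * mean_I                    # (18)
    mean_a = uniform_filter(a, size)           # (19)
    mean_b = uniform_filter(b, size)           # (20)
    return mean_a * I + mean_b                 # (21)
```

For edge-preserving smoothing of the enhanced image itself, the enhanced image can be passed as both the guidance image and the input, e.g. `guided_filter(enhanced, enhanced)`.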
S120. Perform SURF feature point detection on the filtered image to obtain target feature points.
Exemplarily, S120 specifically includes: S121, constructing a Hessian matrix and using it to process the filtered image, obtaining the corresponding pixel responses over the two-dimensional image space; S122, constructing the scale space; S123, comparing each pixel obtained from the Hessian-matrix processing with its neighbors in the two-dimensional image space and in the scale space to determine feature points; S124, determining a dominant orientation for each feature point; S125, building the corresponding feature point descriptor along the dominant orientation; S126, matching feature points according to their descriptors and rejecting mismatched feature points.
In S121, constructing the Hessian matrix generates stable edge points. Before constructing the Hessian matrix, the guided-filtered image is first Gaussian filtered; the Hessian matrix after Gaussian filtering is expressed as
$H(x,\sigma)=\begin{pmatrix}L_{xx}(x,\sigma)&L_{xy}(x,\sigma)\\L_{xy}(x,\sigma)&L_{yy}(x,\sigma)\end{pmatrix}$    (22)
where $L_{xx}$, $L_{xy}$ and $L_{yy}$ are the second-order derivatives of the Gaussian-smoothed image at point $x$ and scale $\sigma$.
When the discriminant of the Hessian matrix reaches a local maximum, the current point is judged to be brighter or darker than the other points in its surrounding neighborhood, and the key point is localized accordingly. Meanwhile, to increase the computation speed, the present invention uses a guided filter as an approximate replacement for the Gaussian filter. The discriminant of the Hessian matrix is expressed as
$\det(H_{\mathrm{approx}})=D_{xx}D_{yy}-(0.9\,D_{xy})^{2}$    (23)
where $D_{xx}$, $D_{yy}$ and $D_{xy}$ are the approximated second-order derivative responses.
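A sketch of the determinant-of-Hessian response of formula (23) at a single scale, using plain Gaussian second derivatives rather than the patent's guided-filter approximation; the 0.9 weight follows the standard SURF formulation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_response(img, sigma):
    """Determinant-of-Hessian response, formula (23), at one scale sigma."""
    img = img.astype(np.float64)
    Dxx = gaussian_filter(img, sigma, order=(0, 2))   # second derivative along x (columns)
    Dyy = gaussian_filter(img, sigma, order=(2, 0))   # second derivative along y (rows)
    Dxy = gaussian_filter(img, sigma, order=(1, 1))   # mixed second derivative
    return Dxx * Dyy - (0.9 * Dxy) ** 2
```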
In S122, as in SIFT, the SURF scale space is composed of octaves (O) and layers (L). For the SURF feature points, the images of the different octaves all keep the same size; instead, the template size of the guided filter used grows from octave to octave, while the different layers within one octave use guided filters of the same size but with a gradually increasing blur coefficient.
S123 may specifically include: S127, comparing each pixel obtained from the Hessian-matrix processing with the points in its two-dimensional image-space and scale-space neighborhood to determine candidate key points; S128, filtering out key points with weak energy as well as wrongly localized key points, and screening out the final feature points. Specifically, each pixel processed by the Hessian matrix can be compared with the 26 points in its two-dimensional image-space and scale-space neighborhood to preliminarily localize the key points.
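A sketch of the 26-neighbor comparison of S127 on a stack of single-scale responses; the stack construction (e.g. from `hessian_response` at several sigmas) and the response threshold used to discard weak key points (S128) are assumed for illustration:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima_3d(dets, threshold=1e-4):
    """Keep points whose response exceeds all 26 neighbors in (scale, y, x).

    `dets` is a (num_scales, H, W) stack of determinant-of-Hessian responses;
    the threshold value is an assumed example.
    """
    is_max = dets == maximum_filter(dets, size=(3, 3, 3), mode="constant", cval=-np.inf)
    keep = is_max & (dets > threshold)
    # Drop the border scales and image borders, where the 26-neighborhood is incomplete.
    keep[[0, -1], :, :] = False
    keep[:, [0, -1], :] = False
    keep[:, :, [0, -1]] = False
    return np.argwhere(keep)   # rows of (scale_index, y, x)
```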
S124 may specifically include: S129, within a circular neighborhood of the feature point, summing the horizontal and vertical Haar wavelet responses of all points that fall inside a sector of a given angle; S130, rotating the sector by a fixed angular step and again summing the horizontal and vertical Haar wavelet responses of all points inside the rotated sector; S131, taking the direction of the sector whose summed horizontal and vertical Haar wavelet responses are largest as the dominant orientation of the feature point. Specifically, the SURF feature point detection algorithm uses the Haar wavelet responses within a circular neighborhood of the feature point: the sums of the horizontal and vertical Haar wavelet responses of all points inside a 60-degree sector are computed, the sector is then rotated in steps of 0.2 radian and the Haar wavelet responses inside the region are summed again, and finally the direction of the sector with the largest sum is taken as the dominant orientation of the feature point. This process is shown in FIG. 2.
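A sketch of S129 to S131, assuming the horizontal and vertical Haar wavelet responses of the points inside the circular neighborhood of one feature point have already been computed:

```python
import numpy as np

def dominant_orientation(dx, dy, sector=np.pi / 3, step=0.2):
    """Slide a 60-degree sector in 0.2-rad steps and return the angle of the sector
    whose summed (dx, dy) response vector is longest.

    `dx`, `dy` are 1-D arrays of Haar wavelet responses of the neighborhood points.
    """
    angles = np.arctan2(dy, dx)
    best_angle, best_len = 0.0, -1.0
    for start in np.arange(0.0, 2 * np.pi, step):
        # Select responses whose angle falls inside the current sector (wrap-around safe).
        diff = (angles - start) % (2 * np.pi)
        mask = diff < sector
        sx, sy = dx[mask].sum(), dy[mask].sum()
        length = np.hypot(sx, sy)
        if length > best_len:
            best_len, best_angle = length, np.arctan2(sy, sx)
    return best_angle
```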
In S125: in the SIFT algorithm, 4*4 region blocks around the feature point are taken, 8 gradient directions are accumulated within each small block, and a 4*4*8 = 128-dimensional vector serves as the SIFT descriptor. In the SURF algorithm, a 4*4 block of rectangular sub-regions around the feature point is likewise taken, but the block is oriented along the dominant orientation of the feature point. In each sub-region, the horizontal and vertical Haar wavelet responses of 25 sample points are accumulated, where horizontal and vertical are defined relative to the dominant orientation. The accumulated Haar wavelet features are four values: the sum of the horizontal responses, the sum of the vertical responses, the sum of the absolute horizontal responses and the sum of the absolute vertical responses; this process is shown in FIG. 3. Taking these four values as the feature vector of each sub-block gives a 4*4*4 = 64-dimensional vector as the SURF descriptor, half the length of the 128-dimensional SIFT descriptor.
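A sketch of assembling the 64-dimensional descriptor of S125 from per-sub-region Haar responses; the response computation and the rotation to the dominant orientation are assumed to have been done beforehand:

```python
import numpy as np

def surf_descriptor(dx_blocks, dy_blocks):
    """Build the 64-D SURF descriptor from the 4x4 sub-regions.

    `dx_blocks` / `dy_blocks` each hold 16 arrays of the 25 per-sample Haar responses
    of one sub-region, already expressed relative to the dominant orientation.
    """
    desc = []
    for dx, dy in zip(dx_blocks, dy_blocks):          # 16 sub-regions
        desc += [dx.sum(), dy.sum(), np.abs(dx).sum(), np.abs(dy).sum()]
    desc = np.asarray(desc, dtype=np.float64)         # 16 * 4 = 64 values
    return desc / (np.linalg.norm(desc) + 1e-12)      # unit-normalize for matching
```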
S126 may specifically include: S132, computing the Euclidean distance between two feature point descriptors and determining the matching degree from that distance; S133, for pairs whose matching degree exceeds the threshold, comparing the traces of their Hessian matrices; S134, rejecting the corresponding feature points if the signs of the two Hessian matrix traces differ. Specifically, the shorter the Euclidean distance, the better the match between the two feature points. If the Hessian matrix traces of the two feature points have the same sign, the two points exhibit contrast changes in the same direction; if the signs differ, the contrast changes of the two feature points are in opposite directions, and the pair must be rejected even if the Euclidean distance between them is zero.
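A sketch of the matching and rejection logic of S132 to S134; the distance threshold is an assumed example value:

```python
import numpy as np

def match_features(desc1, desc2, trace_sign1, trace_sign2, max_dist=0.3):
    """Match descriptors by Euclidean distance (S132) and reject pairs whose
    Hessian-trace signs differ (S133-S134).
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)        # distance to every candidate
        j = int(np.argmin(dists))
        if dists[j] < max_dist and trace_sign1[i] == trace_sign2[j]:
            matches.append((i, j))
    return matches
```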
An embodiment of the present invention further provides an image feature point detection system, including:
an image enhancement module, configured to perform image enhancement processing on an image to be detected to obtain an enhanced image;
an image filtering module, configured to perform guided filtering processing on the enhanced image to obtain a filtered image;
a feature point detection module, configured to perform SURF feature point detection on the filtered image to obtain target feature points.
An embodiment of the present invention further provides a computer storage medium storing a plurality of computer instructions, where the computer instructions are used to cause a computer to execute the above method.
Although preferred embodiments of the present invention have been described, those skilled in the art can make further changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. Thus, provided that these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include them.
Claims (9)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210417636.0A CN114677518A (en) | 2022-04-21 | 2022-04-21 | An image feature point detection method, system and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114677518A true CN114677518A (en) | 2022-06-28 |
Family
ID=82078679
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210417636.0A Pending CN114677518A (en) | 2022-04-21 | 2022-04-21 | An image feature point detection method, system and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114677518A (en) |
- 2022
- 2022-04-21 CN CN202210417636.0A patent/CN114677518A/en active Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110135438A (en) * | 2019-05-09 | 2019-08-16 | 哈尔滨工程大学 | An Improved SURF Algorithm Based on Gradient Amplitude Precomputing |
CN110390338A (en) * | 2019-07-10 | 2019-10-29 | 武汉大学 | A High Precision Matching Method for SAR Based on Nonlinear Guided Filtering and Ratio Gradient |
KR102173244B1 (en) * | 2019-12-10 | 2020-11-03 | (주)인펙비전 | Video stabilization system based on SURF |
WO2021238655A1 (en) * | 2020-05-29 | 2021-12-02 | 展讯通信(上海)有限公司 | Image processing method and apparatus, storage medium and terminal |
CN113673515A (en) * | 2021-08-20 | 2021-11-19 | 国网上海市电力公司 | A computer vision target detection algorithm |
CN114332081A (en) * | 2022-03-07 | 2022-04-12 | 泗水县亿佳纺织厂 | Textile surface abnormity determination method based on image processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |