
CN103337068B - Multi-subregion matching method with spatial relationship constraints - Google Patents

Multi-subregion matching method with spatial relationship constraints

Info

Publication number
CN103337068B
CN103337068B CN201310219392.6A CN201310219392A
Authority
CN
China
Prior art keywords
matching
sub
subtemplate
template
spatial relation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310219392.6A
Other languages
Chinese (zh)
Other versions
CN103337068A (en)
Inventor
王岳环
刘畅
吴明强
张利
张天序
陈君灵
周辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201310219392.6A priority Critical patent/CN103337068B/en
Publication of CN103337068A publication Critical patent/CN103337068A/en
Application granted granted Critical
Publication of CN103337068B publication Critical patent/CN103337068B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a multi-subregion matching method based on spatial relationship constraints. Several subregions of a matching template are selected as sub-templates and matched simultaneously against the image to be matched, and the spatial positional relationships among the sub-templates are used to achieve accurate image matching. The method comprises: selecting several subregions of the matching template as sub-templates; determining the spatial positional relationships between the sub-templates; keeping those relationships fixed so that the sub-templates form a combined template, sliding the combined template over the image to be matched, and obtaining a similarity value of the combined template at each search position; and comparing these similarity values and taking the position with the largest value as the best matching position, which completes the matching. By exploiting the mutual spatial relationships among the subregions, the method meets the required matching accuracy and precision while offering markedly better real-time performance in target recognition than traditional large-template grayscale or contour matching algorithms.

Description

Multi-subregion matching method with spatial relationship constraints

Technical field

The invention belongs to the technical field of digital image matching, and in particular relates to a multi-subregion image matching method.

Background art

With the rapid development of science and technology, especially computer technology, image processing techniques that obtain information directly from images have advanced rapidly. Image matching is one of the most important techniques in computer vision and image processing.

Image matching is the process of finding, by matching computation, the sub-image in one or more unknown images that corresponds to a known pattern. At present, image matching technology is widely applied in fields such as the military, industry, remote sensing, medicine and machine vision.

In practical image matching applications, a template of appropriate size must be chosen. However, enlarging the template drastically increases the computational cost of matching, making the method unsuitable for applications with strict real-time requirements, while shrinking the template to reduce computation degrades the correctness and precision of the matching. In applications with high real-time requirements, matching correctness and precision sometimes have to be sacrificed to guarantee real-time performance.

Recent work has also used similar template matching approaches. For example, the paper "Research on infrared forward-looking recognition of a class of special building targets" by Ming, Tian, et al. (Journal of Astronautics, Vol. 31, No. 4, April 2010) addresses the characteristics of infrared/visible multimodal image matching and proposes a computation based on the gradient-vector correlation coefficient. To overcome the drawback that the gradient-magnitude correlation coefficient discards gradient direction information, that method matches with the gradient vector field, which avoids the loss of direction information, and adopts a large-template similarity measure, considerably improving matching performance. In situations with strict real-time requirements, however, the method falls short: it cannot fully exploit the important information of the template within a short computation time to achieve fast matching.

Existing methods for speeding up matching find it difficult to guarantee good matching performance: either the correctness and precision of the matching differ noticeably from those of the non-accelerated algorithm, or the speed-up is unstable — the amount of computation is reduced considerably in some cases but hardly at all in others, so that the accelerated algorithm runs essentially as slowly as the original one.

Summary of the invention

In view of the above defects or improvement needs of the prior art, the present invention provides a multi-subregion matching method based on spatial relationship constraints. Its purpose is to split a large template into several smaller sub-templates, each of which keeps the same positional relationship it has within the large template, thereby solving the technical problem of reducing the amount of computation and increasing the computation speed while preserving matching correctness and precision.

The specific technical scheme adopted to realize the object of the present invention is as follows:

(1) Selecting the subregions

Several small areas belonging to a large area are called subregions. Several subregions are selected on the template, and these subregions serve as the sub-templates.

The general principle for selecting subregions is to choose regions containing corner points or salient line features, or regions that are distinguishable from the other subregions or have a certain degree of discriminability, while avoiding regions with many repeated patterns. Each subregion should not be too small; otherwise scale or angle errors will strongly affect matching precision and correctness. The spatial relationship constraint should also be respected: the selected subregions should not be too close to one another and should together cover most of the information of the large template, which also reduces errors and improves matching precision. A minimal illustrative selection sketch is given below.
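The patent states these selection criteria but gives no concrete algorithm. The following sketch is purely an illustration of one possible realization, not the patent's method: candidate windows are ranked by gradient energy as a crude proxy for corner/line content, and a window is kept only if its top-left corner is far enough from already-chosen ones. The function name, window size and spacing threshold are assumptions.

```python
import numpy as np

def pick_subregions(template, win=32, n_sub=5, min_gap=40):
    """Illustrative subregion selection: rank candidate windows by gradient
    energy (a rough measure of corner/line content) and greedily keep the
    best windows whose top-left corners are at least `min_gap` apart."""
    gy, gx = np.gradient(template.astype(np.float64))
    energy = gx ** 2 + gy ** 2
    H, W = template.shape
    candidates = []
    for r in range(0, H - win + 1, win // 2):
        for c in range(0, W - win + 1, win // 2):
            candidates.append((energy[r:r + win, c:c + win].sum(), r, c))
    candidates.sort(reverse=True)          # most "distinctive" windows first
    chosen = []
    for _, r, c in candidates:
        if all(abs(r - rr) + abs(c - cc) >= min_gap for rr, cc in chosen):
            chosen.append((r, c))
        if len(chosen) == n_sub:
            break
    return chosen  # top-left corners of the selected subregions
```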

(2) Computing the spatial positional relationships of the subregions

The spatial positional relationships between the subregions are computed from the template image, as follows:

First, take any subregion as the reference and take any point in that reference subregion (for example, its top-left corner or its center) as the coordinate origin. By computing the distance from the corresponding point of every other subregion to this point of the reference subregion, the coordinates of the other subregions are obtained, which gives the spatial positional relationship between every other subregion and the reference subregion.

Since only the relative positions of the subregions are computed and the positional relationships between the subregions are fixed, the spatial relationships among the subregions do not change regardless of which subregion is chosen as the reference, so the choice has no effect on matching performance.
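As a concrete illustration of step (2), the short sketch below computes the offset of each subregion relative to a chosen reference, using the top-left corner of each subregion as its anchor point. The function name and the sample coordinates are assumptions, not values taken from the patent.

```python
def relative_offsets(subregion_corners, ref_index=0):
    """Offsets of each subregion's top-left corner relative to a chosen
    reference subregion, given their (row, col) corners inside the large
    template. Any subregion may serve as the reference; the mutual offsets
    do not depend on that choice."""
    rx, ry = subregion_corners[ref_index]
    return [(x - rx, y - ry) for (x, y) in subregion_corners]

# hypothetical corner coordinates of five subregions on the template
corners = [(12, 20), (12, 85), (60, 15), (64, 90), (40, 55)]
offsets = relative_offsets(corners)
# -> [(0, 0), (0, 65), (48, -5), (52, 70), (28, 35)]
```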

(3) Matching computation

Matching means using the subregions of the reference image to search the real-time image to be matched for the identical or most similar region. During the matching computation, the spatial positional relationships among the subregions are kept unchanged and all sub-templates are moved over the image to be matched simultaneously. The degree of matching is measured by a similarity value, so the matching process is the process of computing similarities.

When computing the similarity, the similarity value of each subregion can be computed separately and the similarity (confidence) values of the subregions then summed; the resulting sum measures how similar the region formed by the subregions is to the template, as sketched below.
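A minimal sketch of this combined search follows, assuming the subregion offsets from step (2) and a per-subregion similarity function (for example the NCC of formula (1) given next). All names and the (row, column) convention are illustrative, not taken from the patent.

```python
import numpy as np

def match_combined_template(search_img, subtemplates, offsets, similarity):
    """Slide the combined template over the image to be matched while keeping
    the relative offsets of the sub-templates fixed; return the reference
    position with the largest summed similarity (cf. formula (2)).

    subtemplates : list of 2-D arrays (the sub-templates)
    offsets      : (dx, dy) of each sub-template relative to the reference
                   sub-template, with offsets[0] == (0, 0)
    similarity   : callable(search_img, subtemplate, x, y) -> float
    """
    H, W = search_img.shape
    # reference positions for which every sub-template stays inside the image
    x_lo = max(0, -min(dx for dx, _ in offsets))
    y_lo = max(0, -min(dy for _, dy in offsets))
    x_hi = H - max(dx + t.shape[0] for (dx, _), t in zip(offsets, subtemplates))
    y_hi = W - max(dy + t.shape[1] for (_, dy), t in zip(offsets, subtemplates))
    best_score, best_pos = -np.inf, None
    for x in range(x_lo, x_hi + 1):
        for y in range(y_lo, y_hi + 1):
            score = sum(similarity(search_img, t, x + dx, y + dy)
                        for t, (dx, dy) in zip(subtemplates, offsets))
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score
```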

There are many mature methods for computing similarity. The normalized cross-correlation method is used here to illustrate how the similarity value R of one subregion is computed.

The normalized cross-correlation (NCC) is computed as in formula (1):

$$
R(x,y)=\frac{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} I(x+i,\,y+j)\,T(i,j)}{\sqrt{\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} I^{2}(x+i,\,y+j)\;\sum_{i=0}^{M-1}\sum_{j=0}^{N-1} T^{2}(i,j)}}\qquad(1)
$$

where R(x, y) is the similarity value; I(i, j) is the search image (the image to be matched) of size W×H; T(i, j) is the subregion template of size M×N; (i, j) denotes a pixel; and M, N, W, H are positive integers, M and N being the dimensions of the subregion template and W and H the dimensions of the search image. (x, y) is the coordinate, in the search image, of a chosen point of the sub-image covered by the template (for example its top-left corner).
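A direct NumPy transcription of formula (1) for one subregion could look as follows. The function and argument names are assumptions, and whether (x, y) indexes rows or columns is an implementation choice left open by the text.

```python
import numpy as np

def ncc(search_img, subtemplate, x, y):
    """Normalized cross-correlation R(x, y) of formula (1): one sub-template
    T of size M x N against the patch of the search image I whose top-left
    corner is at (x, y)."""
    M, N = subtemplate.shape
    patch = search_img[x:x + M, y:y + N].astype(np.float64)
    T = subtemplate.astype(np.float64)
    denom = np.sqrt(np.sum(patch ** 2) * np.sum(T ** 2))
    return float(np.sum(patch * T) / denom) if denom > 0 else 0.0
```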

Of course, when computing the similarity value, the subregions may instead be treated together as one large template and a single overall similarity value computed.

(4) Obtaining the best matching point

These similarity values are compared over the search area, and the position with the largest similarity is taken as the best matching position.

The method of the present invention splits a large template into several smaller sub-templates. Each sub-template keeps the same positional relationship as in the large template and contains only the important information of the large template, ignoring information of little use. As a result, while fully preserving matching correctness and precision, the method is applicable to many scenarios, greatly increases matching speed, substantially reduces the amount of computation, and thus achieves the goal of improving computation speed.

Image matching with the spatial-relationship-constrained multi-subregion matching method of the present invention makes full use of the important information in the large template while ignoring the information of little use in the computation, which reduces the amount of computation and satisfies real-time requirements; at the same time the mutual spatial relationships among the subregions are used to meet the accuracy and precision requirements of the matching. Extensive experimental results show that, compared with traditional large-template grayscale or contour matching algorithms, the method offers substantially better real-time performance in target recognition.

Brief description of the drawings

Fig. 1 is a schematic flowchart of the spatial-relationship-constrained multi-subregion matching method according to an embodiment of the present invention.

Fig. 2 is a scene image used in a concrete application of the embodiment of the present invention.

Fig. 3 is the large target template selected in the embodiment of the present invention.

Fig. 4 is a schematic diagram of the multiple subregions selected in the embodiment of the present invention.

Fig. 5 is a schematic diagram of the mutual positional relationships of the subregions in the embodiment of the present invention.

Fig. 6 shows the coordinates of each subregion on the image to be matched during the matching computation in the embodiment of the present invention.

Detailed description of the embodiments

In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

As shown in Fig. 1, the specific steps of the multi-subregion matching method based on spatial relationship constraints in this embodiment are as follows:

(1) Selecting the subregions

The scene image and the target are shown in Fig. 2, and the template image selected for matching the target is shown in Fig. 3. According to the characteristics of the template image and the subregion selection principles introduced above, several (for example five) subregions with a certain degree of discriminability are selected on the template image. These regions mainly contain corner points, line features, shapes that distinguish them from the other subregions, or other discriminative content. The selected regions are shown in Fig. 4.

(2) Obtaining the spatial positional relationships of the subregions

After the subregions have been selected, the spatial positional relationships among them must also be obtained.

Any subregion can be taken as the reference, with some point of the reference subregion (preferably its top-left corner or its center) as the coordinate origin. The relative distances between the other subregions and the reference subregion are then computed; these distances can be obtained by reading the relative coordinates of the subregions from their positions on the large template, which yields the coordinates of the other subregions.

Here, taking the top-left corner of the reference subregion as the coordinate origin as an example, the spatial positional relationships of the other subregions to the reference subregion on the template image are shown in Fig. 5.

If the target is observed from different viewing angles at high altitude, the size and shape of its image change because of perspective transformation. A down-looking image and a forward-down-looking image are taken from different angles and therefore differ considerably in geometry, which makes registration difficult. To reduce the influence of the high-altitude viewing angle, before matching each sub-template should first be perspective-transformed from the down-looking direction to the forward-looking direction, and the transformed sub-templates are then used for matching.
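The patent does not detail how this perspective correction is performed. One common way, shown here as a hedged sketch with placeholder point correspondences, is to estimate a homography from four known point pairs (e.g. derived from the imaging geometry) and warp each sub-template with OpenCV.

```python
import cv2
import numpy as np

# Stand-in sub-template; in practice this would be one of the selected sub-templates.
subtemplate = np.zeros((64, 64), dtype=np.uint8)

# Four corresponding points in the down-looking sub-template and in the desired
# forward-looking view. The values are placeholders; real ones would come from
# the known imaging geometry, which the patent does not spell out.
src_pts = np.float32([[0, 0], [63, 0], [63, 63], [0, 63]])
dst_pts = np.float32([[6, 0], [57, 0], [63, 63], [0, 63]])

H = cv2.getPerspectiveTransform(src_pts, dst_pts)       # 3x3 homography
warped = cv2.warpPerspective(subtemplate, H, (64, 64))  # warped sub-template
```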

(3) Matching computation

During the matching computation, the spatial positional relationships of the subregions are maintained; that is, the subregions keep their positional relationships unchanged and are searched and matched over the image to be matched simultaneously.

To maintain the spatial positional relationships of the subregions, the position of the reference subregion can be determined first; the positions of the other subregions on the image to be matched are then computed from their positional relationships to the reference subregion, after which the matching computation is performed.

For example, if the coordinates of the reference subregion on the image to be matched are (70, 221), the coordinates of the other subregions on that image are computed from their mutual positional relationships, as shown in Fig. 6.
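Under the assumption that the offsets from step (2) are available, this example position propagates to the remaining subregions as in the short sketch below; the offset values are illustrative, not taken from Fig. 6.

```python
ref_pos = (70, 221)   # reference subregion position found on the image to be matched
# offsets of each subregion relative to the reference, from step (2); values illustrative
offsets = [(0, 0), (0, 65), (48, -5), (52, 70), (28, 35)]
positions = [(ref_pos[0] + dx, ref_pos[1] + dy) for dx, dy in offsets]
# e.g. the third subregion would sit at (70 + 48, 221 - 5) = (118, 216)
```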

After the matching position of each subregion on the image to be matched has been determined, the similarity between the template and the image to be matched can be computed.

When computing the similarity measure, the similarity of each subregion is first computed separately, and the similarities of the subregions are then summed; the final result is given by formula (2).

$$
c = c_1 + c_2 + \cdots + c_n \qquad (2)
$$

where c is the similarity computed at one position of the template on the real-time image, c_1, c_2, …, c_n are the similarities of the individual subregions, and n is the number of subregions.

Alternatively, the subregions can be treated as parts of one whole large template; that is, during the computation the combination of subregions is regarded as a single large template and one overall similarity value c is computed directly. Treating the subregions as a whole in this way is similar to computing the subregion similarities separately and summing them, and compared with matching directly with all of the information of the large template it also has a speed advantage. This multi-subregion matching method with spatial relationship constraints accumulates the similarities of the combined subregions, makes full use of the important feature information contained in the large template, excludes the redundant feature information of the large template, and reduces the amount of computation, and is therefore superior in real-time matching performance. A hedged sketch of this combined big-template variant is given below.
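One way to realize the combined big-template variant, sketched here as an assumption rather than the patent's exact formulation, is to evaluate the large-template NCC only over the pixels covered by the subregions via a binary mask:

```python
import numpy as np

def masked_ncc(search_img, big_template, mask, x, y):
    """NCC of the large template restricted to the pixels where mask == 1
    (the union of the subregions), so only the 'important' pixels of the
    large template contribute to the overall similarity value c."""
    M, N = big_template.shape
    patch = search_img[x:x + M, y:y + N].astype(np.float64) * mask
    T = big_template.astype(np.float64) * mask
    denom = np.sqrt(np.sum(patch ** 2) * np.sum(T ** 2))
    return float(np.sum(patch * T) / denom) if denom > 0 else 0.0
```

In a real implementation the masked pixels would typically be gathered into a sparse list so that only subregion pixels are touched at each search position, which is where the speed advantage over full large-template matching comes from.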

(4) Obtaining the best matching point

Finally, the similarity values are compared over the search area and the position with the largest similarity is taken as the best matching position, completing the matching.

Those skilled in the art will readily understand that the above are only preferred embodiments of the present invention and are not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (3)

1. A multi-subregion matching method based on spatial relationship constraints, which selects multiple subregions of a matching template as sub-templates, matches them simultaneously against an image to be matched, and uses the spatial positional relationships of the sub-templates to achieve accurate image matching, characterized in that the method comprises:
selecting several subregions of the matching template as sub-templates, the subregions being separate and non-overlapping;
determining the spatial positional relationship between the sub-templates, wherein the spatial positional relationships of the sub-templates are determined by the distances between corresponding points of the sub-templates; that is, any sub-template is taken as the reference and any point in this reference sub-template as the coordinate origin, and the distance from the corresponding point of every other sub-template to this point is computed, whereby the spatial positional relationship between every other sub-template and the reference sub-template is obtained;
keeping the spatial positional relationships of the sub-templates unchanged so that they form a combined template, and moving the combined template over the image to be matched to perform a search and obtain multiple similarity values of the combined template;
comparing the above multiple similarity values and taking the position with the largest similarity value as the best matching position, completing the matching.
2. The multi-subregion matching method based on spatial relationship constraints according to claim 1, characterized in that the similarity value of the combined template is obtained by summing the similarity values of the sub-templates in the combined template.
3. The multi-subregion matching method based on spatial relationship constraints according to claim 1, characterized in that said any point in the reference sub-template may be the center point, a boundary point or any other point of the sub-template region.
CN201310219392.6A 2013-06-04 2013-06-04 Multi-subregion matching method with spatial relationship constraints Expired - Fee Related CN103337068B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310219392.6A CN103337068B (en) 2013-06-04 2013-06-04 Multi-subregion matching method with spatial relationship constraints

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310219392.6A CN103337068B (en) 2013-06-04 2013-06-04 Multi-subregion matching method with spatial relationship constraints

Publications (2)

Publication Number Publication Date
CN103337068A CN103337068A (en) 2013-10-02
CN103337068B true CN103337068B (en) 2015-09-09

Family

ID=49245216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310219392.6A Expired - Fee Related CN103337068B (en) 2013-06-04 2013-06-04 Multi-subregion matching method with spatial relationship constraints

Country Status (1)

Country Link
CN (1) CN103337068B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103761730B (en) * 2013-12-31 2016-04-13 华中科技大学 The road vehicle target image aero-optical effect bearing calibration of knowledge constraints
CN105260740B (en) * 2015-09-23 2019-03-29 广州视源电子科技股份有限公司 Element identification method and device
CN106996911B (en) * 2016-01-22 2019-08-20 名硕电脑(苏州)有限公司 The localization method of two-dimensional detection paste solder printing
CN106503737B (en) * 2016-10-20 2019-03-05 广州视源电子科技股份有限公司 Electronic component positioning method and device
CN107734268A (en) * 2017-09-18 2018-02-23 北京航空航天大学 A kind of structure-preserved wide baseline video joining method
CN107590517B (en) * 2017-09-19 2020-07-10 安徽大学 An image similarity measurement method and image registration method based on shape information
CN108010068A (en) * 2017-11-29 2018-05-08 中国人民解放军火箭军工程大学 Ground time critical target recognition methods based on gradient direction characteristic point pair
CN109410175B (en) * 2018-09-26 2020-07-14 北京航天自动控制研究所 SAR radar imaging quality rapid automatic evaluation method based on multi-subregion image matching
CN110288040B (en) * 2019-06-30 2022-02-11 北京华融塑胶有限公司 Image similarity judging method and device based on topology verification
CN111915898B (en) * 2020-07-24 2022-07-08 杭州金通科技集团股份有限公司 Parking monitoring AI electronic post house
CN112634140B (en) * 2021-03-08 2021-08-13 广州松合智能科技有限公司 A high-precision full-size visual image acquisition system and method
CN119478464A (en) * 2025-01-16 2025-02-18 华芯程(杭州)科技有限公司 A graphic matching method, device, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102194131B (en) * 2011-06-01 2013-04-10 华南理工大学 Fast human face recognition method based on geometric proportion characteristic of five sense organs
CN102663754B (en) * 2012-04-17 2014-12-10 北京博研新创数码科技有限公司 Image matching calculation method based on regional Gaussian weighting

Also Published As

Publication number Publication date
CN103337068A (en) 2013-10-02

Similar Documents

Publication Publication Date Title
CN103337068B (en) Multi-subregion matching method with spatial relationship constraints
CN101950419B (en) Quick image rectification method in presence of translation and rotation at same time
CN103679702B (en) A kind of matching process based on image border vector
CN104200461B (en) The remote sensing image registration method of block and sift features is selected based on mutual information image
CN109711321B (en) A structure-adaptive method for wide-baseline image view-invariant line feature matching
CN111354033B (en) Digital image measuring method based on feature matching
CN103558850A (en) Laser vision guided welding robot full-automatic movement self-calibration method
CN102032877A (en) Three-dimensional measuring method based on wavelet transformation
US11645846B2 (en) Closed-loop detecting method using inverted index-based key frame selection strategy, storage medium and device
CN111275748B (en) Point cloud registration method based on laser radar in dynamic environment
CN104154911B (en) A kind of sea-floor relief two dimension matching auxiliary navigation method with rotational invariance
CN104268880A (en) Depth information obtaining method based on combination of features and region matching
CN106056605A (en) In-orbit high-precision image positioning method based on image coupling
CN107862735A (en) A kind of RGBD method for reconstructing three-dimensional scene based on structural information
CN106296587A (en) The joining method of tire-mold image
CN106355607A (en) Wide-baseline color image template matching method
CN108021886A (en) A kind of unmanned plane repeats texture image part remarkable characteristic matching process
CN110211178A (en) A kind of pointer instrument recognition methods calculated using projection
CN103513247B (en) Method for matching synthetic aperture radar image and optical image same-name point
CN104574519A (en) Threshold-free automatic robust matching method for multi-source residence surface elements
CN111598177A (en) An Adaptive Maximum Sliding Window Matching Method for Low Overlap Image Matching
CN102999895A (en) Method for linearly solving intrinsic parameters of camera by aid of two concentric circles
CN104318566A (en) Novel multi-image plumb line track matching method capable of returning multiple elevation values
CN110210576A (en) A kind of the figure spot similarity calculation method and system of map datum
CN104517280A (en) Three-dimensional imaging method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150909

Termination date: 20200604

CF01 Termination of patent right due to non-payment of annual fee