CN104065932B - Non-overlapping field-of-view target matching method based on a modified weighted bipartite graph - Google Patents
- Publication number: CN104065932B (application CN201410305768.XA)
- Authority: CN (China)
- Prior art keywords: target, matching, bipartite graph, observation, features
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention proposes a non-overlapping field-of-view target matching method based on a modified weighted bipartite graph, and relates to the field of computer vision. The invention formulates the non-overlapping field-of-view target matching problem as a maximum a posteriori (MAP) probability problem, thereby combining the target observation model with the spatio-temporal constraint information of the surveillance network, and solves the MAP problem by finding the maximum-weight matching of a weighted bipartite graph. To address the problem that ordinary weighted bipartite graph construction easily introduces incorrect matches, the invention proposes a modified construction method based on an adaptive threshold, so that incorrect matches are avoided as far as possible during graph construction. To address the drawback that the traditional KM (Kuhn-Munkres) method is computationally expensive on large-scale weighted bipartite graph matching problems, a method based on MH (Metropolis-Hastings) sampling is proposed to approximately solve the maximum-weight matching, thereby obtaining the non-overlapping field-of-view target matching relations.
Description
Technical Field
The invention belongs to the field of computer vision, specifically to the field of intelligent surveillance, and in particular relates to a non-overlapping field-of-view target matching method based on a modified weighted bipartite graph.
Background
With the development of camera surveillance technology, monitoring large areas has become an important means of protecting people's lives and property. However, for surveillance scenarios covering a large region, it is impractical to cover the entire area with cameras. Therefore, multi-camera surveillance systems with non-overlapping fields of view are usually built by covering only key areas. For a surveillance network with non-overlapping fields of view, the behavior of a target across the whole monitored scene can only be well understood once the matching relations of the target across different views have been determined. However, interference factors such as illumination changes and scene differences in non-overlapping view environments cause a target's observations to vary across views, reducing the probability of correct matching. It is therefore necessary to consider both target information and scene information, so as to raise the target matching rate under non-overlapping fields of view as much as possible.
The non-overlapping field-of-view target matching problem is essentially a data fusion problem: the goal is to fuse the appearance similarity information of the observation model with the spatio-temporal constraint information of the surveillance network, thereby achieving target matching across non-overlapping views. Researchers at home and abroad have studied this data fusion problem and obtained notable results. For example, the paper "Huang T, Russell S. Object identification: A Bayesian analysis with application to traffic surveillance [J]. Artificial Intelligence, 1998, 103(1): 1-17" established a Bayesian framework to match vehicles detected by two adjacent cameras on a highway. The paper "Kettnaker V, Zabih R. Bayesian multi-camera surveillance [C]. IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Fort Collins: IEEE, 1999. 2010-2016" also used a Bayesian framework to match and identify people indoors, approximating the MAP estimation problem as a linear programming problem. The paper "Javed O, Shafique K. Tracking across multiple cameras with disjoint views [C]. Proceedings of the IEEE International Conference on Computer Vision. Nice: IEEE, 2003. 952-957" not only gave a more detailed and thorough treatment of the appearance model and the estimation of topological relations, but also used bipartite graph matching to compute the maximum a posteriori probability, with a computation that runs in polynomial time.
Domestic research on related data fusion problems has largely built on the paper "Javed O, Shafique K. Tracking across multiple cameras with disjoint views [C]. Proceedings of the IEEE International Conference on Computer Vision. Nice: IEEE, 2003. 952-957". For example, the paper "Liu Shaohua, Zhang Maojun, Chen Wang. Data association method for multiple cameras with non-overlapping views [J]. Journal of Computer Applications, 2009, 29(9): 2378-2382" proposed an association strategy based on a weighted bipartite graph: targets serve as nodes, edges are constructed by combining temporal and spatial constraints, the similarity between targets gives the edge weights, and the optimal association result is obtained by finding the maximum-weight matching of the bipartite graph. The paper "Liu Shaohua, Lai Shiming, Zhang Maojun. Target association across multiple cameras with non-overlapping views based on a minimum-cost flow model [J]. Acta Automatica Sinica, 2010, 36(10): 1484-1489" proposed an association method based on minimum-cost flow, which corrects the over-reliance of bipartite graph association on the utility function. However, the time complexity of the cost-flow association problem is greater than that of the bipartite graph association problem, so it is unsuitable for large-scale matching problems.
Therefore, adaptively learning the topology of a multi-camera surveillance network with unsupervised learning methods is crucial to whether multi-camera surveillance systems can be applied in practice. In non-overlapping field-of-view target matching methods, three main points deserve attention: (1) how to fuse the target's own information with external environment information, so as to build a framework for non-overlapping field-of-view target matching; (2) how to use mature methods to solve the matching problem so constructed; (3) when using graph-theoretic methods to solve the matching problem, how to avoid introducing incorrect matching relations as far as possible while improving matching speed.
Summary of the Invention
The purpose of the present invention is to propose a non-overlapping field-of-view target matching method based on a modified weighted bipartite graph. The method is highly robust to the effects of illumination changes and environmental differences on target matching under non-overlapping fields of view.
The technical solution of the present invention is as follows: non-overlapping field-of-view target matching is expressed as a maximum a posteriori probability problem; in constructing this problem, both the target observation model and the spatio-temporal constraints of the surveillance network are taken into account. A weighted bipartite graph is constructed, and the MAP problem is solved by finding the graph's maximum-weight matching. To address the ease with which ordinary weighted bipartite graph construction introduces incorrect matches, a modified construction method based on an adaptive threshold is proposed, so that incorrect matches are avoided as far as possible. To address the excessive computational cost of the traditional KM method on large-scale weighted bipartite graph matching problems, a method based on MH sampling is proposed to approximately compute the maximum-weight matching, thereby obtaining the non-overlapping field-of-view target matching relations.
The specific implementation steps of the present invention are, in order:
1) Determine the target observation model
2) Determine the spatio-temporal constraints of the surveillance network
3) Partition the minimal spatio-temporal units
4) Construct the maximum a posteriori probability problem
5) Compute the adaptive threshold
6) Construct the modified weighted bipartite graph
7) Approximately solve the maximum-weight matching of the weighted bipartite graph with the MH method
Brief Description of the Drawings
Fig. 1 is a system flow chart of the non-overlapping field-of-view target matching method based on the modified weighted bipartite graph of the present invention
Fig. 2(a) is a schematic diagram of the extraction of the main color features used in the present invention
Fig. 2(b) is a schematic diagram of the extraction of the spatial texture features used in the present invention
Fig. 3 is a schematic diagram of the surveillance network topology in the present invention
Fig. 4 is a schematic diagram of the partition into minimal spatial units in the present invention
Fig. 5 is a schematic diagram of the partition into minimal temporal units in the present invention
Fig. 6 is a schematic diagram of the modified weighted bipartite graph in the present invention
Detailed Description
Fig. 1 gives the system flow chart of the non-overlapping field-of-view target matching method based on the modified weighted bipartite graph. The matching problem is described within a maximum a posteriori probability framework; in constructing this framework, both the target observation model and the spatio-temporal constraints of the surveillance network are considered. The invention extracts the target's main color features and spatial texture features, constructing a target observation model that is highly robust to illumination differences and environmental changes. Under the assumption that the surveillance network topology is known, the invention obtains the network's spatio-temporal constraints from the topology. According to these constraints, the topology is partitioned into several minimal units following the independence and minimization principles; for any target that disappears within a minimal unit, its reappearance is still located within that unit. For the target matching problem within any minimal unit, Bayesian inference is used to express it as a maximum a posteriori probability problem. The invention uses graph theory to solve the MAP problem, transforming it into a maximum-weight matching problem on a weighted bipartite graph, and thereby obtains the matching relations of targets across non-overlapping views. To address the ease with which ordinary weighted bipartite graph construction introduces incorrect matches, the invention proposes a modified construction method based on an adaptive threshold, so that incorrect matches are avoided as far as possible. To address the excessive computational cost of the traditional KM method on large-scale weighted bipartite graph matching problems, a method based on MH sampling is proposed to approximately compute the maximum-weight matching, thereby obtaining the non-overlapping field-of-view target matching relations.
Specific operating steps of the present invention
1) Construct the target observation model
Mature feature extraction techniques are used to extract target features, from which the target observation model is constructed. Considering that environmental factors such as illumination changes and background differences under non-overlapping fields of view affect target imaging, and thereby reduce the discriminative power of the observation model, the present invention combines the target's main color features and spatial texture features to construct an observation model that is relatively robust to environmental variation. The main color features describe the target's macroscopic information: the invention uses K-means clustering to extract the target's main colors and uses the normalized RGB distance to compute the similarity between main colors, improving the robustness of the main color features to illumination changes. The spatial texture features describe the target's detailed information: the invention divides the target into blocks and, within each sub-block, counts the number of pixels satisfying a gradient condition along different directions, yielding spatial texture features represented as a histogram. A weighting method fuses the main color features and spatial texture features into the target observation model. A schematic of main color extraction is shown in Fig. 2(a), where the left side of each sub-figure is the detected target and the right side shows the 15 extracted main colors; a schematic of spatial texture extraction is shown in Fig. 2(b), where the left side of each sub-figure is the detected target, the middle shows the gradient image and its blocks, and the right side is the histogram representing the spatial texture features.
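The main-color pipeline described above can be sketched as follows. This is a minimal illustration rather than the patent's exact implementation: the pixel data, the number of clusters, and the use of chromaticity (brightness-normalized RGB) coordinates for the color distance are all assumptions made for demonstration.

```python
import random

def kmeans_colors(pixels, k=3, iters=20, seed=0):
    """Cluster RGB pixels into k main colors with plain Lloyd's K-means.
    Returns (centers, weights): each center is an (r, g, b) tuple, each
    weight the fraction of pixels assigned to that center."""
    rng = random.Random(seed)
    centers = rng.sample(pixels, k)

    def nearest(p):
        return min(range(k),
                   key=lambda c: sum((p[d] - centers[c][d]) ** 2 for d in range(3)))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            clusters[nearest(p)].append(p)
        for j, cl in enumerate(clusters):
            if cl:  # keep old center if a cluster empties out
                centers[j] = tuple(sum(p[d] for p in cl) / len(cl) for d in range(3))
    counts = [0] * k
    for p in pixels:
        counts[nearest(p)] += 1
    weights = [c / len(pixels) for c in counts]
    return centers, weights

def normalized_rgb_distance(c1, c2):
    """Distance in normalized-rgb (chromaticity) space: each channel is
    divided by r+g+b, which discards overall brightness and is therefore
    less sensitive to illumination changes."""
    def norm(c):
        s = sum(c) or 1.0
        return tuple(v / s for v in c)
    a, b = norm(c1), norm(c2)
    return sum((a[d] - b[d]) ** 2 for d in range(3)) ** 0.5
```

Note that a dark red and a bright red patch of the same chromaticity have near-zero normalized-RGB distance, which is exactly the illumination robustness the main color feature is after.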
2) Determine the spatio-temporal constraints of the surveillance network
Under the assumption that the surveillance network topology is known, the invention obtains the network's spatio-temporal constraint information from the topology. The spatio-temporal constraints comprise the reachability of nodes in the network and the average transit time between them; node reachability can be determined from the connecting edges in the topology. If a connecting edge exists between two nodes of the topology, the monitoring regions corresponding to those two nodes are mutually reachable. For two connected nodes in the topology, the transit-time distribution is usually modeled by a single Gaussian, and the average transit time between the corresponding network nodes is the mean of that Gaussian. A schematic of the surveillance network topology used by the invention is shown in Fig. 3, where black arrows indicate connectivity between topology nodes.
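A minimal way to represent these constraints is a map from directed node pairs to fitted transit-time Gaussians; the node ids and (mu, sigma) values below are illustrative assumptions, not read off Fig. 3.

```python
# Hypothetical topology: keys are directed (from_node, to_node) pairs; each
# value is the (mu, sigma) of the single-Gaussian transit-time model fitted
# for that edge. Node ids and parameters are illustrative only.
topology = {
    (4, 9): (12.0, 2.0),
    (9, 4): (12.5, 2.2),
    (3, 8): (30.0, 5.0),
}

def reachable(a, b):
    """Two monitoring regions are reachable iff a connecting edge exists."""
    return (a, b) in topology

def mean_transit_time(a, b):
    """Average passing time between two connected nodes = the Gaussian mean."""
    mu, _sigma = topology[(a, b)]
    return mu
```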
3) Partition the minimal spatio-temporal units
In the non-overlapping field-of-view target matching problem, the same target may appear at multiple places and at multiple times, which introduces considerable uncertainty into the matching problem. The present invention proposes the concept of the minimal spatio-temporal unit: within any such unit, a target corresponding to a disappearing observation can only reappear within the spatial range determined by that unit, and the time of reappearance must lie within the time range determined by that unit. Partitioning into minimal spatio-temporal units decomposes the matching problem over the entire surveillance network and all time periods into independent pairwise matching problems between observations, eliminating the uncertainty in target matching to the greatest possible extent. The partition must follow two principles: the completeness principle and the minimization principle.
Suppose an independent unit U contains k nodes, U = {node_i | i = 1, 2, …, k}. The partition of U into minimal spatial units must satisfy the following two principles:
① Completeness principle: for a target that disappears from any node node_i in unit U, the node node_j at which it reappears satisfies node_j ∈ U; likewise, for a target that appears at any node node_j in unit U, the node at which it last disappeared satisfies node_i ∈ U.
② Minimization principle: subject to completeness, each unit contains the smallest possible number of nodes.
The completeness and minimization principles for partitioning minimal temporal units are analogous to those for minimal spatial units.
Fig. 4 is a schematic diagram of the partition into minimal spatial units in the present invention. The partition proceeds as follows: first remove the dotted connections in the figure, turning the original connected graph into a disconnected one; each connected subgraph then represents one minimal spatial unit. Since every unit is a connected graph, for any node in a unit, the nodes connected to it necessarily remain within that unit, which satisfies the completeness principle of unit partition. If any node were separated out of a unit, then for the other nodes connected to it, a connected node would no longer be in the same unit, violating completeness. Hence no node within a unit can be separated out, which satisfies the minimization principle. For the topology shown in Fig. 4, two independent units are obtained: {3, 8, 10, 11, 14} and {4, 7, 9}.
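The partition just described amounts to computing the connected components of the topology after the weak (dotted) edges are dropped. The sketch below is a plain connected-components pass; the edge list is hypothetical, chosen only so that it reproduces the two example units given in the text:

```python
def connected_components(nodes, edges):
    """Return the connected components of an undirected graph as sets."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:            # iterative DFS over one component
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps

nodes = [3, 4, 7, 8, 9, 10, 11, 14]
# hypothetical surviving edges after the dotted connections are removed
edges = [(3, 8), (8, 10), (10, 11), (11, 14), (4, 7), (7, 9)]
units = connected_components(nodes, edges)
```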
Fig. 5 is a schematic diagram of the partition into minimal temporal units in the present invention. The partition proceeds as follows: first, determine the disappearing and appearing observations in the figure. Suppose observation O_4^1 denotes the first observation detected by node 4 at time t1; it is split into O_4^1- and O_4^1+, placed on the two sides of the graph. After all observations have been added in the same way, the connectivity between the different observations is listed using the temporal reachability between topology nodes, and each connected subspace is taken as one minimal temporal unit. Taking Fig. 5 as an example, the two minimal temporal units are {t1-t1 : t1-t4} and {t2-t5 : t6-t8}.
The connectivity between observations is estimated from the temporal reachability of the topology nodes. Suppose the transit-time distribution between node 4 and node 9 follows N(μ, σ²); then for observations O_4^1- and O_9^1+, connectivity can be defined as follows:

connect(O_4^1-, O_9^1+) = 1 if |t − μ| ≤ 3σ, and 0 otherwise,

where t denotes the time taken by the target to pass between the nodes, and the threshold 3σ follows from the 3σ rule of the Gaussian distribution.
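The 3σ connectivity test can be written directly; the function below is a sketch of the rule as described, with the parameter values in the usage left as illustrative assumptions:

```python
def connected_3sigma(t_disappear, t_appear, mu, sigma):
    """An observation pair is linkable when the observed transit time lies
    within 3 sigma of the learned mean transit time (the 3-sigma rule
    covers about 99.7% of a Gaussian)."""
    transit = t_appear - t_disappear
    return abs(transit - mu) <= 3.0 * sigma
```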
4) Construct the maximum a posteriori probability problem
Non-overlapping field-of-view target matching fuses the target observation model with the spatio-temporal constraints of the surveillance network through a specific data fusion method, so as to obtain accurate target matching information. Bayesian inference is a commonly used data fusion method, and the present invention uses Bayesian inference to express the non-overlapping field-of-view target matching problem as a maximum a posteriori probability problem. Suppose that within some minimal spatio-temporal unit, M_i disappearing observations are detected, written O_i = {O_i^1, O_i^2, …, O_i^{M_i}}, where O_i^m denotes the information of the m-th target captured by the i-th camera in that unit, comprising the target's observation model O_i^m(app) and its spatio-temporal relation O_i^m(st). Since a target's observation model is independent of when and where the target appears in the surveillance network, O_i^m(app) and O_i^m(st) may be regarded as mutually independent. Similarly, N_j appearing observations are detected, written O_j = {O_j^1, O_j^2, …, O_j^{N_j}}. Let a_{m,n}^{i,j} denote that the observation pair (O_i^m, O_j^n) is associated. The non-overlapping field-of-view target matching problem can then be expressed as finding a matching set A satisfying the following conditions:
① a_{m,n}^{i,j} ∈ A if and only if O_i^m and O_j^n belong to the same target;
② an observation has at most one predecessor and at most one successor, i.e. any two distinct associations in A share no observation.
Suppose A = {a_{m,n}^{i,j}} is a solution of the target matching problem, and all matches are mutually independent. The target matching problem can then be expressed as:

P(A | O_i, O_j) = ∏_{a_{m,n}^{i,j} ∈ A} p(a_{m,n}^{i,j} | O_i^m, O_j^n)

where p(a_{m,n}^{i,j} | O_i^m, O_j^n) denotes the probability that the match a_{m,n}^{i,j} holds once the observations O_i^m and O_j^n have been detected. Bayesian inference gives:

p(a_{m,n}^{i,j} | O_i^m, O_j^n) = p(O_i^m, O_j^n | a_{m,n}^{i,j}) · P(a_{m,n}^{i,j}) / p(O_i^m, O_j^n)
The target's observation model O_i^m(app) and spatio-temporal constraint information O_i^m(st) can be introduced into the target matching problem through the definition:

p(O_i^m, O_j^n | a_{m,n}^{i,j}) = p(O_i^m(app), O_j^n(app) | a_{m,n}^{i,j}) · p(O_i^m(st), O_j^n(st) | a_{m,n}^{i,j})

where p(O_i^m(app), O_j^n(app) | a_{m,n}^{i,j}) denotes the observation-model matching rate of the targets, p(O_i^m(st), O_j^n(st) | a_{m,n}^{i,j}) denotes the spatio-temporal constraint information of the topology, i.e. the transition probability of the target from camera C_i to camera C_j, and p(O_i^m, O_j^n) is represented by a constant scale factor (assuming target appearances are uniformly distributed).
The non-overlapping field-of-view target matching problem is then to find the solution A* that maximizes the posterior probability:

A* = argmax_A ∏_{a_{m,n}^{i,j} ∈ A} p(a_{m,n}^{i,j} | O_i^m, O_j^n)
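Under this decomposition, the (unnormalized) posterior of a single candidate association is the product of an appearance term and a spatio-temporal term, and a candidate matching set scores as the product over its pairs. A sketch, under the assumption that the spatio-temporal term is a Gaussian transit-time density and with the constant scale factor dropped:

```python
import math

def association_weight(app_similarity, transit, mu, sigma):
    """Unnormalized p(a | O_i^m, O_j^n): the observation-model matching rate
    times the Gaussian transit-time likelihood. The constant scale factor
    from the uniform appearance prior is dropped."""
    st = math.exp(-0.5 * ((transit - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
    return app_similarity * st

def matching_posterior(pairs):
    """Product over an association set A, per the independence assumption.
    `pairs` is an iterable of (app_similarity, transit, mu, sigma) tuples."""
    p = 1.0
    for args in pairs:
        p *= association_weight(*args)
    return p
```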
5) Compute the adaptive threshold
In the traditional weighted bipartite graph construction, connecting edges are added to the bipartite graph based solely on the spatio-temporal constraints of the topology, with edge weights given by the similarity of the observation pairs to be matched. However, some incorrect observation pairs may also satisfy the spatio-temporal constraints, and thus get added to the weighted bipartite graph, ultimately causing association errors. The present invention proposes an adaptive threshold to control the addition of connecting edges to the weighted bipartite graph; the threshold must satisfy the following two criteria:
① If the similarities of the observations to be matched are generally large, environmental change has little effect on target matching, and the threshold should be raised so as to exclude incorrect observation pairs; conversely, if the similarities are generally small, the threshold should be lowered so as to avoid excluding correct matching relations.
② If the number of disappearing observations in a minimal unit differs from the number of appearing observations, a false detection or a blind-zone substitution may have occurred, and the threshold should be raised so as to exclude non-ideal observation pairs.
基于上述两条原则,本发明提出的自适应阈值的计算过程如下所述:对于任意的消失观测值,计算其与出现观测值的相似度,并将最大相似度添加到集合U中。对每个消失观测值进行类似处理后,将U降序排列得到一个包含n个元素的集合。假设正确的观测值对大于总观测值对数目的一半(误检测或盲区替换的发生概率都比较小),取出U中前一半元素{u0,u1,…,um},动态阈值Φ定义如下:Based on the above two principles, the calculation process of the adaptive threshold proposed by the present invention is as follows: for any disappearing observed value, calculate its similarity with the appearing observed value, and add the maximum similarity to the set U. After similar processing for each vanishing observed value, arrange U in descending order to obtain a set containing n elements. Assuming that the correct observation value pair is more than half of the total number of observation value pairs (the probability of false detection or blind area replacement is relatively small), take out the first half of elements in U {u 0 ,u 1 ,…,u m }, dynamic threshold Φ It is defined as follows:
where ū denotes the mean of the selected elements and represents the influence of environmental factors on the dynamic threshold, n denotes the number of vanishing-target observations in the minimal unit, and Pa is an adaptive parameter used to adjust the threshold: if the number of vanishing-target observations in the minimal unit differs from the number of appearing-target observations, Pa is increased to raise the threshold.
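The threshold computation described above can be sketched as follows. The printed formula for Φ is not reproduced in this text, so the final combination of the mean ū with the adaptive parameter Pa (here a simple sum) is an illustrative assumption, and all function and parameter names are hypothetical.

```python
# Sketch of the adaptive-threshold computation. The exact formula for the
# threshold is not reproduced in the text above, so combining the mean with
# the adaptive parameter p_a by addition is an illustrative assumption.

def adaptive_threshold(vanished, appeared, similarity, p_a=0.0):
    """vanished/appeared: lists of observations; similarity: score in [0, 1].

    p_a is the adaptive parameter: raise it when the counts of vanishing and
    appearing observations disagree, to make the threshold stricter.
    """
    # For each vanishing observation, keep its best similarity to any appearing one.
    u = sorted((max(similarity(x, v) for v in appeared) for x in vanished),
               reverse=True)
    n = len(u)
    # Assume correct pairs outnumber wrong ones: use only the top half of U.
    selected = u[: max(1, n // 2)]
    u_bar = sum(selected) / len(selected)  # environmental influence on the threshold
    return u_bar + p_a  # assumed combination: mean plus adaptive offset
```

Raising `p_a` when the vanishing and appearing counts disagree reproduces criterion ②, while tying the base value to ū reproduces criterion ①.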
6) Constructing the modified weighted bipartite graph
Under the ideal condition that the vanishing and appearing observations within a spatio-temporal minimal unit correspond one to one, the maximum a posteriori probability problem is converted into a weighted bipartite graph as follows. Within a given minimal unit, the vanishing observations and the appearing observations form the two vertex sets of the weighted bipartite graph, denoted X = {x1, x2, …, xm} and V = {v1, v2, …, vm}. The spatio-temporal constraints of the monitoring network determine the connectivity of each observation pair xi–vj, and thus the edges of the graph. For each connecting edge xi–vj, the observation-model matching rate between xi and vj is computed and used as the edge weight. In this way the original maximum a posteriori probability problem is transformed into a weighted-bipartite-graph problem. In constructing the modified weighted bipartite graph proposed by the present invention, observation pairs satisfying the spatio-temporal constraints are not simply added to the graph; instead, the adaptive threshold Φ is computed by the method in 1), and only observation pairs whose observation-model matching rate is ≥ Φ are added, yielding the modified weighted bipartite graph. A schematic of the modified weighted bipartite graph constructed by the present invention is shown in Figure 6: the graph contains 4 pairs of vanishing and appearing observations, the black lines are the edges of the weighted bipartite graph, and the edge weight wij is the observation-model matching rate between xi and vj.
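A minimal sketch of this construction, assuming the spatio-temporal connectivity test and the observation-model matching rate are available as callables; the function and parameter names are illustrative, not taken from the patent.

```python
# Sketch of step 6): build the modified weighted bipartite graph. Edges must
# satisfy both the spatio-temporal constraint and the adaptive threshold phi.
# The edge set is stored as a plain dict {(i, j): weight}; names are illustrative.

def build_modified_bigraph(vanished, appeared, connected, match_rate, phi):
    """Return {(i, j): weight} over index pairs of vanished x appeared."""
    edges = {}
    for i, x in enumerate(vanished):
        for j, v in enumerate(appeared):
            if not connected(x, v):   # spatio-temporal constraint of the network
                continue
            w = match_rate(x, v)      # observation-model matching rate
            if w >= phi:              # modification: keep only edges with w >= phi
                edges[(i, j)] = w
    return edges
```

The only difference from the traditional construction is the final comparison against `phi`, which removes plausible-but-weak edges before matching.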
7) Approximating the maximum-weight matching of the weighted bipartite graph with the MH method
To compute the maximum-weight matching of the bipartite graph with the Metropolis–Hastings (MH) method, the weighted-bipartite-graph problem must first be cast into the form the MH method requires, i.e., a target state x, a "proposal function" Q(x, ·), and a probability distribution function P(x). For the weighted bipartite graph G = {U, V, E}, the vertex set U is selected for matching. Let x denote the current state, x ∈ {0, 1}^|E|, where xij encodes the matching relation of edge (ui, vj): xij = 1 means ui is matched with vj, i.e., there is a connecting edge between them; conversely, xij = 0 means ui is not matched with vj. Let Q(x'|xt) denote the "proposal function", i.e., the rule for deriving a candidate next state x' from the current state xt. Since the problem to be solved here is weighted bipartite matching, the present invention defines the "proposal function" as randomly selecting two pairs of observations and exchanging their connections: given ui–vj and um–vn with xij = 1 and xmn = 1 in the current state, the proposal yields xin = 1 and xmj = 1, while the connections of all other vertices remain unchanged. When computing the acceptance probability, the present invention considers both the "proposal function" Q(x'|xt) and the probability distribution function P(x), letting f(x') = Q(x'|xt)P(x'). f(x) is defined as follows:
① If the two randomly selected pairs share a vertex (i.e., two vertices on one side connect to the same vertex on the other side), f(x) = 0;
② If the two randomly selected pairs share no vertex, f(x) equals the weighted value of the current state, i.e., the sum of the weights of the currently matched edges.
In the actual computation of the weighted value of the current state in case ②, only the effect on the total weight of the two observation pairs whose connections were exchanged needs to be considered. If a selected pair of vertices cannot possibly be connected, its weight is set to −1, penalizing the erroneous connection. Method 1 gives the MH-sampling-based approximation procedure for maximum-weight bipartite matching.
Method 1 (maximum-weight bipartite matching based on MH sampling approximation)
1: Initialize: x0, t = 0, f(xt) = 0;
2: loop
3:   Sample x' from Q(x'|xt);
4:   Sample U from U(0, 1);
5:   Calculate the acceptance ratio α(xt|x') = min{1, f(x')/f(xt)};
6:   if U ≤ α(xt|x') then
7:     xt+1 = x';
8:   else
9:     xt+1 = xt;
10:  end if
11:  t = t + 1;
12: end loop
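Method 1 can be sketched in runnable form as follows. Because the patent's acceptance-ratio formula is shown only as an image, the standard Metropolis choice min{1, exp(f(x') − f(xt))} is assumed here; exponentiating the weight difference keeps the ratio well defined when impossible edges carry weight −1. All names and defaults are illustrative.

```python
import math
import random

# Runnable sketch of Method 1: approximate the maximum-weight bipartite matching
# by MH sampling over permutations. The proposal picks two matched pairs and
# swaps their connections; impossible edges are penalized with weight -1, as in
# the text. The exp-based acceptance ratio is an implementation assumption.

def mh_max_weight_matching(weights, n, iters=20000, seed=0):
    """weights: dict {(i, j): w} of allowed edges; n: vertices per side.
    A state is a permutation: left vertex i is matched to right vertex perm[i]."""
    rng = random.Random(seed)

    def w(i, j):
        return weights.get((i, j), -1.0)  # penalize impossible connections

    def f(perm):
        return sum(w(i, perm[i]) for i in range(n))  # weighted value of the state

    perm = list(range(n))                 # initial matching x0
    best, best_f = perm[:], f(perm)
    for _ in range(iters):
        # Proposal Q(x'|xt): exchange the connections of two matched pairs.
        i, m = rng.sample(range(n), 2)
        cand = perm[:]
        cand[i], cand[m] = cand[m], cand[i]
        # Metropolis acceptance: only the two swapped pairs change f.
        if rng.random() < min(1.0, math.exp(f(cand) - f(perm))):
            perm = cand
        if f(perm) > best_f:              # track the best matching visited
            best, best_f = perm[:], f(perm)
    return best, best_f
```

Because each proposal touches only two pairs, the weight change can also be computed incrementally, which is exactly the shortcut the text describes for case ②.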
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410305768.XA CN104065932B (en) | 2014-06-30 | 2014-06-30 | A kind of non-overlapping visual field target matching method based on amendment weighting bigraph (bipartite graph) |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104065932A CN104065932A (en) | 2014-09-24 |
CN104065932B true CN104065932B (en) | 2019-08-13 |
Family
ID=51553437
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11765562B2 (en) | 2021-10-11 | 2023-09-19 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for matching objects in collaborative perception messages |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106448138B (en) * | 2016-11-09 | 2019-01-25 | 中国科学技术大学苏州研究院 | Optimal multi-vehicle scheduling method for taxi service system based on active allocation |
CN107689054B (en) * | 2017-08-24 | 2020-09-22 | 北京航空航天大学 | Multi-camera topological connectivity graph establishing and cross-camera target tracking method |
CN107705327B (en) * | 2017-08-29 | 2020-08-11 | 电子科技大学 | Candidate object extraction method for spatiotemporal model of multi-camera network |
CN108717714B (en) * | 2018-05-23 | 2021-06-11 | 曜科智能科技(上海)有限公司 | Multi-camera calibration method, calibration system, storage medium and electronic device |
CN113450872B (en) * | 2021-07-02 | 2022-12-02 | 南昌大学 | Method for predicting phosphorylation site specific kinase |
US12062282B2 (en) | 2021-10-11 | 2024-08-13 | Toyota Motor Engineering & Manufacturing North America, Inc. | Systems and methods for matching objects in collaborative perception messages using multiple adaptive thresholds |
CN116596238A (en) * | 2023-05-16 | 2023-08-15 | 上海理工大学 | A Passenger Getting on and Off Matching and OD Demand Estimation Method Based on Bus Door Video |
CN119559220A (en) * | 2024-11-11 | 2025-03-04 | 合肥极目行远科技有限公司 | Target detection and tracking method, system, device and medium based on laser point cloud |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101170683A (en) * | 2006-10-27 | 2008-04-30 | 松下电工株式会社 | Target moving object tracking device |
CN102651128A (en) * | 2011-02-24 | 2012-08-29 | 南京大学 | Image set partitioning method based on sampling |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1722331B1 (en) * | 2004-03-03 | 2010-12-01 | NEC Corporation | Image similarity calculation system, image search system, image similarity calculation method, and image similarity calculation program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |