CN112115780A - Semi-supervised pedestrian re-identification method based on deep multi-model cooperation - Google Patents
Semi-supervised pedestrian re-identification method based on deep multi-model cooperation
- Publication number
- CN112115780A (application CN202010803514.6A)
- Authority
- CN
- China
- Prior art keywords
- pseudo
- training
- data
- labels
- deep
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Molecular Biology (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Biomedical Technology (AREA)
- Mathematical Physics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a semi-supervised pedestrian re-identification method based on deep multi-model collaboration, comprising the steps of: 1) fine-tuning several deep neural networks pre-trained on ImageNet with a portion of labeled training samples, and taking these models as initial models; 2) using these initial models to extract features from the unlabeled training samples, then applying a proposed adaptive-weight multi-view clustering method to generate pseudo-labels for the unlabeled samples; the labeled samples and the pseudo-labeled samples are combined into an overall training set, which is used to fine-tune several structurally heterogeneous deep neural networks; 3) alternating pseudo-label generation and deep network training iteratively until the pseudo-labels no longer change.
Description
Technical Field
The invention belongs to the field of image feature representation and semi-supervised learning, and in particular relates to a semi-supervised pedestrian re-identification method based on deep multi-model collaboration.
Background Art
With the continuous development of the social economy and advances in computer vision technology, intelligent security and smart-city construction have been steadily promoted, and the intelligent processing of video data from different scenes has become a problem of wide concern in the computer vision community. Pedestrian re-identification is a key technology for realizing intelligent security and smart-city strategies: given one or more images of a pedestrian in one scene, pedestrian re-identification is required to find all images of the matching pedestrian in other, non-adjacent scenes. Across different scenes, variations in illumination, pedestrian pose, image background, and imaging quality often cause the intra-class variation of a single pedestrian to exceed the inter-class variation between different pedestrians, which poses a serious challenge to the re-identification task.
In recent years, benefiting from the powerful performance of deep convolutional neural networks, pedestrian re-identification has achieved excellent recognition accuracy on large-scale datasets. However, most methods are built on fully supervised learning. Because large amounts of manually labeled training data are required, the practical value of supervised learning in real environments and industrial scenarios is inherently limited. As the demand for intelligent security and smart cities grows increasingly urgent, implementing and applying existing methods in real scenarios has become a widespread concern. In practice, it is very difficult to label large amounts of data in every application environment (such as large shopping malls or urban community streets); one can imagine how hard it is for annotators to search video recorded by a set of cameras with different parameters, at different times and in different places, and locate all appearances of the same pedestrian. Therefore, the first problem pedestrian re-identification must face in real scenarios is the severe lack of labeled data. To overcome the heavy reliance of fully supervised methods on large-scale labeled data, some semi-supervised and unsupervised learning methods have emerged. Semi-supervised pedestrian re-identification combines a small amount of labeled data with a large amount of unlabeled data, maximizing the use of supervised information while fully mining the unsupervised information, to achieve the goal of pedestrian retrieval.
Summary of the Invention
The purpose of the present invention is to provide a semi-supervised pedestrian re-identification method based on deep multi-model collaboration, in view of the above-mentioned shortcomings of the prior art.
The present invention is realized by the following technical scheme:
A semi-supervised pedestrian re-identification method based on deep multi-model collaboration, comprising the following steps:
1) Fine-tune several deep neural networks pre-trained on ImageNet with a portion of labeled training samples, and take these models as initial models;
2) Use these initial models to extract features from the unlabeled training samples, then apply a proposed adaptive-weight multi-view clustering method to generate pseudo-labels for the unlabeled samples; combine the labeled samples and the pseudo-labeled samples into an overall training set and use it to fine-tune several structurally heterogeneous deep neural networks;
3) Alternate pseudo-label generation and deep network training iteratively until the pseudo-labels no longer change.
A further improvement of the present invention is that step 1) is implemented as follows:
101) First, train several neural networks with different structures as feature extractors under multiple views;
102) Cluster the features of the multiple heterogeneous neural networks with the proposed adaptive-weight multi-view clustering method to obtain pseudo-labels for the unlabeled data;
103) Fine-tune the multiple heterogeneous neural networks with the labeled data and the pseudo-labeled unlabeled data; the updates of the deep networks and of the pseudo-labels are performed alternately.
A further improvement of the present invention is that step 2) is implemented as follows:
201) Train several heterogeneous neural networks with the portion of labeled data to obtain initial parameters;
202) Then use the deep neural networks trained in the first step to extract features from the unlabeled data, and cluster these features with the adaptive-weight multi-view clustering method to obtain initial pseudo-labels for the unlabeled data;
203) Merge the pseudo-labeled unlabeled data obtained in the second step with the labeled data to train the deep neural networks again; network training and clustering alternate until the pseudo-labels no longer change, yielding the final pseudo-labels.
The present invention has at least the following beneficial technical effects:
1. In the process of training the deep neural networks, the present invention only needs a portion of accurately labeled data, and then assists network training by assigning pseudo-labels to a large amount of unlabeled data.
2. The present invention uses multiple deep neural networks to extract features from the unlabeled data, exploiting the diversity of the features of the multiple deep networks, and clusters the heterogeneous network features with the proposed adaptive-weight multi-view clustering method, thereby obtaining pseudo-labels of better accuracy.
Brief Description of the Drawings
Fig. 1 is a flow chart of the framework of the present invention.
Detailed Description of the Embodiments
The present invention is further described below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, assuming there are M views of features, υ = 1, 2, ..., M, the algorithm can be written in the following form:
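A plausible form of this joint objective, assembled from the symbol definitions that follow (the names L_DNN and L_MVC and the plain summation of the two parts are introduced here for illustration and are not verbatim from the patent), is:

\min_{\{w_\upsilon\},\, y_u,\, B}\; \sum_{\upsilon=1}^{M}\Big[\sum_{i=1}^{N_l} L_{\mathrm{DNN}}\big(x_l^{i}, y_l^{i}; w_\upsilon\big) + \sum_{j=1}^{N_u} L_{\mathrm{DNN}}\big(x_u^{j}, y_u^{j}; w_\upsilon\big)\Big] + L_{\mathrm{MVC}}\big(\{\Phi_\upsilon\}, B, \{C_\upsilon\}, \{\alpha_\upsilon\}\big)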
where x_l and x_u denote the labeled and unlabeled training samples, respectively; N_l and N_u denote the numbers of labeled and unlabeled training samples; w_υ denotes the parameters of the υ-th deep neural network; y_l and y_u denote the labels of the labeled data and the pseudo-labels of the unlabeled data, respectively; the remaining two terms are the loss function of the deep neural networks and the multi-view pseudo-label estimation loss function.
The loss function of the deep neural networks is based on two different tasks, namely the identification task and the verification task, and can be written in the following form:
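A minimal form consistent with this description, assuming the two task losses are simply summed (a weighted combination is equally possible), is:

L_{\mathrm{DNN}} = L_{\mathrm{id}} + L_{\mathrm{verif}}

where L_id and L_verif denote the identification loss and the verification loss defined below.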
For basic discriminative feature learning, the identification task is treated as a multi-class classification task. It can be expressed as:
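Assuming the standard softmax cross-entropy over the identity classes (a common choice; the exact expression in the patent is not reproduced here), the identification loss reads:

L_{\mathrm{id}} = -\sum_{k} p_k \log \hat{p}_k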
where p̂ denotes the predicted probability and p the target probability.
For the verification part, the contrastive loss function is not adopted, since the contrastive loss forces samples of the same class to be as close together as possible, which may make the deep neural networks prone to overfitting when each class in the training set has only a few samples. The verification loss used here is a binary logistic regression loss defined on an image feature pair (φ(x_a, w), φ(x_b, w)). It can be expressed as:
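A plausible instantiation of this binary logistic regression loss, using the targets q_1 and q_2 defined next, is:

L_{\mathrm{verif}} = -\big(q_1 \log \hat{q}_1 + q_2 \log \hat{q}_2\big)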
where q̂ is the predicted probability. If the image feature pair belongs to the same person, then q_1 = 1 and q_2 = 0; otherwise q_1 = 0 and q_2 = 1.
The multi-view pseudo-label estimation part obtains pseudo-labels for the unlabeled data by clustering the features that the multiple heterogeneous deep neural networks extract from the unlabeled data. The most straightforward approach is to concatenate the features of the multiple views into a single feature vector and then run a standard clustering algorithm. In that case, however, features from important views and from less important views are treated equally, so the clustering result is suboptimal. Ideally, the features of the different views should be clustered simultaneously and the per-view results combined into a final result. To this end, the multi-view pseudo-label estimation loss function of the present invention can be written in the following form:
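One standard adaptive-weight multi-view clustering objective consistent with the update rules derived below is given here as an assumption; the view-weight exponent γ and the L_{2,1} norm are inferred from the diagonal re-weighting matrix D_υ used later and are not stated verbatim in the patent:

L_{\mathrm{MVC}} = \min_{B,\{C_\upsilon\},\{\alpha_\upsilon\}} \sum_{\upsilon=1}^{M} (\alpha_\upsilon)^{\gamma}\, \big\| (\Phi_\upsilon)^{\mathsf T} - B (C_\upsilon)^{\mathsf T} \big\|_{2,1} \quad \text{s.t.}\; \sum_{\upsilon=1}^{M} \alpha_\upsilon = 1,\; \text{each row of } B \text{ is 1-of-}K_u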
where the unlabeled data are arranged into a matrix with one unlabeled sample per column; Φ_υ denotes the deep convolutional network features of the υ-th view; C_υ is the matrix of cluster centers under the υ-th view; B is the cluster indicator matrix, each row of which satisfies the 1-of-K_u form; K_u is the desired number of clusters; and α_υ is the weight factor of the υ-th view.
The optimization procedure of the method of the present invention is as follows:
The present invention optimizes the proposed model with an alternating iterative algorithm; the optimization steps are as follows:
Initialization: w_υ is initialized by training multiple deep convolutional networks of different structures on a small portion of labeled data; B is initialized by k-means clustering on a single view, and the weight factors are set to α_υ = 1/M.
Update B: B is updated by minimizing the following subproblem:
To optimize Eq. (6), it is rewritten as:
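Under the objective form assumed above, a plausible reading of Eq. (7) is:

J = \sum_{\upsilon=1}^{M} (\alpha_\upsilon)^{\gamma} H_\upsilon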
where
H_υ = Tr{(Φ_υ − C_υB^T) D_υ (Φ_υ − C_υB^T)^T},   (8)
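A definition of D_υ consistent with the L_{2,1}-norm re-weighting assumed above (a plausible reading of Eq. (9)) is the diagonal matrix with entries

(D_\upsilon)_{ii} = \frac{1}{2\,\lVert e_{(\upsilon)i} \rVert_2}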
where e_(υ)i is the i-th row of the following matrix:
E_υ = (Φ_υ)^T − B(C_υ)^T.   (10)
1) Fix the parameters B, D_υ, α_υ and update the cluster centers C_υ of each view: taking the derivative of J with respect to C_υ gives Eq. (11), and setting Eq. (11) to zero yields the closed-form expression for C_υ in Eq. (13).
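Plausible forms of Eqs. (11) and (13), written here as an assumption consistent with the quadratic form in Eq. (8), are:

\frac{\partial J}{\partial C_\upsilon} = -2 (\alpha_\upsilon)^{\gamma} \big( \Phi_\upsilon D_\upsilon B - C_\upsilon B^{\mathsf T} D_\upsilon B \big), \qquad C_\upsilon = \Phi_\upsilon D_\upsilon B \big( B^{\mathsf T} D_\upsilon B \big)^{-1}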
2) Fix the parameters C_υ, D_υ, α_υ and update the cluster indicator matrix B:
To optimize Eq. (14), fix i; the corresponding indicator vector b (a row of B) is obtained by minimizing the following problem:
where d_(υ)ii is the i-th diagonal element of the diagonal matrix D_υ and b satisfies the 1-of-K_u form, so Eq. (15) has K_u candidate solutions, each being one of the K_u canonical 1-of-K_u indicator vectors; specifically, an exhaustive search is performed to find the optimal solution of Eq. (15):
where k_u is given by:
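A plausible instantiation of Eqs. (15)–(17), assuming the objective decomposes over samples (the symbols φ_(υ)i for the i-th sample's feature in view υ, c_(υ)k for the k-th center of view υ, and e_k for the k-th canonical basis vector are introduced here for illustration), is:

\min_{b}\; \sum_{\upsilon=1}^{M} (\alpha_\upsilon)^{\gamma}\, d_{(\upsilon)ii}\, \lVert \phi_{(\upsilon)i} - C_\upsilon b \rVert_2^2 \;\; \text{s.t. } b \text{ is 1-of-}K_u, \qquad b = \mathbf{e}_{k_u}, \qquad k_u = \arg\min_{k} \sum_{\upsilon=1}^{M} (\alpha_\upsilon)^{\gamma}\, d_{(\upsilon)ii}\, \lVert \phi_{(\upsilon)i} - c_{(\upsilon)k} \rVert_2^2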
3) Fix the parameters C_υ, B, α_υ and update D_υ according to Eqs. (9) and (10).
4) Fix the parameters C_υ, B, D_υ and update α_υ.
To make Eq. (18) reach a local minimum, α_υ is expressed as follows:
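A plausible closed-form weight update (Eq. (19)), under the exponent-γ formulation assumed above, is:

\alpha_\upsilon = \frac{(\gamma H_\upsilon)^{1/(1-\gamma)}}{\sum_{\upsilon'=1}^{M} (\gamma H_{\upsilon'})^{1/(1-\gamma)}}

so that views with a smaller residual H_υ receive a larger weight.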
C_υ, B, D_υ, α_υ are iterated alternately, and the above process is repeated until Eq. (6) converges.
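A minimal NumPy sketch of this adaptive-weight multi-view clustering step is given below. It assumes the L_{2,1}/exponent-γ formulation sketched earlier; the function name, the default γ value, and the simplified update order are illustrative choices, not taken from the patent.

```python
import numpy as np

def adaptive_multiview_kmeans(features, n_clusters, gamma=2.0, n_iters=30, seed=0):
    """features: list of M arrays, each of shape (N, d_v) -- one matrix per view.
    Returns integer cluster indices of length N (a simplified sketch of the
    alternating updates described above)."""
    rng = np.random.default_rng(seed)
    M, N = len(features), features[0].shape[0]
    labels = rng.integers(0, n_clusters, size=N)     # crude initial assignment
    alpha = np.full(M, 1.0 / M)                      # alpha_v = 1/M
    for _ in range(n_iters):
        B = np.eye(n_clusters)[labels]               # (N, K) 1-of-K indicator rows
        centers, weights, H = [], [], np.zeros(M)
        for v, X in enumerate(features):
            # Plain per-cluster means, then L_{2,1}-style re-weighting D_v,
            # then weighted means (analogues of Eqs. (9), (10), (13)).
            counts = B.sum(axis=0) + 1e-12
            C = (B.T @ X) / counts[:, None]
            E = X - B @ C                             # residual rows e_(v)i
            d = 1.0 / (2.0 * np.linalg.norm(E, axis=1) + 1e-12)
            C = (B * d[:, None]).T @ X / ((B * d[:, None]).sum(axis=0)[:, None] + 1e-12)
            E = X - B @ C
            H[v] = np.sum(d * np.sum(E * E, axis=1))  # Tr{E^T D E}, cf. Eq. (8)
            centers.append(C)
            weights.append(d)
        # View-weight update (analogue of Eq. (19)); smaller residual -> larger weight.
        a = (gamma * H + 1e-12) ** (1.0 / (1.0 - gamma))
        alpha = a / a.sum()
        # Assignment update (analogue of Eqs. (15)-(17)): per-sample weighted argmin.
        cost = np.zeros((N, n_clusters))
        for v, X in enumerate(features):
            sq = ((X[:, None, :] - centers[v][None, :, :]) ** 2).sum(-1)  # (N, K)
            cost += (alpha[v] ** gamma) * weights[v][:, None] * sq
        new_labels = cost.argmin(axis=1)
        if np.array_equal(new_labels, labels):        # assignments stable -> converged
            break
        labels = new_labels
    return labels
```

In the method above, each entry of `features` would be the feature matrix that one fine-tuned network extracts from the unlabeled images, and the returned indices serve as the cluster labels k_u.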
Update y_u: this step updates the pseudo-labels of the unlabeled data. Once B is obtained, the cluster index k_u of each unlabeled sample is obtained. The total number of classes of the unlabeled samples is K_u, and the total number of classes of the labeled samples is K_l. y_u is expressed as:
y_u = k_u + K_l.   (20)
Update w_υ: the deep convolutional neural networks are trained with the labeled data together with the remaining data carrying pseudo-labels. w_υ is updated by minimizing the following:
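A plausible reading of Eq. (21), with the pseudo-labels y_u held fixed and the notation of the objective assumed earlier, is:

\min_{w_\upsilon}\; \sum_{i=1}^{N_l} L_{\mathrm{DNN}}\big(x_l^{i}, y_l^{i}; w_\upsilon\big) + \sum_{j=1}^{N_u} L_{\mathrm{DNN}}\big(x_u^{j}, y_u^{j}; w_\upsilon\big)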
For Eq. (21), w_υ is optimized by stochastic gradient descent.
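The overall alternation between pseudo-label estimation and network fine-tuning can be sketched as follows. The backbone choices, the helpers `finetune` and `extract_features`, and the stopping criterion are illustrative assumptions; only the structure (initialize on labeled data, cluster multi-view features, re-train on the merged set, stop when the pseudo-labels stabilize) follows the description above.

```python
# High-level sketch of the alternating training loop. `finetune` and
# `extract_features` are hypothetical helpers (fine-tune a backbone with the
# identification + verification losses; return an (N, d) feature matrix).
import numpy as np
import torchvision.models as models

def train_semi_supervised(labeled_imgs, labels, unlabeled_imgs, n_unlabeled_ids,
                          max_rounds=10):
    # Heterogeneous ImageNet-pretrained backbones (illustrative choices).
    backbones = [models.resnet50(weights="IMAGENET1K_V1"),
                 models.densenet121(weights="IMAGENET1K_V1"),
                 models.inception_v3(weights="IMAGENET1K_V1")]
    # Step 1: initialize each network on the labeled portion only.
    nets = [finetune(b, labeled_imgs, labels) for b in backbones]
    prev_pseudo = None
    for _ in range(max_rounds):
        # Step 2: multi-view features -> adaptive-weight clustering -> pseudo-labels.
        views = [extract_features(net, unlabeled_imgs) for net in nets]
        k_u = adaptive_multiview_kmeans(views, n_clusters=n_unlabeled_ids)
        K_l = int(labels.max()) + 1          # labels assumed to be 0 .. K_l - 1
        pseudo = k_u + K_l                   # Eq. (20): y_u = k_u + K_l
        # Step 3: re-train every network on labeled + pseudo-labeled data (SGD).
        all_imgs = list(labeled_imgs) + list(unlabeled_imgs)
        all_labels = np.concatenate([labels, pseudo])
        nets = [finetune(b, all_imgs, all_labels) for b in backbones]
        # Stop when the pseudo-labels no longer change.
        if prev_pseudo is not None and np.array_equal(pseudo, prev_pseudo):
            break
        prev_pseudo = pseudo
    return nets
```

Re-fine-tuning from the ImageNet-initialized backbones in every round, as sketched here, is one option; continuing from the previous round's weights is an equally reasonable design choice.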
The present invention proposes a semi-supervised pedestrian re-identification algorithm based on deep multi-model collaboration, which realizes feature learning on labeled data and pseudo-label estimation on unlabeled data within an end-to-end learning process. To improve the accuracy of pseudo-label estimation, the present invention proposes a weak-model collaboration learning strategy that can label more high-quality data and thereby improve the performance of feature learning.
Claims (3)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010803514.6A CN112115780A (en) | 2020-08-11 | 2020-08-11 | Semi-supervised pedestrian re-identification method based on deep multi-model cooperation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010803514.6A CN112115780A (en) | 2020-08-11 | 2020-08-11 | Semi-supervised pedestrian re-identification method based on deep multi-model cooperation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112115780A true CN112115780A (en) | 2020-12-22 |
Family
ID=73804030
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010803514.6A Pending CN112115780A (en) | 2020-08-11 | 2020-08-11 | Semi-supervised pedestrian re-identification method based on deep multi-model cooperation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112115780A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100077006A1 (en) * | 2008-09-22 | 2010-03-25 | University Of Ottawa | Re-identification risk in de-identified databases containing personal information |
CN111488760A (en) * | 2019-01-25 | 2020-08-04 | 复旦大学 | Few-shot person re-identification method based on deep multi-instance learning |
CN110555390A (en) * | 2019-08-09 | 2019-12-10 | 厦门市美亚柏科信息股份有限公司 | pedestrian re-identification method, device and medium based on semi-supervised training mode |
CN111274958A (en) * | 2020-01-20 | 2020-06-12 | 福州大学 | A pedestrian re-identification method and system for network parameter self-correction |
Non-Patent Citations (1)
Title |
---|
XIAOMENG XIN ET AL.: "Semi-supervised person re-identification using multi-view clustering", PATTERN RECOGNITION |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113255601A (en) * | 2021-06-29 | 2021-08-13 | 深圳市安软科技股份有限公司 | Training method and system for vehicle weight recognition model and related equipment |
CN113326826A (en) * | 2021-08-03 | 2021-08-31 | 新石器慧通(北京)科技有限公司 | Network model training method and device, electronic equipment and storage medium |
CN113903027A (en) * | 2021-10-26 | 2022-01-07 | 上海大参林医疗健康科技有限公司 | A picture classification device and method |
CN113903027B (en) * | 2021-10-26 | 2024-10-29 | 广州天宸健康科技有限公司 | Picture classification device and method |
CN114186615A (en) * | 2021-11-22 | 2022-03-15 | 浙江华是科技股份有限公司 | Semi-supervised online training method and device for ship detection and computer storage medium |
CN114186615B (en) * | 2021-11-22 | 2022-07-08 | 浙江华是科技股份有限公司 | Semi-supervised online training method and device for ship detection and computer storage medium |
CN115496131A (en) * | 2022-08-30 | 2022-12-20 | 北京华控智加科技有限公司 | Equipment health state classification method based on multiple pre-training neural networks |
CN115496131B (en) * | 2022-08-30 | 2023-06-13 | 北京华控智加科技有限公司 | Equipment health state classification method based on multiple pre-training neural networks |
CN116630880A (en) * | 2023-04-28 | 2023-08-22 | 苏州凌图科技有限公司 | A semi-supervised pedestrian re-identification method and system based on deep multi-model collaboration |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112115780A (en) | Semi-supervised pedestrian re-identification method based on deep multi-model cooperation | |
CN113326731B (en) | Cross-domain pedestrian re-identification method based on momentum network guidance | |
Wang et al. | RSNet: The search for remote sensing deep neural networks in recognition tasks | |
CN111461258B (en) | Remote sensing image scene classification method of coupling convolution neural network and graph convolution network | |
CN110909673B (en) | Pedestrian re-identification method based on natural language description | |
CN114283345B (en) | Small sample city remote sensing image information extraction method based on meta-learning and attention | |
CN106709461B (en) | Activity recognition method and device based on video | |
CN111860678A (en) | An unsupervised cross-domain person re-identification method based on clustering | |
CN112115781B (en) | Unsupervised pedestrian re-identification method based on anti-attack sample and multi-view clustering | |
CN104217214A (en) | Configurable convolutional neural network based red green blue-distance (RGB-D) figure behavior identification method | |
CN109858390A (en) | The Activity recognition method of human skeleton based on end-to-end space-time diagram learning neural network | |
CN107679491A (en) | A kind of 3D convolutional neural networks sign Language Recognition Methods for merging multi-modal data | |
CN111079847A (en) | Remote sensing image automatic labeling method based on deep learning | |
CN113139468B (en) | Video abstract generation method fusing local target features and global features | |
CN113034545A (en) | Vehicle tracking method based on CenterNet multi-target tracking algorithm | |
CN113469186A (en) | Cross-domain migration image segmentation method based on small amount of point labels | |
CN115205570B (en) | Unsupervised cross-domain target re-identification method based on comparative learning | |
CN110175551A (en) | A kind of sign Language Recognition Method | |
CN105701482A (en) | Face recognition algorithm configuration based on unbalance tag information fusion | |
CN112131961A (en) | Semi-supervised pedestrian re-identification method based on single sample | |
CN114187655B (en) | Unsupervised pedestrian re-recognition method based on joint training strategy | |
Yang et al. | Local label descriptor for example based semantic image labeling | |
CN116452862A (en) | Image classification method based on domain generalization learning | |
CN111695531B (en) | Cross-domain pedestrian re-identification method based on heterogeneous convolution network | |
CN114780767A (en) | A large-scale image retrieval method and system based on deep convolutional neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20201222 |