CN107154024A - Dimension self-adaption method for tracking target based on depth characteristic core correlation filter - Google Patents

Dimension self-adaption method for tracking target based on depth characteristic core correlation filter

Info

Publication number
CN107154024A
CN107154024A
Authority
CN
China
Prior art keywords
target
scale
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710355456.3A
Other languages
Chinese (zh)
Inventor
刘忠耿
练智超
濮柯佳
李杨
张伟
李敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN201710355456.3A priority Critical patent/CN107154024A/en
Publication of CN107154024A publication Critical patent/CN107154024A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4084Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/269Analysis of motion using gradient-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • G06T7/44Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Probability & Statistics with Applications (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a scale-adaptive target tracking method based on a deep-feature kernel correlation filter. The method comprises the following steps: an image is input into a pre-trained convolutional neural network and deep convolutional features are extracted; target tracking is performed with the trained model to estimate the position and scale of the target; a kernel correlation filter is trained according to the currently detected target position and scale; and the kernel correlation filter is updated with an adaptive high-confidence model update method. By extracting deep convolutional features and adopting an adaptive scale estimation method together with an adaptive high-confidence model update strategy, the invention improves the robustness of target tracking under complex scenes and appearance changes and handles target scale changes efficiently and accurately; moreover, owing to the adaptive high-confidence model update strategy, model tracking drift is reduced as far as possible.

Description

Scale-Adaptive Target Tracking Method Based on a Deep-Feature Kernel Correlation Filter

Technical Field

The invention relates to the technical field of computer vision, and in particular to a scale-adaptive target tracking method based on a deep-feature kernel correlation filter.

Background Art

In recent years, with the emergence of large-scale annotated datasets and the growth of computing power, deep learning methods, especially convolutional neural networks, have been successfully applied to computer vision tasks such as image classification, object detection, object recognition, and semantic segmentation, mainly owing to the powerful target representation capability of convolutional neural networks. Unlike traditional image features, deep convolutional features are learned from image data spanning thousands of categories, so high-level convolutional features encode the semantic attributes of the target and are well suited to image classification. However, because high-level convolutional features have low spatial resolution, they are ill-suited to localizing the target precisely; moreover, for lack of training data, it is difficult to train a deep model from scratch in the first few frames of tracking.

Recently, discriminative tracking methods based on correlation filters have attracted the interest of many researchers because they are both efficient and accurate. A correlation-filter tracker trains a filter online by regressing the input features to a target Gaussian distribution, and in subsequent frames locates the target at the peak of the filter's output response map. By exploiting the fast Fourier transform, correlation filters reduce computational complexity and greatly increase tracking speed. However, the kernelized correlation filter algorithm uses traditional histogram-of-oriented-gradients features, so the tracker drifts easily when the appearance of the target changes; in addition, the algorithm cannot estimate changes in target scale and updates its model by simple linear interpolation, so that when a tracking error occurs, the update mechanism eventually causes the tracker to drift.

Summary of the Invention

The purpose of the present invention is to provide a scale-adaptive target tracking method based on a deep-feature kernel correlation filter, so as to improve the robustness of target tracking under complex scenes and appearance changes, handle target scale changes efficiently and accurately, and reduce model tracking drift caused by erroneous detections as far as possible.

The technical solution that realizes the object of the present invention is a scale-adaptive target tracking method based on a deep-feature kernel correlation filter, comprising the following steps:

Step 1: input the initial position $p_0$ and scale $s_0$ of the target, and set the window size to 2.0 times the target's initial bounding box;

Step 2: according to the target position $p_{t-1}$ in frame $t-1$, obtain the target region $x_{t-1}$, whose size is the window size;

Step 3: extract the deep convolutional features of the target region $x_{t-1}$ and apply the fast Fourier transform to obtain the feature map $\hat{x}_{t-1}$, where $\hat{\cdot}$ denotes the discrete Fourier transform;

Step 4: compute the kernel autocorrelation $\hat{k}^{x_{t-1}x_{t-1}}$ from the feature map $\hat{x}_{t-1}$;

Step 5: train the position and scale correlation filters;

Step 6: according to the target position $p_{t-1}$ in frame $t-1$, obtain the candidate region $z_t$ of the target in frame $t$, whose size is the window size;

Step 7: extract the deep convolutional features of the candidate region $z_t$ and apply the fast Fourier transform to obtain the feature map $\hat{z}_t$, where $\hat{\cdot}$ denotes the discrete Fourier transform;

Step 8: compute the kernel cross-correlation $\hat{k}^{z_t x_{t-1}}$ between $\hat{z}_t$ and the feature map $\hat{x}_{t-1}$ of the previous frame;

Step 9: detect the positions of the maxima in the output maps of the position filter and the scale filter, respectively, to determine the position $p_t$ and scale $s_t$ of the target in the current frame;

Step 10: update the kernel correlation filters using the adaptive model update strategy.

Further, the deep convolutional features of steps 3 and 7 are extracted as follows (a code sketch follows this list):

(3.1) Preprocessing: scale the window region I to the 224×224 input size required by the convolutional neural network;

(3.2) Feature extraction: extract the feature maps of the 3rd, 4th, and 5th convolutional layers of the convolutional neural network;

(3.3) Bilinear interpolation: upsample the three extracted convolutional feature maps to the same size.
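As an illustration of steps (3.1)-(3.3), the following is a minimal Python sketch assuming a pre-trained VGG-19 backbone; the patent does not name the network, and the tapped layer indices, the 56×56 common feature size, and the helper name extract_deep_features are assumptions for illustration only.

```python
# Minimal sketch of (3.1)-(3.3), assuming a VGG-19 backbone (the patent only
# requires a pre-trained CNN). Layer indices and output size are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
MEAN = torch.tensor([0.485, 0.456, 0.406]).view(3, 1, 1)  # ImageNet stats
STD = torch.tensor([0.229, 0.224, 0.225]).view(3, 1, 1)

def extract_deep_features(window, out_size=(56, 56)):
    """window: float tensor (3, H, W) with values in [0, 1]."""
    # (3.1) preprocessing: normalize and resize to the 224x224 input size
    x = ((window - MEAN) / STD).unsqueeze(0)
    x = F.interpolate(x, size=(224, 224), mode="bilinear", align_corners=False)
    taps = {17: "conv3", 26: "conv4", 35: "conv5"}  # assumed ReLU indices
    feats = []
    with torch.no_grad():
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in taps:  # (3.2) tap the 3rd, 4th and 5th conv stages
                # (3.3) bilinear upsampling to a common spatial size
                feats.append(F.interpolate(x, size=out_size, mode="bilinear",
                                           align_corners=False).squeeze(0))
    return feats
```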

Further, the kernel autocorrelation $\hat{k}^{x_{t-1}x_{t-1}}$ of step 4 and the kernel cross-correlation $\hat{k}^{z_t x_{t-1}}$ of step 8 are computed as follows:

(4.1) A Gaussian kernel is used:

$$k(x, x') = \exp\!\left(-\frac{1}{\sigma^2}\,\lVert x - x'\rVert^2\right)$$

where $k(x,x')$ is the Gaussian kernel computed from the two feature maps $x$ and $x'$, $\exp(\cdot)$ is the exponential function, $\sigma$ is the standard deviation of the Gaussian function, taken as $\sigma = 0.5$, and $\lVert\cdot\rVert$ is the 2-norm of a vector or matrix;

(4.2) The kernel correlation is computed as:

$$k^{xx'} = \exp\!\left(-\frac{1}{\sigma^2}\Big(\lVert x\rVert^2 + \lVert x'\rVert^2 - 2\,\mathcal{F}^{-1}\big(\hat{x}^{*} \odot \hat{x}'\big)\Big)\right)$$

where $k^{xx'}$ is the kernel correlation of the feature maps $x$ and $x'$, $\exp(\cdot)$ is the exponential function, $\sigma$ is the standard deviation of the Gaussian function, taken as $\sigma = 0.5$, $\lVert\cdot\rVert$ is the 2-norm of a vector or matrix, $\mathcal{F}^{-1}$ denotes the inverse discrete Fourier transform, $*$ denotes the complex conjugate, $\hat{\cdot}$ denotes the discrete Fourier transform, and $\odot$ denotes element-wise multiplication of two matrices.
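The Fourier-domain evaluation of the Gaussian kernel in (4.2) can be sketched in NumPy as follows; this is an illustrative single-channel version (multi-channel features would sum the cross term over channels), and the division by the number of elements is a normalization commonly used in kernelized-correlation-filter implementations rather than part of the formula above.

```python
# Minimal single-channel sketch of the Gaussian kernel correlation of
# (4.1)-(4.2), evaluated in the Fourier domain.
import numpy as np

def gaussian_kernel_correlation(x, xp, sigma=0.5):
    """x, xp: real 2-D feature maps of equal shape."""
    x_hat, xp_hat = np.fft.fft2(x), np.fft.fft2(xp)
    # F^{-1}(x_hat^* . xp_hat): circular cross-correlation of x and xp
    cross = np.real(np.fft.ifft2(np.conj(x_hat) * xp_hat))
    # ||x||^2 + ||xp||^2 - 2*cross, one value per circular shift
    dist2 = (x**2).sum() + (xp**2).sum() - 2.0 * cross
    # normalization by x.size follows common KCF implementations (assumption)
    return np.exp(-np.maximum(dist2, 0.0) / (sigma**2 * x.size))
```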

Further, the position and scale correlation filters of step 5 are trained as follows:

According to the deep convolutional features extracted in step 3, one kernel correlation filter is trained for each layer's feature map, using the following formula:

$$\hat{\alpha}^{(l)} = \frac{\hat{y}}{\hat{k}^{xx^{(l)}} + \lambda}$$

where $\hat{\alpha}^{(l)}$ is the correlation filter model obtained from the layer-$l$ deep convolutional feature map $\hat{x}^{(l)}$, $\hat{k}^{xx^{(l)}}$ is the kernel autocorrelation of the feature map $\hat{x}^{(l)}$, $\hat{\cdot}$ denotes the discrete Fourier transform, and $\lambda$ is a regularization parameter that prevents the trained model from overfitting, taken as $\lambda = 0.001$.
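A minimal sketch of this per-layer training rule follows, reusing gaussian_kernel_correlation from the previous sketch; the Gaussian label map $y$ and its bandwidth are standard choices in correlation-filter trackers and are assumptions here, not fixed by this step.

```python
# Minimal sketch of training one per-layer filter:
# alpha_hat = y_hat / (k_hat^{xx} + lambda).
import numpy as np

def gaussian_labels(shape, bandwidth=2.0):
    """Gaussian regression target with its peak wrapped to index (0, 0)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * bandwidth**2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def train_filter(x, lam=1e-3, sigma=0.5):
    y_hat = np.fft.fft2(gaussian_labels(x.shape))
    k_xx = gaussian_kernel_correlation(x, x, sigma)  # kernel autocorrelation
    return y_hat / (np.fft.fft2(k_xx) + lam)         # alpha_hat
```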

Further, in step 9 the positions of the maxima in the output maps of the position filter and the scale filter are detected to determine the position $p_t$ and scale $s_t$ of the target in the current frame, as follows (a code sketch of the position detection follows this list):

(9.1) From the frame-$t$ image, select the candidate region $z_{t,\mathrm{trans}}$ at position $p_{t-1}$ with the window size;

(9.2) Extract the three-layer deep convolutional feature maps $\hat{z}_{t,\mathrm{trans}}^{(l)}$ of the candidate region $z_{t,\mathrm{trans}}$;

(9.3) For the layer-$l$ feature map, compute the position-filter output confidence map $f_{t,\mathrm{trans}}^{(l)}$:

$$f_t^{(l)} = \mathcal{F}^{-1}\!\left(\hat{k}^{zx^{(l)}} \odot \hat{\alpha}^{(l)}\right)$$

where $f_t^{(l)}$ is the output response map of the position filter for the layer-$l$ feature map, $\hat{k}^{zx^{(l)}}$ is the kernel cross-correlation of the feature maps $\hat{z}^{(l)}$ and $\hat{x}^{(l)}$, $\hat{\alpha}^{(l)}$ is the position filter trained and updated in the previous frame, $\mathcal{F}^{-1}$ denotes the inverse discrete Fourier transform, $\hat{\cdot}$ denotes the discrete Fourier transform, and $\odot$ denotes element-wise multiplication of two matrices;

(9.4) Estimate the target position from coarse to fine, starting from the output response map $f_t^{(3)}$; the layer-$l$ target position $p_t^{(l)}$ is the position of the maximum of the output response map $f_{t,\mathrm{trans}}^{(l)}$;

(9.5) From the frame-$t$ image, extract the scale-estimation candidate region $z_{t,\mathrm{scale}}$ at position $p_t$ and scale $s_{t-1}$, and construct a scale pyramid;

(9.6) Extract the histogram-of-oriented-gradients features of the candidate region and compute the scale-filter output confidence map $f_{t,\mathrm{scale}}$;

(9.7) The target scale $s_t$ detected in frame $t$ is the scale corresponding to the maximum of the output response map $f_{t,\mathrm{scale}}$.
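The position detection of (9.3)-(9.4) for a single feature layer can be sketched as follows, reusing the helpers above; the coarse-to-fine fusion across the three layers is omitted for brevity, and the circular-shift unwrapping is an implementation detail assumed here.

```python
# Minimal single-layer sketch of (9.3)-(9.4):
# f = F^{-1}(k_hat^{zx} . alpha_hat), with the target displacement read off
# at the peak of the response map.
import numpy as np

def detect_position(alpha_hat, x_template, z_candidate, sigma=0.5):
    k_zx = gaussian_kernel_correlation(x_template, z_candidate, sigma)
    response = np.real(np.fft.ifft2(np.fft.fft2(k_zx) * alpha_hat))
    dy, dx = np.unravel_index(np.argmax(response), response.shape)
    h, w = response.shape
    # responses are circular: shifts past half the window wrap around
    if dy > h / 2: dy -= h
    if dx > w / 2: dx -= w
    return (dy, dx), response
```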

Further, in step 10 the kernel correlation filters are updated with the adaptive model update strategy, as follows (a code sketch follows):

(10.1) After the target position and scale have been estimated, two confidence measures of the tracking result are computed. The first is the peak of the correlation output map:

$$f_{\max} = \max f$$

where $f$ is the response map output by correlating the kernel correlation filter with the candidate region, and $f_{\max}$ is the peak of that map;

The second is the average peak-to-correlation energy (APCE):

$$\mathrm{APCE} = \frac{\lvert f_{\max} - f_{\min}\rvert^2}{\mathrm{mean}\!\left(\sum_{i,j}\big(f_{i,j} - f_{\min}\big)^2\right)}$$

where $f_{\max}$ and $f_{\min}$ are the maximum and minimum of the output map $f$, respectively, $\mathrm{mean}(\cdot)$ is the averaging function, and $f_{i,j}$ is the value in row $i$, column $j$ of the output map $f$;

(10.2) If $f_{\max}$ and APCE both exceed the historical averages of $f_{\max}$ and APCE, the model is updated; otherwise it is not. Each deep convolutional layer is updated by linear interpolation:

$$\hat{\alpha}_t^{(l)} = (1-\eta)\,\hat{\alpha}_{t-1}^{(l)} + \eta\,\hat{\alpha}_t^{(l)}$$
$$\hat{x}_t^{(l)} = (1-\eta)\,\hat{x}_{t-1}^{(l)} + \eta\,\hat{x}_t^{(l)}$$

where $\hat{x}_{t-1}^{(l)}$ and $\hat{\alpha}_{t-1}^{(l)}$ are the feature map and correlation filter of the previous frame for layer $l$, and $\eta$ is the learning rate; the larger $\eta$, the faster the model updates, and $\eta$ is taken as 0.02.
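The two confidence measures of (10.1) and the gated update of (10.2) can be sketched as follows; the history container and the model dictionary are illustrative assumptions.

```python
# Minimal sketch of (10.1)-(10.2): compute f_max and APCE, and update the
# model only when both exceed their historical averages.
import numpy as np

def apce(response):
    f_max, f_min = response.max(), response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)

def maybe_update(model, new_x_hat, new_alpha_hat, response, history, eta=0.02):
    """model: dict with 'x_hat' and 'alpha_hat'; history: list of (f_max, apce)."""
    f_max, a = response.max(), apce(response)
    if history and (f_max > np.mean([h[0] for h in history])
                    and a > np.mean([h[1] for h in history])):
        # linear interpolation with learning rate eta, applied per layer
        model["x_hat"] = (1 - eta) * model["x_hat"] + eta * new_x_hat
        model["alpha_hat"] = (1 - eta) * model["alpha_hat"] + eta * new_alpha_hat
    history.append((f_max, a))
```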

Compared with the prior art, the present invention has the following significant advantages: (1) it uses deep convolutional features, which are learned from image data spanning thousands of categories and have powerful target representation capability, making the algorithm robust to changes in target appearance and to external factors such as illumination changes; (2) it adopts an adaptive scale estimation method that, analogously to position estimation, trains a separate scale filter and exploits the fast Fourier transform, achieving efficient and accurate scale estimation that can be integrated into any discriminative tracking framework; (3) it adopts an adaptive high-confidence model update strategy: when an error occurs in the tracking stage, the detection confidence is low and the model should not be updated, which effectively reduces the risk of tracker drift.

Brief Description of the Drawings

Fig. 1 is a flow chart of the scale-adaptive target tracking method based on the deep-feature kernel correlation filter of the present invention.

Fig. 2 is a schematic diagram of deep convolutional feature extraction.

Fig. 3 is a schematic diagram of target position and scale estimation, where (a) illustrates coarse-to-fine target position estimation and (b) illustrates adaptive target scale estimation.

Fig. 4 is a schematic diagram of the adaptive high-confidence model update.

Fig. 5 shows the evaluation results of the present invention on standard visual tracking benchmarks, where (a) is the precision plot on the OTB50 dataset, (b) is the success plot on the OTB50 dataset, (c) is the precision plot on the OTB100 dataset, and (d) is the success plot on the OTB100 dataset.

Fig. 6 shows target tracking results of the present invention on real videos, where (a) shows the Human test video of the OTB100 dataset, (b) the Walking test video of the OTB100 dataset, (c) the Tiger test video of the OTB50 dataset, and (d) the Dog test video of the OTB50 dataset.

Detailed Description of the Embodiments

For a further understanding of the structural features and effects of the present invention, preferred embodiments are described in detail below in conjunction with the accompanying drawings:

With reference to Fig. 1, the scale-adaptive target tracking method based on the deep-feature kernel correlation filter of the present invention comprises the following steps:

Step 1: input the initial position $p_0$ and scale $s_0$ of the target, and set the window size to 2.0 times the target's initial bounding box;

Step 2: according to the target position $p_{t-1}$ in frame $t-1$, obtain the target region $x_{t-1}$, whose size is the window size;

Step 3: extract the deep convolutional features of the target region $x_{t-1}$ and apply the fast Fourier transform to obtain the feature map $\hat{x}_{t-1}$, where $\hat{\cdot}$ denotes the discrete Fourier transform;

Step 4: compute the kernel autocorrelation $\hat{k}^{x_{t-1}x_{t-1}}$ from the feature map $\hat{x}_{t-1}$;

Step 5: train the position and scale correlation filters;

Step 6: according to the target position $p_{t-1}$ in frame $t-1$, obtain the candidate region $z_t$ of the target in frame $t$, whose size is the window size;

Step 7: extract the deep convolutional features of the candidate region $z_t$ and apply the fast Fourier transform to obtain the feature map $\hat{z}_t$, where $\hat{\cdot}$ denotes the discrete Fourier transform;

Step 8: compute the kernel cross-correlation $\hat{k}^{z_t x_{t-1}}$ between $\hat{z}_t$ and the feature map $\hat{x}_{t-1}$ of the previous frame;

Step 9: detect the positions of the maxima in the output maps of the position filter and the scale filter, respectively, to determine the position $p_t$ and scale $s_t$ of the target in the current frame;

Step 10: update the kernel correlation filters using the adaptive model update strategy.

As a specific example, the deep convolutional features of steps 3 and 7 are extracted as follows:

(3.1) Preprocessing: scale the window region I to the 224×224 input size required by the convolutional neural network;

(3.2) Feature extraction: extract the feature maps of the 3rd, 4th, and 5th convolutional layers of the convolutional neural network;

(3.3) Bilinear interpolation: upsample the three extracted convolutional feature maps to the same size.

As a specific example, the kernel autocorrelation $\hat{k}^{x_{t-1}x_{t-1}}$ of step 4 and the kernel cross-correlation $\hat{k}^{z_t x_{t-1}}$ of step 8 are computed as follows:

(4.1) A Gaussian kernel is used:

$$k(x, x') = \exp\!\left(-\frac{1}{\sigma^2}\,\lVert x - x'\rVert^2\right)$$

where $k(x,x')$ is the Gaussian kernel computed from the two feature maps $x$ and $x'$, $\exp(\cdot)$ is the exponential function, $\sigma$ is the standard deviation of the Gaussian function, taken as $\sigma = 0.5$, and $\lVert\cdot\rVert$ is the 2-norm of a vector or matrix;

(4.2) The kernel correlation is computed as:

$$k^{xx'} = \exp\!\left(-\frac{1}{\sigma^2}\Big(\lVert x\rVert^2 + \lVert x'\rVert^2 - 2\,\mathcal{F}^{-1}\big(\hat{x}^{*} \odot \hat{x}'\big)\Big)\right)$$

where $k^{xx'}$ is the kernel correlation of the feature maps $x$ and $x'$, $\exp(\cdot)$ is the exponential function, $\sigma$ is the standard deviation of the Gaussian function, taken as $\sigma = 0.5$, $\lVert\cdot\rVert$ is the 2-norm of a vector or matrix, $\mathcal{F}^{-1}$ denotes the inverse discrete Fourier transform, $*$ denotes the complex conjugate, $\hat{\cdot}$ denotes the discrete Fourier transform, and $\odot$ denotes element-wise multiplication of two matrices.

As a specific example, the position and scale correlation filters of step 5 are trained as follows:

According to the deep convolutional features extracted in step 3, one kernel correlation filter is trained for each layer's feature map, using the following formula:

$$\hat{\alpha}^{(l)} = \frac{\hat{y}}{\hat{k}^{xx^{(l)}} + \lambda}$$

where $\hat{\alpha}^{(l)}$ is the correlation filter model obtained from the layer-$l$ deep convolutional feature map $\hat{x}^{(l)}$, $\hat{k}^{xx^{(l)}}$ is the kernel autocorrelation of the feature map $\hat{x}^{(l)}$, $\hat{\cdot}$ denotes the discrete Fourier transform, and $\lambda$ is a regularization parameter that prevents the trained model from overfitting, taken as $\lambda = 0.001$.

As a specific example, in step 9 the positions of the maxima in the output maps of the position filter and the scale filter are detected to determine the position $p_t$ and scale $s_t$ of the target in the current frame, as follows:

(9.1) From the frame-$t$ image, select the candidate region $z_{t,\mathrm{trans}}$ at position $p_{t-1}$ with the window size;

(9.2) Extract the three-layer deep convolutional feature maps $\hat{z}_{t,\mathrm{trans}}^{(l)}$ of the candidate region $z_{t,\mathrm{trans}}$;

(9.3) For the layer-$l$ feature map, compute the position-filter output confidence map $f_{t,\mathrm{trans}}^{(l)}$:

$$f_t^{(l)} = \mathcal{F}^{-1}\!\left(\hat{k}^{zx^{(l)}} \odot \hat{\alpha}^{(l)}\right)$$

where $f_t^{(l)}$ is the output response map of the position filter for the layer-$l$ feature map, $\hat{k}^{zx^{(l)}}$ is the kernel cross-correlation of the feature maps $\hat{z}^{(l)}$ and $\hat{x}^{(l)}$, $\hat{\alpha}^{(l)}$ is the position filter trained and updated in the previous frame, $\mathcal{F}^{-1}$ denotes the inverse discrete Fourier transform, $\hat{\cdot}$ denotes the discrete Fourier transform, and $\odot$ denotes element-wise multiplication of two matrices;

(9.4) Estimate the target position from coarse to fine, starting from the output response map $f_t^{(3)}$; the layer-$l$ target position $p_t^{(l)}$ is the position of the maximum of the output response map $f_{t,\mathrm{trans}}^{(l)}$;

(9.5) From the frame-$t$ image, extract the scale-estimation candidate region $z_{t,\mathrm{scale}}$ at position $p_t$ and scale $s_{t-1}$, and construct a scale pyramid;

(9.6) Extract the histogram-of-oriented-gradients features of the candidate region and compute the scale-filter output confidence map $f_{t,\mathrm{scale}}$;

(9.7) The target scale $s_t$ detected in frame $t$ is the scale corresponding to the maximum of the output response map $f_{t,\mathrm{scale}}$.

As a specific example, in step 10 the kernel correlation filters are updated with the adaptive model update strategy, as follows:

(10.1) After the target position and scale have been estimated, two confidence measures of the tracking result are computed. The first is the peak of the correlation output map:

$$f_{\max} = \max f$$

where $f$ is the response map output by correlating the kernel correlation filter with the candidate region, and $f_{\max}$ is the peak of that map;

The second is the average peak-to-correlation energy (APCE):

$$\mathrm{APCE} = \frac{\lvert f_{\max} - f_{\min}\rvert^2}{\mathrm{mean}\!\left(\sum_{i,j}\big(f_{i,j} - f_{\min}\big)^2\right)}$$

where $f_{\max}$ and $f_{\min}$ are the maximum and minimum of the output map $f$, respectively, $\mathrm{mean}(\cdot)$ is the averaging function, and $f_{i,j}$ is the value in row $i$, column $j$ of the output map $f$;

(10.2) If $f_{\max}$ and APCE both exceed the historical averages of $f_{\max}$ and APCE, the model is updated; otherwise it is not. Each deep convolutional layer is updated by linear interpolation:

$$\hat{\alpha}_t^{(l)} = (1-\eta)\,\hat{\alpha}_{t-1}^{(l)} + \eta\,\hat{\alpha}_t^{(l)}$$
$$\hat{x}_t^{(l)} = (1-\eta)\,\hat{x}_{t-1}^{(l)} + \eta\,\hat{x}_t^{(l)}$$

where $\hat{x}_{t-1}^{(l)}$ and $\hat{\alpha}_{t-1}^{(l)}$ are the feature map and correlation filter of the previous frame for layer $l$, and $\eta$ is the learning rate; the larger $\eta$, the faster the model updates, and $\eta$ is taken as 0.02.

The invention overcomes tracking failures caused by appearance changes such as target deformation, illumination change, target rotation, and scale change, as well as by target occlusion. With the powerful target representation capability of deep convolutional features, the robustness of target tracking under complex scenes and appearance changes is improved; in addition, the invention handles target scale changes efficiently and accurately; finally, thanks to the adaptive high-confidence model update strategy, model tracking drift caused by erroneous detections is reduced.

The present invention is described in further detail below in conjunction with a specific embodiment.

Embodiment 1

The scale-adaptive target tracking method based on the deep-feature kernel correlation filter of the present invention is divided into four main stages: the first extracts deep convolutional features; the second trains the kernel correlation filters; the third estimates the position and scale of the target in the current frame; and the fourth applies the adaptive high-confidence model update strategy.

Step 1: input the initial position $p_0$ and scale $s_0$ of the target, and set the window size to 2.0 times the target's initial bounding box;

Step 2: according to the target position $p_{t-1}$ in frame $t-1$, obtain the target region $x_{t-1}$, whose size is the window size;

Step 3: extract the deep convolutional features of the target region $x_{t-1}$ and apply the fast Fourier transform to obtain the feature map $\hat{x}_{t-1}$, where $\hat{\cdot}$ denotes the discrete Fourier transform;

Step 4: compute the kernel autocorrelation $\hat{k}^{x_{t-1}x_{t-1}}$ from the feature map $\hat{x}_{t-1}$;

Step 5: train the position and scale correlation filters;

Step 6: according to the target position $p_{t-1}$ in frame $t-1$, obtain the candidate region $z_t$ of the target in frame $t$, whose size is the window size;

Step 7: extract the deep convolutional features of the candidate region $z_t$ and apply the fast Fourier transform to obtain the feature map $\hat{z}_t$, where $\hat{\cdot}$ denotes the discrete Fourier transform;

Step 8: compute the kernel cross-correlation $\hat{k}^{z_t x_{t-1}}$ between $\hat{z}_t$ and the feature map $\hat{x}_{t-1}$ of the previous frame;

Step 9: detect the positions of the maxima in the output response maps of the position filter and the scale filter, respectively, to determine the position $p_t$ and scale $s_t$ of the target in the current frame;

Step 10: update the kernel correlation filters using the adaptive high-confidence model update strategy.

As shown in Fig. 2, a schematic of deep convolutional feature extraction for the scale-adaptive target tracking method based on the deep-feature kernel correlation filter is given. The method of the present invention is characterized by the extraction of deep convolutional features: these features are learned from image data spanning thousands of categories and have powerful target representation capability, which makes the algorithm robust to changes in target appearance and to external factors such as illumination changes. The specific steps are as follows:

(3.1) Preprocessing: scale the window region I to the 224×224 input size required by the convolutional neural network;

(3.2) Feature extraction: extract the feature maps of the 3rd, 4th, and 5th convolutional layers of the convolutional neural network;

(3.3) Bilinear interpolation: upsample the three extracted convolutional feature maps to the same size.

As shown in Fig. 3, a schematic of target position and scale estimation for the scale-adaptive target tracking method based on the deep-feature kernel correlation filter is given, where Fig. 3(a) illustrates coarse-to-fine target position estimation and Fig. 3(b) illustrates adaptive target scale estimation. The method of the present invention is characterized by a coarse-to-fine hierarchical position estimation method and an adaptive target scale estimation method. The traditional kernelized correlation filter uses a fixed model size for the target and cannot handle changes in target scale, which easily leads to tracking failure. The present invention proposes an adaptive scale estimation method whose main idea is to train an independent scale filter and take the scale at which the scale filter's correlation response is maximal; this approach exploits the fast Fourier transform, is simple and efficient, and can be integrated into traditional discriminative target tracking methods. The specific steps are as follows (a code sketch of the scale search follows this list):

(9.1) From the frame-$t$ image, select the candidate region $z_{t,\mathrm{trans}}$ at position $p_{t-1}$ with the window size;

(9.2) Extract the three-layer deep convolutional feature maps $\hat{z}_{t,\mathrm{trans}}^{(l)}$ of the candidate region $z_{t,\mathrm{trans}}$;

(9.3) For the layer-$l$ feature map, compute the position-filter output confidence map $f_{t,\mathrm{trans}}^{(l)}$:

$$f_t^{(l)} = \mathcal{F}^{-1}\!\left(\hat{k}^{zx^{(l)}} \odot \hat{\alpha}^{(l)}\right)$$

where $f_t^{(l)}$ is the output response map of the position filter for the layer-$l$ feature map, $\hat{k}^{zx^{(l)}}$ is the kernel cross-correlation of the feature maps $\hat{z}^{(l)}$ and $\hat{x}^{(l)}$, $\hat{\alpha}^{(l)}$ is the position filter trained and updated in the previous frame, $\mathcal{F}^{-1}$ denotes the inverse discrete Fourier transform, $\hat{\cdot}$ denotes the discrete Fourier transform, and $\odot$ denotes element-wise multiplication of two matrices;

(9.4) Estimate the target position from coarse to fine, starting from the output response map $f_t^{(3)}$; the layer-$l$ target position $p_t^{(l)}$ is the position of the maximum of the output response map $f_{t,\mathrm{trans}}^{(l)}$;

(9.5) From the frame-$t$ image, extract the scale-estimation candidate region $z_{t,\mathrm{scale}}$ at position $p_t$ and scale $s_{t-1}$, and construct a scale pyramid;

(9.6) Extract the histogram-of-oriented-gradients features of the candidate region and compute the scale-filter output confidence map $f_{t,\mathrm{scale}}$;

(9.7) The target scale $s_t$ detected in frame $t$ is the scale corresponding to the maximum of the output response map $f_{t,\mathrm{scale}}$.
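The scale search of (9.5)-(9.7) can be sketched as follows; the number of pyramid levels, the scale step, and the crop/hog/scale_filter callables are illustrative assumptions, since the text fixes only that HOG features and a separate scale filter are used.

```python
# Minimal sketch of (9.5)-(9.7): build a scale pyramid around the estimated
# position and pick the scale whose filter response peaks. Pyramid parameters
# and the helper callables are assumptions, not fixed by the patent.
import numpy as np

def scale_pyramid_factors(num_scales=33, step=1.02):
    return step ** (np.arange(num_scales) - num_scales // 2)

def detect_scale(frame, p, s_prev, scale_filter, crop, hog):
    """crop(frame, p, size), hog(patch) and scale_filter(feats) are
    hypothetical callables standing in for the operations above."""
    factors = scale_pyramid_factors()
    feats = np.stack([hog(crop(frame, p, s_prev * f)) for f in factors])
    scores = scale_filter(feats)  # one correlation score per pyramid level
    return s_prev * factors[int(np.argmax(scores))]
```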

As shown in Fig. 4, a schematic of the adaptive high-confidence model update of the scale-adaptive target tracking method based on the deep-feature kernel correlation filter is given. The method of the present invention is characterized by an adaptive high-confidence model update strategy. The traditional kernel-correlation-filter update uses simple linear interpolation without detecting tracking errors; when target detection goes wrong during tracking, the model update contaminates the kernel correlation filter and causes the tracker to drift. The present invention proposes two detection confidence criteria: the model is updated only when the target is determined to be tracked correctly. The specific steps are as follows:

(10.1) After the target position and scale have been estimated, two confidence measures of the tracking result are computed. The first is the peak of the correlation output map:

$$f_{\max} = \max f$$

where $f$ is the response map output by correlating the kernel correlation filter with the candidate region, and $f_{\max}$ is the peak of that map. The second is the average peak-to-correlation energy (APCE):

$$\mathrm{APCE} = \frac{\lvert f_{\max} - f_{\min}\rvert^2}{\mathrm{mean}\!\left(\sum_{i,j}\big(f_{i,j} - f_{\min}\big)^2\right)}$$

where $f_{\max}$ and $f_{\min}$ are the maximum and minimum of the output map $f$, $\mathrm{mean}(\cdot)$ is the averaging function, and $f_{i,j}$ is the value in row $i$, column $j$ of the output map $f$.

(10.2) If $f_{\max}$ and APCE both exceed the historical averages of $f_{\max}$ and APCE, the model is updated; otherwise it is not. Each deep convolutional layer is updated by linear interpolation:

$$\hat{\alpha}_t^{(l)} = (1-\eta)\,\hat{\alpha}_{t-1}^{(l)} + \eta\,\hat{\alpha}_t^{(l)}$$
$$\hat{x}_t^{(l)} = (1-\eta)\,\hat{x}_{t-1}^{(l)} + \eta\,\hat{x}_t^{(l)}$$

where $\hat{x}_{t-1}^{(l)}$ and $\hat{\alpha}_{t-1}^{(l)}$ are the feature map and correlation filter of the previous frame for layer $l$, and $\eta$ is the learning rate; the larger $\eta$, the faster the model updates, and $\eta$ is taken as 0.02 in the invention.

As shown in Fig. 5, the evaluation results of the present invention on the standard visual tracking benchmarks OTB50 and OTB100 are presented, where (a) is the precision plot on the OTB50 dataset, (b) is the success plot on the OTB50 dataset, (c) is the precision plot on the OTB100 dataset, and (d) is the success plot on the OTB100 dataset. The OTB50 dataset contains 50 video sequences with 29,000 frames in total, while the OTB100 dataset contains 100 video sequences with 58,897 frames in total; every frame is annotated with the target. There are two main evaluation metrics: precision and success rate. In the precision plots (a) and (c), precision is defined as the percentage of frames in which the distance between the detected position and the ground-truth position does not exceed 20 pixels; in the success plots (b) and (d), a frame counts as successful when the overlap ratio, i.e., the area of the intersection of the detected bounding box and the ground-truth bounding box divided by the area of their union, exceeds 50%, and the success rate is the percentage of successful frames. The evaluation results show that, compared with classical target tracking algorithms, the present invention (fSCF2) performs excellently on the target tracking task.
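For reference, the two metrics described above can be computed as in the following sketch; boxes are assumed to be (x, y, w, h) tuples, and the 20-pixel and 50% thresholds are the ones stated in the text.

```python
# Minimal sketch of the OTB metrics described above: precision at a 20-pixel
# centre-error threshold and success at 50% overlap (intersection over union).
import numpy as np

def center_error(b1, b2):
    return np.hypot(b1[0] + b1[2] / 2 - (b2[0] + b2[2] / 2),
                    b1[1] + b1[3] / 2 - (b2[1] + b2[3] / 2))

def iou(b1, b2):
    x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    x2 = min(b1[0] + b1[2], b2[0] + b2[2])
    y2 = min(b1[1] + b1[3], b2[1] + b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (b1[2] * b1[3] + b2[2] * b2[3] - inter)

def precision_and_success(pred, gt):
    prec = np.mean([center_error(p, g) <= 20 for p, g in zip(pred, gt)])
    succ = np.mean([iou(p, g) > 0.5 for p, g in zip(pred, gt)])
    return prec, succ
```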

As shown in Fig. 6, tracking results of the present invention are compared with several excellent recent algorithms on real videos, where (a) shows the Human test video of the OTB100 dataset, (b) the Walking test video of the OTB100 dataset, (c) the Tiger test video of the OTB50 dataset, and (d) the Dog test video of the OTB50 dataset. Overall, compared with classical target tracking algorithms, the present invention (fSCF2) achieves the best tracking results: thanks to deep convolutional features with strong target representation capability, the adaptive scale estimation mechanism, and the adaptive high-confidence model update strategy, the invention tracks the target accurately even under adverse conditions such as occlusion, scale change, target deformation, and fast target motion.

Claims (6)

1. A scale-adaptive target tracking method based on a deep-feature kernel correlation filter, characterized by comprising the following steps:
step 1, inputting an initial position $p_0$ and scale $s_0$ of a target, and setting the window size to 2.0 times the target's initial bounding box;
step 2, obtaining, according to the target position $p_{t-1}$ of frame $t-1$, a target region $x_{t-1}$ whose size is the window size;
step 3, extracting the deep convolutional features of the target region $x_{t-1}$ and performing a fast Fourier transform to obtain the feature map $\hat{x}_{t-1}$, wherein $\hat{\cdot}$ denotes the discrete Fourier transform;
step 4, computing the kernel autocorrelation $\hat{k}^{x_{t-1}x_{t-1}}$ from the feature map $\hat{x}_{t-1}$;
step 5, training the position and scale correlation filters;
step 6, obtaining, according to the target position $p_{t-1}$ of frame $t-1$, a candidate region $z_t$ of the target in frame $t$ whose size is the window size;
step 7, extracting the deep convolutional features of the candidate region $z_t$ and performing a fast Fourier transform to obtain the feature map $\hat{z}_t$, wherein $\hat{\cdot}$ denotes the discrete Fourier transform;
step 8, computing the kernel cross-correlation $\hat{k}^{z_t x_{t-1}}$ with the feature map $\hat{x}_{t-1}$ of the previous frame;
step 9, detecting the positions corresponding to the maxima in the output maps of the position filter and the scale filter, respectively, to determine the position $p_t$ and scale $s_t$ of the target in the current frame;
step 10, updating the kernel correlation filters using an adaptive model update strategy.
2. The scale-adaptive target tracking method based on a deep-feature kernel correlation filter according to claim 1, characterized in that the deep convolutional features of steps 3 and 7 are extracted as follows:
(3.1) preprocessing: scaling the window region I to the 224×224 input size required by the convolutional neural network;
(3.2) feature extraction: extracting the feature maps of the 3rd, 4th, and 5th convolutional layers of the convolutional neural network;
(3.3) bilinear interpolation: upsampling the three extracted convolutional feature maps to the same size.
3. The scale-adaptive target tracking method based on a deep-feature kernel correlation filter according to claim 1, characterized in that the kernel autocorrelation $\hat{k}^{x_{t-1}x_{t-1}}$ of step 4 and the kernel cross-correlation $\hat{k}^{z_t x_{t-1}}$ of step 8 are computed as follows:
(4.1) a Gaussian kernel is used:

$$k(x, x') = \exp\!\left(-\frac{1}{\sigma^2}\,\lVert x - x'\rVert^2\right)$$

wherein $k(x,x')$ denotes the Gaussian kernel computed from the two feature maps $x$ and $x'$, $\exp(\cdot)$ denotes the exponential function, $\sigma$ is the standard deviation of the Gaussian function, taken as $\sigma = 0.5$, and $\lVert\cdot\rVert$ denotes the 2-norm of a vector or matrix;
(4.2) the kernel correlation is computed as:

$$k^{xx'} = \exp\!\left(-\frac{1}{\sigma^2}\Big(\lVert x\rVert^2 + \lVert x'\rVert^2 - 2\,\mathcal{F}^{-1}\big(\hat{x}^{*} \odot \hat{x}'\big)\Big)\right)$$

wherein $k^{xx'}$ denotes the kernel correlation of the feature maps $x$ and $x'$, $\exp(\cdot)$ denotes the exponential function, $\sigma$ is the standard deviation of the Gaussian function, taken as $\sigma = 0.5$, $\lVert\cdot\rVert$ denotes the 2-norm of a vector or matrix, $\mathcal{F}^{-1}$ denotes the inverse discrete Fourier transform, $*$ denotes the complex conjugate, $\hat{\cdot}$ denotes the discrete Fourier transform, and $\odot$ denotes element-wise multiplication of two matrices.
4. The scale-adaptive target tracking method based on a deep-feature kernel correlation filter according to claim 1, characterized in that the position and scale correlation filters of step 5 are trained as follows:
according to the deep convolutional features extracted in step 3, one kernel correlation filter is trained for each layer's feature map, using the following formula:

$$\hat{\alpha}^{(l)} = \frac{\hat{y}}{\hat{k}^{xx^{(l)}} + \lambda}$$

wherein $\hat{\alpha}^{(l)}$ denotes the correlation filter model obtained from the layer-$l$ deep convolutional feature map $\hat{x}^{(l)}$, $\hat{k}^{xx^{(l)}}$ denotes the kernel autocorrelation of the feature map $\hat{x}^{(l)}$, $\hat{\cdot}$ denotes the discrete Fourier transform, and $\lambda$ is a regularization parameter preventing the trained model from overfitting, taken as $\lambda = 0.001$.
5. The method according to claim 1, characterized in that step 9 detects the positions corresponding to the maxima in the output maps of the position filter and the scale filter to determine the position $p_t$ and scale $s_t$ of the target in the current frame, as follows:
(9.1) from the frame-$t$ image, selecting the candidate region $z_{t,\mathrm{trans}}$ at position $p_{t-1}$ with the window size;
(9.2) extracting the three-layer deep convolutional feature maps $\hat{z}_{t,\mathrm{trans}}^{(l)}$ of the candidate region $z_{t,\mathrm{trans}}$;
(9.3) for the layer-$l$ feature map, computing the position-filter output confidence map $f_{t,\mathrm{trans}}^{(l)}$:

$$f_t^{(l)} = \mathcal{F}^{-1}\!\left(\hat{k}^{zx^{(l)}} \odot \hat{\alpha}^{(l)}\right)$$

wherein $f_t^{(l)}$ denotes the output response map of the position filter for the layer-$l$ feature map, $\hat{k}^{zx^{(l)}}$ denotes the kernel cross-correlation of the feature maps $\hat{z}^{(l)}$ and $\hat{x}^{(l)}$, $\hat{\alpha}^{(l)}$ is the position filter trained and updated from the previous frame, $\mathcal{F}^{-1}$ denotes the inverse discrete Fourier transform, $\hat{\cdot}$ denotes the discrete Fourier transform, and $\odot$ denotes element-wise multiplication of two matrices;
(9.4) estimating the target position from coarse to fine, starting from the output response map $f_t^{(3)}$; the layer-$l$ target position $p_t^{(l)}$ is the position corresponding to the maximum of the output response map $f_{t,\mathrm{trans}}^{(l)}$;
(9.5) from the frame-$t$ image, extracting the scale-estimation candidate region $z_{t,\mathrm{scale}}$ at position $p_t$ and scale $s_{t-1}$, and constructing a scale pyramid;
(9.6) extracting the histogram-of-oriented-gradients features of the candidate region and computing the scale-filter output confidence map $f_{t,\mathrm{scale}}$;
(9.7) the target scale $s_t$ detected at frame $t$ is the scale corresponding to the maximum of the output response map $f_{t,\mathrm{scale}}$.
6. The scale-adaptive target tracking method based on a deep-feature kernel correlation filter according to claim 1, characterized in that step 10 updates the kernel correlation filters using an adaptive model update strategy, as follows:
(10.1) after the target position and scale have been estimated, computing two confidence measures of the tracking result, the first being the peak of the correlation output map:

$$f_{\max} = \max f$$

wherein $f$ is the response map output by correlating the kernel correlation filter with the candidate region, and $f_{\max}$ is the peak of that map;
the second being the average peak-to-correlation energy (APCE):

$$\mathrm{APCE} = \frac{\lvert f_{\max} - f_{\min}\rvert^2}{\mathrm{mean}\!\left(\sum_{i,j}\big(f_{i,j} - f_{\min}\big)^2\right)}$$

wherein $f_{\max}$ and $f_{\min}$ are the maximum and minimum of the output map $f$, respectively, $\mathrm{mean}(\cdot)$ denotes the averaging function, and $f_{i,j}$ denotes the value in row $i$, column $j$ of the output map $f$;
(10.2) if $f_{\max}$ and APCE both exceed the historical averages of $f_{\max}$ and APCE, updating the model; otherwise, not updating; for each deep convolutional layer, updating by linear interpolation:

$$\hat{\alpha}_t^{(l)} = (1-\eta)\,\hat{\alpha}_{t-1}^{(l)} + \eta\,\hat{\alpha}_t^{(l)}$$
$$\hat{x}_t^{(l)} = (1-\eta)\,\hat{x}_{t-1}^{(l)} + \eta\,\hat{x}_t^{(l)}$$

wherein $\hat{x}_{t-1}^{(l)}$ and $\hat{\alpha}_{t-1}^{(l)}$ denote the feature map and correlation filter of the previous frame for layer $l$, and $\eta$ is the learning rate; the larger $\eta$, the faster the model updates, and $\eta$ is taken as 0.02.
CN201710355456.3A 2017-05-19 2017-05-19 Dimension self-adaption method for tracking target based on depth characteristic core correlation filter Pending CN107154024A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710355456.3A CN107154024A (en) 2017-05-19 2017-05-19 Dimension self-adaption method for tracking target based on depth characteristic core correlation filter

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710355456.3A CN107154024A (en) 2017-05-19 2017-05-19 Dimension self-adaption method for tracking target based on depth characteristic core correlation filter

Publications (1)

Publication Number Publication Date
CN107154024A true CN107154024A (en) 2017-09-12

Family

ID=59794201

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710355456.3A Pending CN107154024A (en) 2017-05-19 2017-05-19 Dimension self-adaption method for tracking target based on depth characteristic core correlation filter

Country Status (1)

Country Link
CN (1) CN107154024A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015163830A1 (en) * 2014-04-22 2015-10-29 Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi Target localization and size estimation via multiple model learning in visual tracking
CN106570486A (en) * 2016-11-09 2017-04-19 华南理工大学 Kernel correlation filtering target tracking method based on feature fusion and Bayesian classification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张雷 (ZHANG Lei): "Research on Real-time Target Tracking Algorithms and Implementation Techniques in Complex Scenes", China Doctoral Dissertations Full-text Database, Information Science and Technology *

Cited By (92)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107730536A (en) * 2017-09-15 2018-02-23 北京飞搜科技有限公司 High-speed correlation filtering object tracking method based on depth features
CN107730536B (en) * 2017-09-15 2020-05-12 苏州飞搜科技有限公司 High-speed correlation filtering object tracking method based on depth features
CN107644217A (en) * 2017-09-29 2018-01-30 中国科学技术大学 Target tracking method based on convolutional neural networks and correlation filters
CN107644217B (en) * 2017-09-29 2020-06-26 中国科学技术大学 Target tracking method based on convolutional neural network and correlation filter
CN108182388A (en) * 2017-12-14 2018-06-19 哈尔滨工业大学(威海) Image-based moving target tracking method
CN108053424B (en) * 2017-12-15 2020-06-16 深圳云天励飞技术有限公司 Target tracking method and device, electronic equipment and storage medium
CN108053424A (en) * 2017-12-15 2018-05-18 深圳云天励飞技术有限公司 Target tracking method and device, electronic equipment and storage medium
CN108280808A (en) * 2017-12-15 2018-07-13 西安电子科技大学 Target tracking method based on structured-output correlation filters
CN109146917A (en) * 2017-12-29 2019-01-04 西安电子科技大学 Target tracking method with an elastic update policy
CN109146928A (en) * 2017-12-29 2019-01-04 西安电子科技大学 Target tracking method with gradient-threshold-based model updating
CN109146917B (en) * 2017-12-29 2020-07-28 西安电子科技大学 A Target Tracking Method for Elastic Update Policy
CN109146928B (en) * 2017-12-29 2021-09-24 西安电子科技大学 A Target Tracking Method Based on Gradient Threshold Judging Model Update
CN108345885A (en) * 2018-01-18 2018-07-31 浙江大华技术股份有限公司 Target occlusion detection method and device
CN108346159A (en) * 2018-01-28 2018-07-31 北京工业大学 Visual target tracking method based on tracking-learning-detection
CN108346159B (en) * 2018-01-28 2021-10-15 北京工业大学 A Tracking-Learning-Detection-Based Visual Object Tracking Method
CN108573499A (en) * 2018-03-16 2018-09-25 东华大学 A Visual Object Tracking Method Based on Scale Adaptation and Occlusion Detection
CN108573499B (en) * 2018-03-16 2021-04-02 东华大学 A Visual Object Tracking Method Based on Scale Adaptive and Occlusion Detection
CN108665481A (en) * 2018-03-27 2018-10-16 西安电子科技大学 Adaptive anti-occlusion infrared target tracking method based on multi-layer deep feature fusion
CN108665481B (en) * 2018-03-27 2022-05-31 西安电子科技大学 Adaptive anti-occlusion infrared target tracking method based on multi-layer deep feature fusion
CN109064491A (en) * 2018-04-12 2018-12-21 江苏省基础地理信息中心 Adaptive block-based kernel correlation filter tracking method
CN108830878A (en) * 2018-04-13 2018-11-16 上海大学 Target tracking method based on FPN neural network
CN108830878B (en) * 2018-04-13 2021-02-23 上海大学 A target tracking method based on FPN neural network
CN108550126A (en) * 2018-04-18 2018-09-18 长沙理工大学 Adaptive correlation filter target tracking method and system
CN108846345B (en) * 2018-06-06 2021-09-17 安徽大学 Moving object scale estimation method in monitoring scene
CN108846345A (en) * 2018-06-06 2018-11-20 安徽大学 Moving object scale estimation method in monitoring scene
CN108734151B (en) * 2018-06-14 2020-04-14 厦门大学 Robust long-range target tracking method based on correlation filtering and deep Siamese network
CN108734151A (en) * 2018-06-14 2018-11-02 厦门大学 Robust long-range target tracking method based on correlation filtering and deep Siamese network
CN110633595B (en) * 2018-06-21 2022-12-02 北京京东尚科信息技术有限公司 Target detection method and device by utilizing bilinear interpolation
CN110633595A (en) * 2018-06-21 2019-12-31 北京京东尚科信息技术有限公司 Target detection method and device by utilizing bilinear interpolation
CN109035300A (en) * 2018-07-05 2018-12-18 桂林电子科技大学 Target tracking method based on depth feature and average peak correlation energy
CN109035300B (en) * 2018-07-05 2021-03-26 桂林电子科技大学 Target tracking method based on depth feature and average peak correlation energy
CN109035290A (en) * 2018-07-16 2018-12-18 南京信息工程大学 Tracking algorithm based on high-confidence updating and incremental learning
CN109166106A (en) * 2018-08-02 2019-01-08 山东大学 Sliding-window-based target detection and alignment method and apparatus
CN109255304A (en) * 2018-08-17 2019-01-22 西安电子科技大学 Target tracking method based on distribution field features
CN109255304B (en) * 2018-08-17 2021-07-27 西安电子科技大学 Target tracking method based on distribution field features
CN109410246A (en) * 2018-09-25 2019-03-01 深圳市中科视讯智能系统技术有限公司 Visual tracking method and device based on correlation filtering
CN109410246B (en) * 2018-09-25 2021-06-11 杭州视语智能视觉系统技术有限公司 Visual tracking method and device based on correlation filtering
CN109410247A (en) * 2018-10-16 2019-03-01 中国石油大学(华东) Video tracking algorithm with multiple templates and adaptive feature selection
CN109461172A (en) * 2018-10-25 2019-03-12 南京理工大学 Correlation filtering adaptive video tracking method combining hand-crafted and deep features
CN111192288A (en) * 2018-11-14 2020-05-22 天津大学青岛海洋技术研究院 Target tracking algorithm based on deformation sample generation network
CN111192288B (en) * 2018-11-14 2023-08-04 天津大学青岛海洋技术研究院 Target tracking algorithm based on deformation sample generation network
CN109584271A (en) * 2018-11-15 2019-04-05 西北工业大学 High-speed correlation filtering tracking based on high-confidence update strategy
CN109410251A (en) * 2018-11-19 2019-03-01 南京邮电大学 Target tracking method based on densely connected convolutional network
CN109741366A (en) * 2018-11-27 2019-05-10 昆明理工大学 Correlation filtering target tracking method fusing multi-layer convolutional features
CN109858326A (en) * 2018-12-11 2019-06-07 中国科学院自动化研究所 Weakly supervised online visual tracking method and system based on categorical semantics
CN109785360A (en) * 2018-12-18 2019-05-21 南京理工大学 Adaptive real-time tracking method based on online sample mining
CN110211149B (en) * 2018-12-25 2022-08-12 湖州云通科技有限公司 Scale self-adaptive kernel correlation filtering tracking method based on background perception
CN110211149A (en) * 2018-12-25 2019-09-06 湖州云通科技有限公司 Scale-adaptive kernel correlation filter tracking method based on context awareness
CN109886996A (en) * 2019-01-15 2019-06-14 东华大学 A visual tracking optimization method
CN109886996B (en) * 2019-01-15 2023-06-06 东华大学 A Visual Tracking Optimization Method
CN109859244B (en) * 2019-01-22 2022-07-08 西安微电子技术研究所 Visual tracking method based on convolution sparse filtering
CN109859244A (en) * 2019-01-22 2019-06-07 西安微电子技术研究所 A kind of visual tracking method based on convolution sparseness filtering
CN109934098A (en) * 2019-01-24 2019-06-25 西北工业大学 An intelligent camera system with privacy protection and its implementation method
CN111507999A (en) * 2019-01-30 2020-08-07 北京四维图新科技股份有限公司 FDSST algorithm-based target tracking method and device
CN111507999B (en) * 2019-01-30 2023-07-18 北京四维图新科技股份有限公司 Target tracking method and device based on FDSST algorithm
CN109801311A (en) * 2019-01-31 2019-05-24 长安大学 Visual target tracking method based on deep residual network features
CN109801311B (en) * 2019-01-31 2021-07-16 长安大学 A Visual Object Tracking Method Based on Deep Residual Network Features
CN109858454A (en) * 2019-02-15 2019-06-07 东北大学 A Dual-Model Adaptive Kernel Correlation Filtering Tracking Method
CN109858454B (en) * 2019-02-15 2023-04-07 东北大学 Adaptive kernel correlation filtering tracking method based on dual models
CN109858455A (en) * 2019-02-18 2019-06-07 南京航空航天大学 Block-detection scale-adaptive tracking method for circular targets
CN110033006A (en) * 2019-04-04 2019-07-19 中设设计集团股份有限公司 Vehicle detection and tracking method based on nonlinear dimensionality reduction of color features
CN110197126A (en) * 2019-05-06 2019-09-03 深圳岚锋创视网络科技有限公司 Target tracking method, device and portable terminal
CN110544267A (en) * 2019-07-24 2019-12-06 中国地质大学(武汉) A Correlation Filtering Tracking Method Based on Adaptive Feature Selection
CN110544267B (en) * 2019-07-24 2022-03-15 中国地质大学(武汉) A Correlation Filter Tracking Method for Adaptive Feature Selection
CN110414439B (en) * 2019-07-30 2022-03-15 武汉理工大学 Anti-occlusion pedestrian tracking method based on multi-peak detection
CN110414439A (en) * 2019-07-30 2019-11-05 武汉理工大学 Anti-Occlusion Pedestrian Tracking Method Based on Multi-Peak Detection
CN110889863A (en) * 2019-09-03 2020-03-17 河南理工大学 A target tracking method based on target-aware correlation filtering
CN110889863B (en) * 2019-09-03 2023-03-24 河南理工大学 Target tracking method based on target perception correlation filtering
CN110555870B (en) * 2019-09-09 2021-07-27 北京理工大学 A Neural Network-based DCF Tracking Confidence Evaluation and Classifier Update Method
CN110555870A (en) * 2019-09-09 2019-12-10 北京理工大学 DCF tracking confidence evaluation and classifier updating method based on neural network
CN110689559B (en) * 2019-09-30 2022-08-12 长安大学 A Visual Object Tracking Method Based on Dense Convolutional Network Features
CN110689559A (en) * 2019-09-30 2020-01-14 长安大学 A Visual Object Tracking Method Based on Dense Convolutional Network Features
CN110807473B (en) * 2019-10-12 2023-01-03 浙江大华技术股份有限公司 Target detection method, device and computer storage medium
CN110807473A (en) * 2019-10-12 2020-02-18 浙江大华技术股份有限公司 Target detection method, device and computer storage medium
CN111161323A (en) * 2019-12-31 2020-05-15 北京理工大学重庆创新中心 A method and system for target tracking in complex scenes based on correlation filtering
CN111221770A (en) * 2019-12-31 2020-06-02 中国船舶重工集团公司第七一七研究所 Kernel correlation filtering target tracking method and system
CN111161323B (en) * 2019-12-31 2023-11-28 北京理工大学重庆创新中心 Complex scene target tracking method and system based on correlation filtering
CN111428740A (en) * 2020-02-28 2020-07-17 深圳壹账通智能科技有限公司 Method and device for detecting reproduced network photographs, computer equipment and storage medium
WO2021169625A1 (en) * 2020-02-28 2021-09-02 深圳壹账通智能科技有限公司 Method and apparatus for detecting reproduced network photograph, computer device, and storage medium
CN111476819A (en) * 2020-03-19 2020-07-31 重庆邮电大学 A long-term target tracking method based on multi-correlation filtering model
WO2021227519A1 (en) * 2020-05-15 2021-11-18 深圳市优必选科技股份有限公司 Target tracking method and apparatus, and computer-readable storage medium and robot
CN111696132A (en) * 2020-05-15 2020-09-22 深圳市优必选科技股份有限公司 Target tracking method and device, computer readable storage medium and robot
CN111696132B (en) * 2020-05-15 2023-12-29 深圳市优必选科技股份有限公司 Target tracking method, device, computer readable storage medium and robot
CN112053386A (en) * 2020-08-31 2020-12-08 西安电子科技大学 Target tracking method based on adaptive integration of deep convolutional features
CN112053386B (en) * 2020-08-31 2023-04-18 西安电子科技大学 Target tracking method based on adaptive integration of deep convolutional features
CN112200833A (en) * 2020-09-17 2021-01-08 天津城建大学 Correlation filtering video tracking algorithm based on residual network and short-term visual memory
CN112560695B (en) * 2020-12-17 2023-03-24 中国海洋大学 Underwater target tracking method, system, storage medium, equipment, terminal and application
CN112560695A (en) * 2020-12-17 2021-03-26 中国海洋大学 Underwater target tracking method, system, storage medium, equipment, terminal and application
CN113379804B (en) * 2021-07-12 2023-05-09 闽南师范大学 A UAV target tracking method, terminal equipment and storage medium
CN113379804A (en) * 2021-07-12 2021-09-10 闽南师范大学 Unmanned aerial vehicle target tracking method, terminal equipment and storage medium
CN114708300A (en) * 2022-03-02 2022-07-05 北京理工大学 Anti-occlusion adaptive target tracking method and system
CN114708300B (en) * 2022-03-02 2024-07-23 北京理工大学 An adaptive target tracking method and system capable of resisting occlusion

Similar Documents

Publication Publication Date Title
CN107154024A (en) Dimension self-adaption method for tracking target based on depth characteristic core correlation filter
CN108734151B (en) Robust long-range target tracking method based on correlation filtering and deep Siamese network
CN107316316A (en) Target tracking method based on adaptive fusion of multiple features and kernel correlation filtering
CN109816689B (en) Moving target tracking method based on adaptive fusion of multilayer convolution characteristics
CN104574445B (en) A target tracking method
CN105741316B (en) Robust target tracking method based on deep learning and multi-scale correlation filtering
CN108665481B (en) Adaptive anti-occlusion infrared target tracking method based on multi-layer deep feature fusion
CN111080675B (en) Target tracking method based on space-time constraint correlation filtering
CN104268594B (en) A kind of video accident detection method and device
CN112364931B (en) Few-sample target detection method and network system based on meta-feature and weight adjustment
CN106952288B (en) Long-term occlusion-robust tracking method based on convolutional features and global search detection
WO2020062433A1 (en) Neural network model training method and method for detecting universal grounding wire
CN107680119A (en) Tracking algorithm based on spatio-temporal context fusing multiple features and scale filter
CN110175649B (en) A fast multi-scale estimation object tracking method for re-detection
CN108062531A (en) Video object detection method based on cascaded regression convolutional neural networks
CN111462191B (en) A deep learning-based method for unsupervised optical flow estimation with non-local filters
CN107369166A (en) Target tracking method and system based on multi-resolution neural network
CN109584271A (en) High-speed correlation filtering tracking based on high-confidence update strategy
CN109461172A (en) Correlation filtering adaptive video tracking method combining hand-crafted and deep features
CN104573731A (en) Rapid target detection method based on convolutional neural network
CN103295242A (en) Target tracking method based on multi-feature joint sparse representation
CN107424177A (en) Long-term tracking algorithm with localization correction based on cascaded correlation filters
CN109410251B (en) Target tracking method based on dense connection convolution network
CN110245587B (en) A target detection method for optical remote sensing images based on Bayesian transfer learning
CN110287760A (en) A method for occlusion detection of facial features based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170912