
CN109285179A - A moving target tracking method based on multi-feature fusion - Google Patents


Info

Publication number
CN109285179A
CN109285179A (application CN201810834082.8A)
Authority
CN
China
Prior art keywords
target
feature
frame
filter
scale
Prior art date
Legal status
Granted
Application number
CN201810834082.8A
Other languages
Chinese (zh)
Other versions
CN109285179B (en)
Inventor
尚振宏
益争祝玛
Current Assignee
Kunming University of Science and Technology
Original Assignee
Kunming University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Kunming University of Science and Technology filed Critical Kunming University of Science and Technology
Priority to CN201810834082.8A priority Critical patent/CN109285179B/en
Publication of CN109285179A publication Critical patent/CN109285179A/en
Application granted granted Critical
Publication of CN109285179B publication Critical patent/CN109285179B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a moving target tracking method based on multi-feature fusion, belonging to the field of computer vision. First, in the first frame, the target region is initialized and two position filters are trained, one on the histogram-of-oriented-gradients (HOG) feature and one on the color feature. Second, detection samples of the two features are extracted around the target in each subsequent frame, and the correlation score of each detection sample with its corresponding trained position filter is computed, yielding one response map per feature. Third, according to the peak-to-sidelobe ratios of the feature response maps, the two response values are fused by weighting, and the point with the maximum response value is taken as the target's current center position. Then a scale pyramid is built from HOG features to train a scale filter, and the point with the maximum response value gives the target's current scale. Finally, the peak-to-sidelobe ratio of each frame's final response map determines whether occlusion has occurred; when the target is occluded, the position filter is not updated.

Description

A moving target tracking method based on multi-feature fusion
Technical field
The invention discloses a moving target tracking method based on multi-feature fusion, belonging to the field of computer vision.
Background technique
Target tracking is a hot topic in computer vision, widely used in video surveillance, robot learning, industrial intelligence, and so on. Its essence is to find the position and state of a target in a continuous video sequence. Although target tracking has made remarkable progress, it remains a challenging problem because of factors such as occlusion, illumination variation, and scale variation.
In recent years, owing to the remarkable results of correlation-filter algorithms, many researchers have introduced correlation filters into target-tracking frameworks, where the choice of features strongly influences tracking performance. The Minimum Output Sum of Squared Error (MOSSE) algorithm proposed by Bolme et al. tracks using only grayscale features. Henriques et al. extended the single-channel grayscale feature to multiple channels and proposed the Kernelized Correlation Filter (KCF) algorithm, which tracks the target with histogram-of-oriented-gradients (HOG) features and improves tracking accuracy. Danelljan et al. added color features, applying Principal Component Analysis (PCA) to reduce the dimensionality of the Color Names (CN) feature, with good results on color image sequences. Danelljan et al. also proposed the DSST algorithm, which builds a scale pyramid from HOG features on top of MOSSE to estimate target scale. All of the above algorithms describe the target with a single feature, which cannot express the target comprehensively, so their tracking performance varies considerably across scenes. In addition, they all update the filter model at a fixed rate in every frame; since tracking conditions differ from frame to frame, erroneous information is easily added to the target model, causing subsequent tracking to fail.
Summary of the invention
The technical problem to be solved by the present invention is to provide a moving target tracking method based on multi-feature fusion, in order to overcome the defect that a single feature cannot describe the target comprehensively and yields widely varying tracking performance across scenes, and to solve the problem that updating the filter model at a fixed rate in every frame easily adds erroneous information to the target model and causes tracking failure.
The technical solution adopted by the present invention is a moving target tracking method based on multi-feature fusion, which improves the single-feature target description and conventional model-update scheme of existing tracking methods into a multi-feature-fusion and selective-model-update method. First, in the first frame, the target region is initialized and two position filters are trained using the histogram-of-oriented-gradients (HOG) feature and the Color Names (CN) feature, respectively. Second, the two features are extracted from the target region of each new frame to obtain two detection samples, and the correlation score of each detection sample with the corresponding position filter trained in the previous step is computed, yielding one response map per feature. Third, according to the peak-to-sidelobe ratios of the feature response maps, the two response values are fused by weighting, and the point with the maximum response is taken as the target's current center. Then a scale pyramid is built from HOG features to train a scale filter, and the point with the maximum response gives the target's current scale. Finally, the peak-to-sidelobe ratio of each frame's final response map determines whether occlusion has occurred; under occlusion, the position filter is not updated.
The specific steps of the method are as follows:
Step 1: Initialize the target and select the target region.
Step 2: Extract the histogram-of-oriented-gradients feature of the target region as one training sample and the color feature of the target region as another, and train a position filter model on each of the two training samples.
Step 3: Extract the two features from the target region of the new frame to obtain two detection samples, and compute the correlation score of each detection sample with the position filter trained in the previous step, yielding one response map per feature.
Step 4: Compute the peak-to-sidelobe ratio of each feature response map, fuse the two feature response values by the corresponding weights, and take the location of the maximum as the target's current position.
Step 5: In the current target region, build a scale pyramid from HOG features to train a scale filter, and take the maximum scale response as the target's current scale.
Step 6: Update the scale filter model.
Step 7: Judge from the peak-to-sidelobe ratio of each frame's final position response map whether the target is occluded; if so, repeat Steps 3 to 6; if no occlusion occurs, go to Step 8.
Step 8: Update the position filter model.
Step 9: Repeat Steps 3 to 8 until tracking ends.
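The occlusion-gated update of Steps 6 to 8 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the PSR-threshold rule and the function names are assumptions, since the text does not give a numeric threshold.

```python
def run_update_step(psr_value, threshold, update_position, update_scale):
    # Step 6: the scale filter model is updated in every frame.
    update_scale()
    # Step 7: declare occlusion when the PSR falls below a threshold
    # (hypothetical rule -- the patent gives no numeric value).
    occluded = psr_value < threshold
    # Step 8: the position filter is updated only when not occluded.
    if not occluded:
        update_position()
    return occluded
```

The scale filter is deliberately updated before the occlusion check, matching the step order above.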
The specific steps of Step 1 are as follows:
Step 1.1: From the first frame of the input image, centered on the target position, collect an image block P twice the size of the target.
The specific steps of Step 2 are as follows:
Step 2.1: Position filters trained on different target features are applied in the same way and on the same principle; the following description takes the HOG feature as an example. The HOG feature f of P is extracted as the training sample, where the feature has d dimensions, f_l is its l-th dimension, and l ∈ {1, …, d}. Training seeks the optimal position filter h that minimizes the squared error between the filter output on the input sample and the desired output. h consists of one filter h_l per feature dimension and is obtained by minimizing the mean squared error

ε = ‖ Σ_{l=1}^{d} h_l * f_l − g ‖² + τ Σ_{l=1}^{d} ‖ h_l ‖²   (1)

where g denotes the desired output of the filter h, τ is a regularization parameter, the training sample f has d dimensions with f_l its l-th dimension, l ∈ {1, …, d}, and * denotes circular correlation. The minimizer of (1) in the frequency domain is

H_l = ( conj(G) F_l ) / ( Σ_{k=1}^{d} conj(F_k) F_k + τ ) = A_l / B   (2)

where H_l, G, F are the frequency-domain descriptions of h_l, g, f, respectively; conj(G) and conj(F) denote the complex conjugates of G and F; F_k is the k-th dimension of F, with conj(F_k) its complex conjugate; and A_l and B are the numerator and denominator of the filter h.
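Equation (2) can be sketched in numpy as follows. This is a minimal illustration under the assumption that the feature channels are already extracted and windowed; `train_filter` and its argument shapes are hypothetical names, not from the patent.

```python
import numpy as np

def train_filter(f, g, tau=0.01):
    # f: training sample of shape (H, W, d), one slice per feature channel
    # g: desired Gaussian-shaped output of shape (H, W)
    # tau: regularization parameter
    G = np.fft.fft2(g)
    F = np.fft.fft2(f, axes=(0, 1))                 # per-channel 2-D FFT
    A = np.conj(G)[:, :, None] * F                  # numerator A_l = conj(G) F_l
    B = np.sum(F * np.conj(F), axis=2).real + tau   # denominator B, shared by all l
    return A, B
```

The numerator is kept per channel and the denominator is shared, exactly as A_l and B are defined in Eq. (2).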
The specific steps of Step 3 are as follows:
Step 3.1: The calculation above yields the position filter model and completes the position-filter training. The target is then detected: the HOG feature is extracted from the target region of the new frame as the detection sample z, and the correlation score y of z with the previously HOG-trained filter h is computed, giving the response map of this feature:

y = F⁻¹{ ( Σ_{l=1}^{d} conj(A_l) Z_l ) / B }   (3)

where conj(A_l) denotes the complex conjugate of A_l, Z is the frequency-domain description of z, and Z_l is the l-th dimension of Z with l ∈ {1, …, d}. The filter responses obtained by tracking the target with the CN and HOG features are denoted y_{t,cn} and y_{t,hog}, respectively.
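The detection step of Eq. (3) can be sketched likewise. This is a minimal numpy illustration; `correlation_response` is a hypothetical name, and the numerator A and denominator B are assumed to come from the training step.

```python
import numpy as np

def correlation_response(A, B, z):
    # A: filter numerator of shape (H, W, d); B: denominator of shape (H, W)
    # z: detection sample of shape (H, W, d)
    Z = np.fft.fft2(z, axes=(0, 1))
    num = np.sum(np.conj(A) * Z, axis=2)     # sum_l conj(A_l) Z_l
    return np.real(np.fft.ifft2(num / B))    # real-valued response map y
```

The location of the maximum of the returned map gives the target displacement within the search patch.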
The specific steps of Step 4 are as follows:
Step 4.1: At frame t, compute the peak-to-sidelobe ratios of the CN and HOG feature response maps, denoted PSR_{t,cn} and PSR_{t,hog}.
Step 4.2: Compute the normalized weights w_{t,cn} and w_{t,hog} of the CN and HOG features at frame t.
Step 4.3: Feature fusion is performed at the response level. At frame t, the two position-filter responses obtained by training on the CN and HOG features are denoted y_{t,cn} and y_{t,hog}, and the fused response y_t is obtained by the weighting

y_t = w_{t,cn} × y_{t,cn} + w_{t,hog} × y_{t,hog}   (6)

Step 4.4: Compute the maximum of y_t to obtain the final target position in frame t.
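The response-level fusion of Eq. (6) can be sketched as follows. The PSR-proportional normalization of the weights is an assumption, since the normalization formulas of Steps 4.2 are not reproduced in this text.

```python
import numpy as np

def fuse_responses(y_cn, y_hog, psr_cn, psr_hog):
    # Normalized weights: each PSR divided by the sum of both
    # (an assumed form of Steps 4.1-4.2; not stated in the text).
    w_cn = psr_cn / (psr_cn + psr_hog)
    w_hog = psr_hog / (psr_cn + psr_hog)
    # Eq. (6): y_t = w_cn * y_cn + w_hog * y_hog
    return w_cn * y_cn + w_hog * y_hog
```

A response map with a sharper, more confident peak (higher PSR) thus contributes more to the fused map.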
The specific steps of Step 5 are as follows:
Step 5.1: After the target position is determined, 33 image layers forming a scale pyramid are cropped, centered on the new target position, and the HOG features of these layers are extracted to train a scale filter H_s for estimating the target scale. The scale filter is applied in the same way and on the same principle as the position filter h, so H_s is obtained by the calculation of Eq. (2).
Step 5.2: In a new frame, to obtain the target scale, compute y_s by Eq. (3) and take its maximum to determine the target's current scale.
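The 33-layer pyramid of Step 5.1 amounts to sampling candidate patch sizes around the current target size. A sketch follows; the scale step 1.02 is the DSST default and an assumption here, as the text gives no value.

```python
def scale_pyramid_sizes(w, h, n_scales=33, step=1.02):
    # Candidate sizes step**s * (w, h) with exponent s symmetric around 0,
    # so the middle layer is exactly the current target size.
    half = (n_scales - 1) / 2
    return [(w * step ** (s - half), h * step ** (s - half))
            for s in range(n_scales)]
```

Each candidate patch is then resized to a common size before HOG extraction, so the scale filter sees fixed-length samples.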
The specific steps of Step 6 are as follows:
Step 6.1: The scale filter model is updated in every frame with a fixed learning rate η:

A_{t,s}^l = (1 − η) A_{t−1,s}^l + η conj(G_{t,s}) F_{t,s}^l,  B_{t,s} = (1 − η) B_{t−1,s} + η Σ_{k=1}^{d} conj(F_{t,s}^k) F_{t,s}^k

where A_{t,s}^l and B_{t,s} denote the numerator of the l-th dimension and the denominator of the scale filter at frame t; A_{t−1,s}^l and B_{t−1,s} are the numerator and denominator of the scale filter model of the previous frame; conj(G_{t,s}) denotes the complex conjugate of the frequency-domain description of the desired scale-filter output at frame t; F_{t,s}^l denotes the frequency-domain description of the l-th dimension of the scale-filter training sample at frame t; and conj(F_{t,s}^k) denotes the complex conjugate of its k-th dimension, k ∈ {1, …, d}.
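The fixed-learning-rate update applies the same linear interpolation to numerator and denominator. A sketch follows; η = 0.025 is a typical correlation-filter value and an assumption, as the text gives none.

```python
def update_model(A_prev, B_prev, A_new, B_new, eta=0.025):
    # Linear interpolation between the previous model and the model
    # trained on the current frame, with fixed learning rate eta.
    A = (1 - eta) * A_prev + eta * A_new
    B = (1 - eta) * B_prev + eta * B_new
    return A, B
```

The same routine serves both the scale filter (updated every frame) and the position filter (updated only when no occlusion is detected in Step 7).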
The specific steps of Step 7 are as follows:
Step 7.1: The PSR value is the occlusion criterion used to decide whether the position filter model needs to be updated. If occlusion occurs, the position filter model is not updated; otherwise it is updated. This reduces the influence of occlusion on target tracking.
The specific steps of Step 8 are as follows:
Step 8.1: When the target is judged not to be occluded, the position filter model is updated with a fixed learning rate η:

A_t^l = (1 − η) A_{t−1}^l + η conj(G_t) F_t^l,  B_t = (1 − η) B_{t−1} + η Σ_{k=1}^{d} conj(F_t^k) F_t^k

where A_t^l and B_t denote the numerator of the l-th dimension and the denominator of the position filter h at frame t; A_{t−1}^l and B_{t−1} are the numerator and denominator of the position filter model of the previous frame; conj(G_t) denotes the complex conjugate of the frequency-domain description of the desired output of the position filter h at frame t; F_t^l denotes the frequency-domain description of the l-th dimension of its training sample at frame t; and conj(F_t^k) denotes the complex conjugate of the k-th dimension, k ∈ {1, …, d}.
The specific steps of Step 9 are as follows:
Step 9.1: At this point the algorithm has finished running on the second frame, and the target position, scale, and all filter models have been updated; each subsequent frame reruns Steps 3 to 8 until the video ends.
The beneficial effects of the present invention are:
1. A moving target tracking method using multi-feature fusion
If the target is described with a single feature only (the HOG feature or a color feature), problems arise. The HOG feature is a local feature of the image and adapts well to slight deformation of the target and to illumination variation, but when the target undergoes large deformation or occlusion, mistracking or lost tracking may occur. The color feature, an important perceptual feature by which humans recognize images, is a pixel-based global feature that is insensitive to target rotation, translation, and scale variation, but it cannot describe the target's local structure well and cannot adapt to illumination variation. The present invention therefore fuses the two features to describe the target model, obtaining the target's local features along with its global features and improving the accuracy of target detection.
2. A target tracking method using a selective model-update strategy
The present invention is based on a correlation-filter tracking algorithm. General correlation-filter tracking algorithms update the target model at a fixed rate in every frame; if the target is occluded, continuing to update the model adds incorrect information to it and leads to tracking failure. To improve tracking performance, a strategy is proposed in which the model is updated only when certain conditions are met: whether the target is occluded decides whether the model is updated, reducing the influence of occlusion on tracking and thereby improving the stability of the algorithm.
3. Target scale estimated by building a scale pyramid to train a scale filter
If the tracking box is fixed during motion, only part of the target is captured when the target grows, and interfering background information is easily introduced when the target shrinks, degrading tracking precision. To solve this problem, the present invention estimates the target scale by building a scale pyramid to train a scale filter, addressing the large scale variation of the moving target and greatly reducing the erroneous information introduced during tracking by a fixed tracking box.
In short, the moving target tracking method based on multi-feature fusion combines the attribute information of multiple features, describing the target with multiple features and updating the model selectively. First, multiple features describe the target more completely: the target's local features are obtained along with its global features, improving the accuracy of target detection. Second, the target scale is updated adaptively by building a scale pyramid. Finally, the peak-to-sidelobe ratio of the response map is used to update the target model adaptively, improving the validity of the model.
Description of the drawings
Fig. 1 is the flowchart of the method of the present invention.
Specific embodiment
The present invention is further described below with reference to the drawings and a specific embodiment.
Embodiment 1: As shown in Fig. 1, the specific steps of the moving target tracking method based on multi-feature fusion are as follows:
Step 1: Initialize the target and select the target region.
Step 2: Extract the histogram-of-oriented-gradients (Histogram of Oriented Gradients, HOG) feature of the target region as one training sample and the color (Color Name, CN) feature of the target region as another, and train a position filter model on each of the two training samples.
Step 3: Extract the two features from the target region of the new frame to obtain two detection samples, and compute the correlation score of each detection sample with the position filter trained in the previous step, yielding one response map per feature.
Step 4: Compute the peak-to-sidelobe ratio of each feature response map, fuse the two feature response values by the corresponding weights, and take the location of the maximum as the target's current position.
Step 5: In the current target region, build a scale pyramid from HOG features to train a scale filter, and take the maximum scale response as the target's current scale.
Step 6: Update the scale filter model.
Step 7: Judge from the peak-to-sidelobe ratio of each frame's final position response map whether the target is occluded; if so, repeat Steps 3 to 6; if no occlusion occurs, go to Step 8.
Step 8: Update the position filter model.
Step 9: Repeat Steps 3 to 8 until tracking ends.
The specific steps of Step 1 are as follows:
Step 1.1: From the first frame of the input image, centered on the target position, collect an image block P twice the size of the target.
The specific steps of Step 2 are as follows:
Step 2.1: Position filters trained on different target features are applied in the same way and on the same principle. The HOG feature comprises 27 gradient dimensions plus one grayscale dimension, 28 dimensions in total; the CN feature is reduced from 11 dimensions to 2. The following description takes the HOG feature as an example. The HOG feature f of P is extracted as the training sample, where the feature has d dimensions, f_l is its l-th dimension, and l ∈ {1, …, d}. Training seeks the optimal position filter h that minimizes the squared error between the filter output on the input sample and the desired output. h consists of one filter h_l per feature dimension and is obtained by minimizing the mean squared error

ε = ‖ Σ_{l=1}^{d} h_l * f_l − g ‖² + τ Σ_{l=1}^{d} ‖ h_l ‖²   (1)

where g denotes the desired output of the filter h, τ is a regularization parameter, the training sample f has d dimensions with f_l its l-th dimension, l ∈ {1, …, d}, and * denotes circular correlation. The minimizer of (1) in the frequency domain is

H_l = ( conj(G) F_l ) / ( Σ_{k=1}^{d} conj(F_k) F_k + τ ) = A_l / B   (2)

where H_l, G, F are the frequency-domain descriptions of h_l, g, f, respectively; conj(G) and conj(F) denote the complex conjugates of G and F; F_k is the k-th dimension of F, with conj(F_k) its complex conjugate; and A_l and B are the numerator and denominator of the filter h.
The specific steps of Step 3 are as follows:
Step 3.1: The calculation above yields the position filter model and completes the position-filter training. The target is then detected: the HOG feature is extracted from the target region of the new frame as the detection sample z, and the correlation score y of z with the previously HOG-trained filter h is computed, giving the response map of this feature:

y = F⁻¹{ ( Σ_{l=1}^{d} conj(A_l) Z_l ) / B }   (3)

where conj(A_l) denotes the complex conjugate of A_l, Z is the frequency-domain description of z, and Z_l is the l-th dimension of Z with l ∈ {1, …, d}. The filter responses obtained by tracking the target with the CN and HOG features are denoted y_{t,cn} and y_{t,hog}, respectively.
The specific steps of Step 4 are as follows:
Step 4.1: At frame t, compute the peak-to-sidelobe ratios of the CN and HOG feature response maps, denoted PSR_{t,cn} and PSR_{t,hog}.
Step 4.2: Compute the normalized weights w_{t,cn} and w_{t,hog} of the CN and HOG features at frame t.
Step 4.3: Feature fusion is performed at the response level. At frame t, the two position-filter responses obtained by training on the CN and HOG features are denoted y_{t,cn} and y_{t,hog}, and the fused response y_t is obtained by the weighting

y_t = w_{t,cn} × y_{t,cn} + w_{t,hog} × y_{t,hog}   (6)

Step 4.4: Compute the maximum of y_t to obtain the final target position in frame t.
The specific steps of Step 5 are as follows:
Step 5.1: After the target position is determined, 33 image layers forming a scale pyramid are cropped, centered on the new target position, and the HOG features of these layers are extracted to train a scale filter H_s for estimating the target scale. The scale filter is applied in the same way and on the same principle as the position filter h, so H_s is obtained by the calculation of Eq. (2).
Step 5.2: In a new frame, to obtain the target scale, compute y_s by Eq. (3) and take its maximum to determine the target's current scale.
The specific steps of Step 6 are as follows:
Step 6.1: The scale filter model is updated in every frame with a fixed learning rate η:

A_{t,s}^l = (1 − η) A_{t−1,s}^l + η conj(G_{t,s}) F_{t,s}^l,  B_{t,s} = (1 − η) B_{t−1,s} + η Σ_{k=1}^{d} conj(F_{t,s}^k) F_{t,s}^k

where A_{t,s}^l and B_{t,s} denote the numerator of the l-th dimension and the denominator of the scale filter at frame t; A_{t−1,s}^l and B_{t−1,s} are the numerator and denominator of the scale filter model of the previous frame; conj(G_{t,s}) denotes the complex conjugate of the frequency-domain description of the desired scale-filter output at frame t; F_{t,s}^l denotes the frequency-domain description of the l-th dimension of the scale-filter training sample at frame t; and conj(F_{t,s}^k) denotes the complex conjugate of its k-th dimension, k ∈ {1, …, d}.
The specific steps of Step 7 are as follows:
Step 7.1: Occlusion detection by the PSR value decides whether the position filter model is updated, reducing the influence of occlusion on target tracking. At frame t the PSR is computed as

PSR_t = (y_{t,max} − μ_t) / σ_t

where PSR_t denotes the peak-to-sidelobe ratio at frame t, y_{t,max} is the peak of the response map of frame t, and μ_t and σ_t are the mean and standard deviation of the region surrounding the peak-response position at frame t. The larger PSR_t is, the sharper the peak in the response distribution and the higher the target confidence.
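The PSR of Step 7.1 can be sketched as follows. This is a minimal numpy illustration; the size of the excluded window around the peak is an assumption, since the text does not specify the extent of the "surrounding region".

```python
import numpy as np

def peak_to_sidelobe_ratio(y, exclude=5):
    # PSR_t = (y_max - mu) / sigma, where mu and sigma are the mean and
    # standard deviation of the sidelobe: the map with a window around
    # the peak excluded (the 11x11 exclusion size is an assumption).
    py, px = np.unravel_index(np.argmax(y), y.shape)
    peak = y[py, px]
    mask = np.ones(y.shape, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    side = y[mask]
    return (peak - side.mean()) / (side.std() + 1e-8)
```

A sharp, isolated peak yields a large PSR (high confidence); a flat or multi-modal map, typical of occlusion, yields a small one.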
The specific steps of Step 8 are as follows:
Step 8.1: When the target is judged not to be occluded, the position filter model is updated with a fixed learning rate η:

A_t^l = (1 − η) A_{t−1}^l + η conj(G_t) F_t^l,  B_t = (1 − η) B_{t−1} + η Σ_{k=1}^{d} conj(F_t^k) F_t^k

where A_t^l and B_t denote the numerator of the l-th dimension and the denominator of the position filter h at frame t; A_{t−1}^l and B_{t−1} are the numerator and denominator of the position filter model of the previous frame; conj(G_t) denotes the complex conjugate of the frequency-domain description of the desired output of the position filter h at frame t; F_t^l denotes the frequency-domain description of the l-th dimension of its training sample at frame t; and conj(F_t^k) denotes the complex conjugate of the k-th dimension, k ∈ {1, …, d}.
The specific steps of Step 9 are as follows:
Step 9.1: At this point the algorithm has finished running on the second frame, and the target position, scale, and all filter models have been updated; each subsequent frame reruns Steps 3 to 8 until the video ends.
The present invention fuses the complementary properties of the HOG feature and the CN color feature to describe the target model, obtaining the target's local features along with its global features and improving the accuracy of target detection. At the same time, the peak-to-sidelobe ratio of each frame's final target response map is used to judge whether the target is occluded and hence whether to update the model, reducing the influence of occlusion on tracking and improving the stability of the algorithm.
The embodiment of the present invention has been described in detail above with reference to the drawings, but the present invention is not limited to the above embodiment; various changes can be made within the knowledge of a person skilled in the art without departing from the concept of the present invention.

Claims (11)

1.一种基于多特征融合的运动目标跟踪方法,其特征在于:包括如下步骤:首先在第一帧图像中,初始化目标区域,分别利用方向直方图和颜色特征训练两个位置滤波器;其次在后续帧目标周围提取两种特征的检测样本,分别计算两个检测样本与上一步中训练得到的各自位置滤波器的相关得分,即得到不同特征的响应图;再次根据不同特征响应图的峰值旁瓣比,加权融合两种特征响应值,选取响应值最大的点作为目标当前中心位置;然后利用方向梯度直方图特征构建尺度金字塔训练尺度滤波器,并得到响应值最大点作为目标当前尺度;最后根据每帧最终响应图的峰值旁边比,判断是否发生遮挡,遮挡情况下,不更新位置滤波器。1. a moving target tracking method based on multi-feature fusion, is characterized in that: comprise the steps: at first in the first frame image, initialize the target area, utilize the direction histogram and the color feature to train two position filters respectively; The detection samples of two kinds of features are extracted around the target in the subsequent frame, and the correlation scores of the two detection samples and the respective position filters trained in the previous step are calculated respectively, that is, the response maps of different features are obtained; Sidelobe ratio, weighted fusion of two feature response values, select the point with the largest response value as the current center position of the target; then use the directional gradient histogram feature to build a scale pyramid to train the scale filter, and obtain the point with the largest response value as the target current scale; Finally, according to the peak-to-side ratio of the final response map of each frame, it is judged whether occlusion occurs. In the case of occlusion, the position filter is not updated. 2.根据权利要求1所述的基于多特征融合的运动目标跟踪方法,其特征在于:所述方法的具体步骤如下:2. the moving target tracking method based on multi-feature fusion according to claim 1, is characterized in that: the concrete steps of described method are as follows: Step1、初始化目标并选取目标区域;Step1, initialize the target and select the target area; Step2、提取目标区域方向梯度直方图特征作为训练样本,同时提取目标区域颜色特征作为另一个训练样本,用两个训练样本训练各自的位置滤波器模型;Step 2. 
Extract the directional gradient histogram feature of the target area as a training sample, and extract the color feature of the target area as another training sample, and use the two training samples to train the respective position filter models; Step3、在新一帧目标区域中分别提取两种特征得到两个检测样本,分别计算两个检测样本与上一步中训练得到的各自位置滤波器的相关得分,即得到不同特征的响应图;Step 3. Extract two types of features in the target area of a new frame to obtain two detection samples, respectively calculate the correlation scores between the two detection samples and the respective position filters trained in the previous step, that is, to obtain response maps of different features; Step4、计算不同特征的响应图的峰值旁瓣比,根据其加权融合两种特征响应值,选取最大值处作为目标当前位置;Step4: Calculate the peak-to-side lobe ratio of the response maps of different features, fuse the two feature response values according to their weights, and select the maximum value as the current position of the target; Step5、在当前目标位置区域,通过HOG特征构建尺度金字塔训练尺度滤波器,获取尺度响应最大值作为目标当前尺度;Step 5. In the current target location area, build a scale pyramid to train a scale filter through the HOG feature, and obtain the maximum scale response as the target current scale; Step6、更新尺度滤波器模型;Step6, update the scale filter model; Step7、根据每帧最终位置响应图的峰值旁瓣比,判断目标是否发生遮挡,如果是重复步骤3至6,如果没有出现遮挡进入步骤8;Step7. According to the peak sidelobe ratio of the final position response map of each frame, determine whether the target is occluded. If it is, repeat steps 3 to 6. If there is no occlusion, go to step 8; Step8、更新位置滤波器模型;Step8, update the position filter model; Step9、重复步骤3至8直到跟踪结束。Step9. Repeat steps 3 to 8 until the tracking ends. 3.根据权利要求2所述的基于多特征融合的运动目标跟踪方法,其特征在于:所述步骤Step1的具体步骤如下:3. the moving target tracking method based on multi-feature fusion according to claim 2, is characterized in that: the concrete steps of described step Step1 are as follows: Step1.1、根据所述输入图像第一帧,以目标位置为中心,采集一个尺寸为目标2倍大小的图像块P。Step1.1. 
According to the first frame of the input image, taking the target position as the center, collect an image block P whose size is twice the size of the target. 4.根据权利要求3所述的基于多特征融合的运动目标跟踪方法,其特征在于:所述步骤Step2的具体步骤如下:4. the moving target tracking method based on multi-feature fusion according to claim 3, is characterized in that: the concrete steps of described step Step2 are as follows: Step2.1、采用目标不同特征训练得到的位置滤波器的应用方式和原理相同,以下都将以HOG特征为例进行描述,提取P的HOG特征f作为训练样本,其中特征的维度为d维,fl是其中的第l维,l∈{1,…,d},样本训练是为了找到最优的位置滤波器h,使输入样本和需要输出之间平方误差最小,h由特征每个维度训练的滤波器hl组成,h通过如下最小化均方差求得:Step2.1. The application method and principle of the position filter obtained by training with different features of the target are the same. The following will take the HOG feature as an example to describe, extract the HOG feature f of P as a training sample, and the dimension of the feature is d dimension, f l is the lth dimension, l∈{1,...,d}, the sample training is to find the optimal position filter h, so as to minimize the squared error between the input sample and the desired output, h is determined by each dimension of the feature The trained filters hl are composed of h l , and h is obtained by minimizing the mean square error as follows: 式中,g表示滤波器h的期望输出,τ为正则化参数,训练样本f共有d维,fl是其中的第l维,l∈{1,…,d},*表示循环相关,(1)式的最小值在频域里的解如下:In the formula, g represents the expected output of the filter h, τ is the regularization parameter, the training sample f has a total of d dimensions, f l is the lth dimension, l∈{1,...,d}, * denotes the cyclic correlation, ( The solution of the minimum value of Eq. 1) in the frequency domain is as follows: 式中,Hl、G、F分别为是hl、g、f的频域描述,分别表示G、F的共轭复数,Fk是F的第k维,其中k∈{1,…,d},是Fk的共轭复数,Al、B为滤波器h的分子分母。where H l , G and F are the frequency domain descriptions of h l , g and f, respectively, represent the complex conjugates of G and F, respectively, and F k is the kth dimension of F, where k∈{1,…,d}, is the complex conjugate of F k , and A l and B are the numerator and denominator of the filter h. 
5. The moving target tracking method based on multi-feature fusion according to claim 4, characterized in that Step 3 comprises the following concrete steps:

Step 3.1. The calculation above yields the position filter model and completes the training of the position filter; the target is now detected. Extract the HOG feature from the target region of the new frame as the detection sample z, and compute its correlation score y with the filter h previously trained on the HOG feature, i.e. the response map of that feature:

$$y = \mathcal{F}^{-1}\left\{ \frac{\sum_{l=1}^{d} \bar{A}^{l} Z^{l}}{B + \tau} \right\} \qquad (3)$$

where \bar{A}^l is the complex conjugate of A^l, Z is the frequency-domain description of z, and Z^l is the l-th dimension of Z, l ∈ {1, …, d}. The filter responses obtained by tracking the target with the CN and HOG features separately are denoted y_{t,cn} and y_{t,hog}.

6. The moving target tracking method based on multi-feature fusion according to claim 5, characterized in that Step 4 comprises the following concrete steps:

Step 4.1. At frame t, compute the peak-to-sidelobe ratios of the CN and HOG response maps, denoted PSR_{t,cn} and PSR_{t,hog};

Step 4.2. Compute the normalized weights w_{t,cn} and w_{t,hog} of the CN and HOG features at frame t:

$$w_{t,cn} = \frac{PSR_{t,cn}}{PSR_{t,cn} + PSR_{t,hog}} \qquad (4) \qquad\qquad w_{t,hog} = \frac{PSR_{t,hog}}{PSR_{t,cn} + PSR_{t,hog}} \qquad (5)$$

Step 4.3. Feature fusion is performed at the response level.
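A minimal sketch of Steps 4.1 through 4.4, assuming the common MOSSE-style PSR definition (peak minus sidelobe mean, over sidelobe standard deviation, with an 11×11 window around the peak excluded); the exclusion size and function names are illustrative assumptions:

```python
import numpy as np

def psr(response, exclude=5):
    """Peak-to-sidelobe ratio of a response map: (peak - sidelobe mean) /
    sidelobe std, where the sidelobe excludes a (2*exclude+1)^2 window
    around the peak."""
    r = np.asarray(response, dtype=float)
    py, px = np.unravel_index(np.argmax(r), r.shape)
    mask = np.ones(r.shape, dtype=bool)
    mask[max(0, py - exclude):py + exclude + 1,
         max(0, px - exclude):px + exclude + 1] = False
    side = r[mask]
    return (r.max() - side.mean()) / (side.std() + 1e-12)

def fuse_responses(y_cn, y_hog, psr_cn, psr_hog):
    """PSR-normalized weights, weighted fusion of the two feature
    responses, and the fused peak location taken as the target position."""
    w_cn = psr_cn / (psr_cn + psr_hog)
    w_hog = psr_hog / (psr_cn + psr_hog)
    y = w_cn * y_cn + w_hog * y_hog
    return y, np.unravel_index(np.argmax(y), y.shape)
```

A sharp single peak yields a much higher PSR than a noisy, multi-modal map, which is what lets the fusion down-weight the feature that is currently unreliable.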
At frame t, the two position filter responses obtained by training with the CN and HOG features respectively, y_{t,cn} and y_{t,hog}, are fused with the following weighting:

$$y_{t} = w_{t,cn} \times y_{t,cn} + w_{t,hog} \times y_{t,hog} \qquad (6)$$

Step 4.4. Compute the maximum of y_t to obtain the final position of the target in frame t.

7. The moving target tracking method based on multi-feature fusion according to claim 6, characterized in that Step 5 comprises the following concrete steps:

Step 5.1. Once the target position is determined, centred on the new target position, crop the 33 image layers that make up the scale pyramid, and extract the HOG features of these layers to train a scale filter H_s for estimating the target scale; the scale filter is applied in the same way and on the same principle as the position filter h, so H_s is obtained by the calculation of formula (2);

Step 5.2. In the new frame, to obtain the target scale, compute y_s by the calculation of formula (3) and take its maximum to determine the current target scale.

8. The moving target tracking method based on multi-feature fusion according to claim 7, characterized in that Step 6 comprises the following concrete steps:

Step 6.1. Update the scale filter model with a fixed learning rate η.
The update formula is

$$A^{l}_{t,s} = (1-\eta)\, A^{l}_{t-1,s} + \eta\, \bar{G}_{t,s} F^{l}_{t,s} \qquad B_{t,s} = (1-\eta)\, B_{t-1,s} + \eta \sum_{k=1}^{d} \bar{F}^{k}_{t,s} F^{k}_{t,s} \qquad (7)$$

where the scale filter is updated in every frame; A^l_{t,s} and B_{t,s} are the l-th-dimension numerator and the denominator of the scale filter at frame t, A^l_{t-1,s} and B_{t-1,s} are the numerator and denominator of the scale filter model of the previous frame, \bar{G}_{t,s} is the complex conjugate of the frequency-domain description of the desired output of the scale filter at frame t, F^l_{t,s} is the frequency-domain description of the l-th dimension of the training sample of the scale filter at frame t, and \bar{F}^k_{t,s} is the complex conjugate of the frequency-domain description of its k-th dimension, k ∈ {1, …, d}.

9. The moving target tracking method based on multi-feature fusion according to claim 8, characterized in that Step 7 comprises the following concrete steps:

Step 7.1. The PSR value is the criterion for judging target occlusion and determines whether the position filter model needs to be updated: if occlusion occurs, the position filter model is not updated; otherwise the position filter model is updated, which reduces the influence of occlusion on target tracking.

10. The moving target tracking method based on multi-feature fusion according to claim 9, characterized in that Step 8 comprises the following concrete steps:

Step 8.1. When the target is judged not to be occluded, update the position filter model with a fixed learning rate η.
The update formula is

$$A^{l}_{t} = (1-\eta)\, A^{l}_{t-1} + \eta\, \bar{G}_{t} F^{l}_{t} \qquad B_{t} = (1-\eta)\, B_{t-1} + \eta \sum_{k=1}^{d} \bar{F}^{k}_{t} F^{k}_{t} \qquad (8)$$

where A^l_t and B_t are the l-th-dimension numerator and the denominator of the position filter h at frame t, A^l_{t-1} and B_{t-1} are the numerator and denominator of the position filter model of the previous frame, \bar{G}_t is the complex conjugate of the frequency-domain description of the desired output of the position filter h at frame t, F^l_t is the frequency-domain description of the l-th dimension of the training sample of the position filter h at frame t, and \bar{F}^k_t is the complex conjugate of the frequency-domain description of its k-th dimension, k ∈ {1, …, d}.

11. The moving target tracking method based on multi-feature fusion according to claim 10, characterized in that Step 9 comprises the following concrete steps:

Step 9.1. At this point the algorithm has finished processing the second frame: the target position, the target scale and all filter models have been updated. Steps 3 to 8 are repeated for each subsequent frame until the video ends.
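The PSR-gated position filter update of Steps 7 and 8 can be sketched as follows; the occlusion threshold and learning rate values are illustrative assumptions (the claims specify neither):

```python
import numpy as np

# Hypothetical occlusion threshold: the claims only state that a PSR-based
# test is used, not its value.
PSR_OCCLUSION_THRESHOLD = 6.0

def update_position_filter(A_prev, B_prev, f_new, g, psr_value, eta=0.025):
    """Running-average update of the position filter numerator A^l and
    denominator B, applied only when the PSR of the fused response
    indicates the target is not occluded.

    A_prev : (d, H, W) complex numerators from the previous frame
    B_prev : (H, W) real denominator from the previous frame
    f_new  : (d, H, W) feature channels of the new training sample
    g      : (H, W) desired output for the new frame
    """
    if psr_value < PSR_OCCLUSION_THRESHOLD:   # occlusion suspected: freeze model
        return A_prev, B_prev
    F = np.fft.fft2(f_new, axes=(-2, -1))
    G = np.fft.fft2(g)
    A = (1 - eta) * A_prev + eta * np.conj(G)[None, :, :] * F
    B = (1 - eta) * B_prev + eta * np.sum(np.conj(F) * F, axis=0).real
    return A, B
```

The scale filter uses the same linear-interpolation update, but without the occlusion gate, since it is refreshed in every frame.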
CN201810834082.8A 2018-07-26 2018-07-26 A moving target tracking method based on multi-feature fusion Active CN109285179B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810834082.8A CN109285179B (en) 2018-07-26 2018-07-26 A moving target tracking method based on multi-feature fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810834082.8A CN109285179B (en) 2018-07-26 2018-07-26 A moving target tracking method based on multi-feature fusion

Publications (2)

Publication Number Publication Date
CN109285179A true CN109285179A (en) 2019-01-29
CN109285179B CN109285179B (en) 2021-05-14

Family

ID=65182784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810834082.8A Active CN109285179B (en) 2018-07-26 2018-07-26 A moving target tracking method based on multi-feature fusion

Country Status (1)

Country Link
CN (1) CN109285179B (en)



Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107209854A (en) * 2015-09-15 2017-09-26 深圳市大疆创新科技有限公司 Systems and methods for supporting smooth object following
WO2017170876A1 (en) * 2016-03-30 2017-10-05 株式会社エクォス・リサーチ Image recognition device, mobile device and image recognition program
CN106485732A (en) * 2016-09-09 2017-03-08 南京航空航天大学 A kind of method for tracking target of video sequence
CN106570486A (en) * 2016-11-09 2017-04-19 华南理工大学 Kernel correlation filtering target tracking method based on feature fusion and Bayesian classification
CN106887011A (en) * 2017-01-20 2017-06-23 北京理工大学 A kind of multi-template method for tracking target based on CNN and CF
CN107316316A (en) * 2017-05-19 2017-11-03 南京理工大学 The method for tracking target that filtering technique is closed with nuclear phase is adaptively merged based on multiple features
CN107644430A (en) * 2017-07-27 2018-01-30 孙战里 Target following based on self-adaptive features fusion
CN107578423A (en) * 2017-09-15 2018-01-12 杭州电子科技大学 The correlation filtering robust tracking method of multiple features hierarchical fusion
CN108062764A (en) * 2017-11-30 2018-05-22 极翼机器人(上海)有限公司 A kind of object tracking methods of view-based access control model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YANG RUAN et al.: "Extended kernelised correlation filter tracking", Electronics Letters *
LU GUOZHI et al.: "Robust correlation filter tracking with hierarchical fusion of multiple features", Journal of Image and Graphics *

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009664A (en) * 2019-04-11 2019-07-12 南京航空航天大学金城学院 A method and device for infrared target tracking based on response graph fusion
CN110009664B (en) * 2019-04-11 2021-08-27 南京航空航天大学金城学院 Infrared target tracking method and device based on response map fusion
CN111860532A (en) * 2019-04-25 2020-10-30 四川大学 Adaptive Target Tracking Method Based on Two Complementary Tracking Algorithms
CN110246155A (en) * 2019-05-17 2019-09-17 华中科技大学 One kind being based on the alternate anti-shelter target tracking of model and system
CN110246155B (en) * 2019-05-17 2021-05-18 华中科技大学 Anti-occlusion target tracking method and system based on model alternation
CN110348492A (en) * 2019-06-24 2019-10-18 昆明理工大学 A kind of correlation filtering method for tracking target based on contextual information and multiple features fusion
CN110414439A (en) * 2019-07-30 2019-11-05 武汉理工大学 Anti-occlusion pedestrian tracking method based on multi-peak detection
CN110414439B (en) * 2019-07-30 2022-03-15 武汉理工大学 Anti-blocking pedestrian tracking method based on multi-peak detection
CN110428450B (en) * 2019-08-01 2021-11-16 中国矿业大学 Scale-adaptive target tracking method applied to mine tunnel mobile inspection image
CN110428450A (en) * 2019-08-01 2019-11-08 中国矿业大学 Dimension self-adaption method for tracking target applied to the mobile inspection image of mine laneway
CN110569723A (en) * 2019-08-02 2019-12-13 西安工业大学 A Target Tracking Method Combining Feature Fusion and Model Update
CN110533690B (en) * 2019-08-08 2022-02-11 河海大学 Nuclear correlation filtering underwater target tracking method based on feature fusion and learning rate optimization
CN110533690A (en) * 2019-08-08 2019-12-03 河海大学 The core correlation filtering Method for Underwater Target Tracking of Fusion Features and learning rate optimization
CN110660077A (en) * 2019-09-12 2020-01-07 昆明理工大学 Multi-scale target tracking method fusing multiple features
CN110929620B (en) * 2019-11-15 2023-04-07 浙江大华技术股份有限公司 Target tracking method and device and storage device
CN110929620A (en) * 2019-11-15 2020-03-27 浙江大华技术股份有限公司 Target tracking method and device and storage device
CN111161324B (en) * 2019-11-20 2023-06-23 山东工商学院 A Target Tracking Method Based on Adaptive Multimodal Update Strategy
CN111161324A (en) * 2019-11-20 2020-05-15 山东工商学院 A Target Tracking Method Based on Adaptive Multimodal Update Strategy
CN111260689A (en) * 2020-01-16 2020-06-09 东华大学 An Effective Confidence Enhanced Correlation Filtering Visual Tracking Algorithm
CN111260689B (en) * 2020-01-16 2022-10-11 东华大学 Confidence enhancement-based correlation filtering visual tracking method
CN111340846B (en) * 2020-02-25 2023-02-17 重庆邮电大学 Multi-feature fusion anti-occlusion target tracking method
CN111340846A (en) * 2020-02-25 2020-06-26 重庆邮电大学 An anti-occlusion target tracking method based on multi-feature fusion
CN111814734A (en) * 2020-07-24 2020-10-23 南方电网数字电网研究院有限公司 How to identify the state of the knife switch
CN111814734B (en) * 2020-07-24 2024-01-26 南方电网数字电网研究院有限公司 How to identify the status of the knife gate
CN112200829A (en) * 2020-09-07 2021-01-08 慧视江山科技(北京)有限公司 Target tracking method and device based on correlation filtering method
CN111931722A (en) * 2020-09-23 2020-11-13 杭州视语智能视觉系统技术有限公司 Correlated filtering tracking method combining color ratio characteristics
CN111931722B (en) * 2020-09-23 2021-02-12 杭州视语智能视觉系统技术有限公司 Correlated filtering tracking method combining color ratio characteristics
CN112364891A (en) * 2020-10-20 2021-02-12 西安理工大学 Sample classification-based self-adaptive decontamination method
CN112598710B (en) * 2020-12-25 2024-03-12 杭州电子科技大学 Spatiotemporal correlation filter target tracking method based on online feature selection
CN112598710A (en) * 2020-12-25 2021-04-02 杭州电子科技大学 Space-time correlation filtering target tracking method based on feature online selection
CN112785622A (en) * 2020-12-30 2021-05-11 大连海事大学 Long-time tracking method and device for unmanned ship on water surface and storage medium
CN112767437A (en) * 2020-12-30 2021-05-07 大连海事大学 Water surface unmanned ship tracking method, system and storage medium based on KCF self-adaptive multi-feature fusion filtering
CN112785622B (en) * 2020-12-30 2024-04-05 大连海事大学 Method and device for tracking unmanned captain on water surface and storage medium
CN113920168A (en) * 2021-11-02 2022-01-11 中音讯谷科技有限公司 Image tracking method in audio and video control equipment
CN113920168B (en) * 2021-11-02 2024-09-03 中音讯谷科技有限公司 Image tracking method in audio and video control device
CN114612508A (en) * 2022-02-28 2022-06-10 桂林电子科技大学 Anti-occlusion related filtering target tracking method for multi-feature online learning
CN115049706A (en) * 2022-06-30 2022-09-13 北京理工大学 Long-term target tracking method and system based on improved Staple
CN115049706B (en) * 2022-06-30 2025-04-01 北京理工大学 A long-term target tracking method and system based on improved Staple
CN115661197A (en) * 2022-10-14 2023-01-31 东华大学 Target tracking method and system based on parallel three-branch correlation filter
CN115661197B (en) * 2022-10-14 2025-08-15 东华大学 Target tracking method and system based on parallel three-branch correlation filter
CN116109975A (en) * 2023-02-08 2023-05-12 广州宝立科技有限公司 Power grid safety operation monitoring image processing method and intelligent video monitoring system
CN116109975B (en) * 2023-02-08 2023-10-20 广州宝立科技有限公司 Power grid safety operation monitoring image processing method and intelligent video monitoring system

Also Published As

Publication number Publication date
CN109285179B (en) 2021-05-14

Similar Documents

Publication Publication Date Title
CN109285179A (en) A moving target tracking method based on multi-feature fusion
CN103077531B (en) Based on the gray scale Automatic Target Tracking method of marginal information
CN112288758B (en) Infrared and visible light image registration method for power equipment
CN110335270A (en) Transmission line of electricity defect inspection method based on the study of hierarchical regions Fusion Features
CN114581486A (en) Template update target tracking algorithm based on multi-layer features of fully convolutional Siamese network
CN106611421B (en) SAR image segmentation method based on feature learning and sketch line segment constraints
CN106991693B (en) Binocular Stereo Matching Method Based on Fuzzy Support Weight
CN109242884A (en) Remote sensing video target tracking method based on JCFNet network
CN105046206B (en) Based on the pedestrian detection method and device for moving prior information in video
CN109740483A (en) A kind of rice growing season detection method based on deep-neural-network
CN104700421A (en) Edge detection algorithm based on canny self-adaptive threshold value
CN108230367A (en) A kind of quick method for tracking and positioning to set objective in greyscale video
CN109708658B (en) A visual odometry method based on convolutional neural network
CN106611423B (en) SAR image segmentation method based on ridge ripple filter and deconvolution structural model
CN113223055B (en) Image target tracking model establishing method and image target tracking method
CN109859101B (en) Method and system for thermal infrared image recognition of crop canopy
CN110136202A (en) A kind of multi-targets recognition and localization method based on SSD and dual camera
CN113313047B (en) A lane line detection method and system based on lane structure prior
CN112733581B (en) Vehicle attribute recognition method and system
CN110348492A (en) A kind of correlation filtering method for tracking target based on contextual information and multiple features fusion
CN116245949B (en) A high-precision visual SLAM method based on improved quadtree feature point extraction
CN114511627A (en) Target fruit positioning and dividing method and system
CN108364305B (en) Vehicle camera video target tracking method based on improved DSST
CN110660077A (en) Multi-scale target tracking method fusing multiple features
CN106780727A (en) A kind of headstock detection model method for reconstructing and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant