Learning Background-Suppressed Dual-Regression Correlation Filters for Visual Tracking
Figure 1. Overall framework of the proposed BSDCF tracker.
Figure 2. Comparison between the baseline BACF and the proposed BSDCF. The center figure shows their position-error curves on the Basketball sequence.
Figure 3. Evaluation results in terms of precision and success plots on the OTB100 benchmark.
Figure 4. Evaluation results in terms of precision and success plots on the TC128 benchmark.
Figure 5. Evaluation results in terms of precision and success plots on the UAVDT benchmark.
Figure 6. Qualitative evaluation of the proposed BSDCF against the top five trackers on six challenging video sequences: DragonBaby, Bolt2, Carchasing_ce1, Freeman3, Lemming, and Shaking (from top to bottom).
Figure 7. Precision and success-rate scores for different key parameters on OTB100.
Abstract
1. Introduction
- We propose a novel background-suppressed dual-regression correlation filter (BSDCF) for visual tracking. Its overall strategy exploits the target response to restrict the rate of change of the background response, addressing the response aberrations caused by background interference;
- Using the alternating direction method of multipliers (ADMM) [23], each subproblem of the proposed BSDCF admits an efficient closed-form solution;
- Experimental results on the OTB100, TC128, and UAVDT benchmarks show that the proposed BSDCF is competitive against 13 state-of-the-art (SOTA) trackers and achieves better tracking performance in complex tracking scenarios.
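To make the second contribution concrete, the ADMM update pattern referenced above (split the objective, alternate closed-form subproblem solutions, update the multiplier) can be illustrated on a generic sparse-regression problem. This is a minimal sketch of standard ADMM from [23], not the paper's actual BSDCF objective; all function and variable names are illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    # Closed-form proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=300):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by ADMM.

    Each subproblem has a closed-form solution, mirroring the way the
    BSDCF filter subproblems are solved in the paper.
    """
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    # Cache the inverse used by the x-subproblem (closed form).
    M = np.linalg.inv(AtA + rho * np.eye(n))
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))          # x-subproblem
        z = soft_threshold(x + u, lam / rho)   # z-subproblem
        u = u + x - z                          # multiplier update
    return z
```

The same three-step structure (two filter subproblems plus a Lagrange-multiplier update) underlies the optimization in Section 3.3, with the subproblems solved in the Fourier domain instead.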
2. Related Works
2.1. Tracking with DCF
2.2. DCF Tracking with Boundary Effects
2.3. DCF Tracking with Background Suppression
3. Proposed Method
3.1. Short Review of the DCF Tracker
3.2. The Proposed BSDCF Tracker
3.3. Optimization
3.3.1. Subproblem
3.3.2. Subproblem
3.3.3. Lagrange Multiplier
3.3.4. Algorithm Complexity Analysis
3.4. Object Detection and Model Update
Algorithm 1. Background-suppressed dual-regression correlation filters (BSDCF).
Input: The target state in the first frame (including target position and scale information).
Output: The target position and scale information at frame t.
1. Initialize model hyperparameters.
2. for t = 1 : end do
3.   Training:
4.     Determine the search region and extract global features.
5.     Calculate the target mask matrix and obtain target features.
6.     for Iter = 1 : L do
7.       Optimize the filter model via Equations (10) and (15)–(17).
8.       Obtain the target response by correlating the target filter with the target appearance model.
9.       Optimize the filter model via Equations (10) and (15)–(17).
10.      Obtain the target response by correlating the target filter with the target appearance model.
11.    end for
12.  Detection:
13.    Crop multi-scale search regions at S different scales, centered on the target position obtained at frame (t−1).
14.    Extract multi-scale global feature maps.
15.    Calculate the target mask matrix and obtain multi-scale target features.
16.    Use Equation (18) to compute the final response map.
17.    Predict the target location and best scale from the maximum peak of the response map.
18.    Use Equation (19) to update the appearance model.
19. end for
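The detection stage of the loop above boils down to generic DCF machinery: correlation in the Fourier domain, peak localization on the response map, and a running-average appearance update. A minimal single-channel sketch of that machinery follows; the names are illustrative, and the functions are the textbook DCF operations rather than the exact Equations (18) and (19) of BSDCF.

```python
import numpy as np

def dcf_response(filter_f, patch):
    """Correlate a learned filter (stored in the Fourier domain) with an
    image patch: elementwise conjugate product, then inverse FFT."""
    patch_f = np.fft.fft2(patch)
    return np.real(np.fft.ifft2(np.conj(filter_f) * patch_f))

def locate_peak(response):
    """The target displacement is read off the maximum of the response map."""
    return np.unravel_index(np.argmax(response), response.shape)

def update_model(model, observation, eta=0.02):
    """Linear-interpolation appearance update used by most DCF trackers."""
    return (1.0 - eta) * model + eta * observation
```

For example, correlating a filter built from a template with a cyclically shifted copy of that template produces a response map whose peak sits exactly at the shift, which is what makes the peak usable as a displacement estimate in step 17.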
4. Experiments
4.1. Implementation Details
4.2. Overall Performance
4.3. Attribute Evaluation
4.4. Qualitative Evaluation
- (1)
- Fast motion: Representative sequences for this attribute include DragonBaby, Shaking, and Lemming. Fast motion is often accompanied by target blurring and deformation and is extremely prone to tracking failure, as in frame 500 of the Lemming sequence and frames 66 and 243 of Shaking. In the DragonBaby sequence, the target moves its body quickly and changes position drastically in the view, both of which pose a great challenge. Nevertheless, the proposed BSDCF is still able to track accurately.
- (2)
- Scale change: Carchasing_ce1 and Freeman3 both belong to this attribute. Owing to the dual model and the spatial regularization that reinforces target regions, the proposed BSDCF can quickly adapt to changes in target scale. As in frames 380 and 436 of the Carchasing_ce1 sequence and frames 220 and 440 of the Freeman3 sequence, the proposed BSDCF predicts the target state well even when the target makes a large turn or a U-turn.
- (3)
- Occlusion: DragonBaby, Carchasing_ce1, and Lemming all belong to this attribute. By using the target response to suppress response aberrations and thereby highlight the target region, the occlusion challenge is effectively addressed. For example, the proposed BSDCF locates the target at frame 370 of the Lemming sequence, frame 160 of the Carchasing_ce1 sequence, and frame 55 of the DragonBaby sequence.
- (4)
- Others: The proposed BSDCF also adapts relatively well to illumination change and background clutter. In the representative Shaking sequence, the target in a backlit environment is accompanied by a bright light that makes its appearance barely visible in the view. In the Bolt2 sequence, interference from similar objects makes the target of interest less prominent. Despite these challenges, the proposed BSDCF still achieves relatively satisfactory tracking performance.
4.5. Effectiveness Evaluation
4.5.1. Key Parameters Analysis
4.5.2. Ablation Study
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Javed, S.; Danelljan, M.; Khan, F.S.; Khan, M.H.; Felsberg, M.; Matas, J. Visual Object Tracking with Discriminative Filters and Siamese Networks: A Survey and Outlook. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 1–20.
2. Marvasti-Zadeh, S.M.; Cheng, L.; Ghanei-Yakhdan, H.; Kasaei, S. Deep learning for visual tracking: A comprehensive survey. IEEE Trans. Intell. Transp. Syst. 2022, 23, 3943–3968.
3. Zhang, J.; Sun, J.; Wang, J.; Li, Z.; Chen, X. An object tracking framework with recapture based on correlation filters and Siamese networks. Comput. Electr. Eng. 2022, 98, 107730.
4. Shi, H.; Liu, C. A new cast shadow detection method for traffic surveillance video analysis using color and statistical modeling. Image Vis. Comput. 2020, 94, 103863.
5. An, Z.; Wang, X.; Li, B.; Xiang, Z.; Zhang, B. Robust visual tracking for UAVs with dynamic feature weight selection. Appl. Intell. 2023, 53, 3836–3849.
6. Danelljan, M.; Hager, G.; Shahbaz Khan, F.; Felsberg, M. Learning spatially regularized correlation filters for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision, Washington, DC, USA, 7–13 December 2015; pp. 4310–4318.
7. Li, F.; Tian, C.; Zuo, W.; Zhang, L.; Yang, M. Learning spatial-temporal regularized correlation filters for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4904–4913.
8. Dai, K.; Wang, D.; Lu, H.; Sun, C.; Li, J. Visual tracking via adaptive spatially-regularized correlation filters. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 4670–4679.
9. Kiani Galoogahi, H.; Sim, T.; Lucey, S. Correlation filters with limited boundaries. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 4630–4638.
10. Kiani Galoogahi, H.; Fagg, A.; Lucey, S. Learning background-aware correlation filters for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1135–1143.
11. Lukezic, A.; Vojir, T.; Cehovin Zajc, L.; Matas, J.; Kristan, M. Discriminative correlation filter with channel and spatial reliability. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6309–6318.
12. Henriques, J.F.; Caseiro, R.; Martins, P.; Batista, J. High-speed tracking with kernelized correlation filters. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 583–596.
13. Wang, M.; Liu, Y.; Huang, Z. Large margin object tracking with circulant feature maps. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4021–4029.
14. Xu, T.; Feng, Z.; Wu, X.; Kittler, J. Learning adaptive discriminative correlation filters via temporal consistency preserving spatial feature selection for robust visual object tracking. IEEE Trans. Image Process. 2019, 28, 5596–5609.
15. Li, Y.; Fu, C.; Ding, F.; Huang, Z.; Lu, G. AutoTrack: Towards high-performance visual tracking for UAV with automatic spatio-temporal regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11923–11932.
16. Lin, F.; Fu, C.; He, Y.; Guo, F.; Tang, Q. BiCF: Learning bidirectional incongruity-aware correlation filter for efficient UAV object tracking. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 2365–2371.
17. Huang, H.; Zha, Y.; Zheng, M.; Zhang, P. ACFT: Adversarial correlation filter for robust tracking. IET Image Process. 2019, 13, 2687–2693.
18. Ji, Y.; He, J.; Sun, X.; Bai, Y.; Wei, Z.; Ghazali, K.H.B. Learning Augmented Memory Joint Aberrance Repressed Correlation Filters for Visual Tracking. Symmetry 2022, 14, 1502.
19. Xu, L.; Kim, P.; Wang, M.; Pan, J.; Yang, X.; Gao, M. Spatio-temporal joint aberrance suppressed correlation filter for visual tracking. Complex Intell. Syst. 2022, 8, 3765–3777.
20. Li, T.; Ding, F.; Yang, W. UAV object tracking by background cues and aberrances response suppression mechanism. Neural Comput. Appl. 2021, 33, 3347–3361.
21. Xu, T.; Wu, X. Fast visual object tracking via distortion-suppressed correlation filtering. In Proceedings of the 2016 IEEE International Smart Cities Conference (ISC2), Trento, Italy, 12–15 September 2016; pp. 1–6.
22. Chen, Z.; Guo, Q.; Wan, L.; Feng, W. Background-suppressed correlation filters for visual tracking. In Proceedings of the 2018 IEEE International Conference on Multimedia and Expo (ICME), San Diego, CA, USA, 23–27 July 2018; pp. 1–6.
23. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
24. Bolme, D.S.; Beveridge, J.R.; Draper, B.A.; Lui, Y.M. Visual object tracking using adaptive correlation filters. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2544–2550.
25. Danelljan, M.; Shahbaz Khan, F.; Felsberg, M.; Van de Weijer, J. Adaptive color attributes for real-time visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1090–1097.
26. Danelljan, M.; Robinson, A.; Shahbaz Khan, F.; Felsberg, M. Beyond correlation filters: Learning continuous convolution operators for visual tracking. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 472–488.
27. Fu, C.; Lin, F.; Li, Y.; Chen, G. Correlation filter-based visual tracking for UAV with online multi-feature learning. Remote Sens. 2019, 11, 549.
28. Danelljan, M.; Bhat, G.; Shahbaz Khan, F.; Felsberg, M. Eco: Efficient convolution operators for tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6638–6646.
29. Xu, T.; Feng, Z.; Wu, X.; Kittler, J. Joint group feature selection and discriminative filter learning for robust visual object tracking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 7950–7960.
30. Zhu, G.; Wang, J.; Wu, Y.; Zhang, X.; Lu, H. MC-HOG correlation tracking with saliency proposal. In Proceedings of the AAAI Conference on Artificial Intelligence, Phoenix, AZ, USA, 12–17 February 2016; Volume 30.
31. Bai, S.; He, Z.; Dong, Y.; Bai, H. Multi-hierarchical independent correlation filters for visual tracking. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6.
32. Bertinetto, L.; Valmadre, J.; Golodetz, S.; Miksik, O.; Torr, P.H. Staple: Complementary learners for real-time tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1401–1409.
33. Li, Y.; Zhu, J.; Hoi, S.C.; Song, W.; Wang, Z.; Liu, H. Robust estimation of similarity transformation for visual object tracking. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 8666–8673.
34. Li, F.; Yao, Y.; Li, P.; Zhang, D.; Zuo, W.; Yang, M.-H. Integrating boundary and center correlation filters for visual tracking with aspect ratio variation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2001–2009.
35. Danelljan, M.; Häger, G.; Khan, F.S.; Felsberg, M. Discriminative scale space tracking. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1561–1575.
36. Li, Y.; Zhu, J. A scale adaptive kernel correlation filter tracker with feature integration. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 5–12 September 2014; pp. 254–265.
37. Danelljan, M.; Häger, G.; Khan, F.; Felsberg, M. Accurate scale estimation for robust visual tracking. In Proceedings of the British Machine Vision Conference, Nottingham, UK, 1–5 September 2014.
38. Wang, F.; Yin, S.; Mbelwa, J.T.; Sun, F. Context and saliency aware correlation filter for visual tracking. Multimed. Tools Appl. 2022, 81, 27879–27893.
39. Li, Y.; Fu, C.; Ding, F.; Huang, Z.; Pan, J. Augmented memory for correlation filters in real-time UAV tracking. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 1559–1566.
40. Sun, Y.; Sun, C.; Wang, D.; He, Y.; Lu, H. Roi pooled correlation filters for visual tracking. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 5783–5791.
41. Tang, M.; Yu, B.; Zhang, F.; Wang, J. High-speed tracking with multi-kernel correlation filters. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–21 June 2018; pp. 4874–4883.
42. Fawad, J.K.M.; Rahman, M.; Amin, Y.; Tenhunen, H. Low-Rank Multi-Channel Features for Robust Visual Object Tracking. Symmetry 2019, 11, 1155.
43. Ma, C.; Huang, J.; Yang, X.; Yang, M. Hierarchical convolutional features for visual tracking. In Proceedings of the IEEE International Conference on Computer Vision, Washington, DC, USA, 7–13 December 2015; pp. 3074–3082.
44. Zhang, J.; Yuan, T.; He, Y.; Wang, J. A background-aware correlation filter with adaptive saliency-aware regularization for visual tracking. Neural Comput. Appl. 2022, 34, 6359–6376.
45. Jiang, L.; Zheng, Y.; Cheng, X.; Jeon, B. Dynamic temporal–spatial regularization-based channel weight correlation filter for aerial object tracking. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5.
46. Liu, H.; Li, B. Target tracker with masked discriminative correlation filter. IET Image Process. 2020, 14, 2227–2234.
47. Mueller, M.; Smith, N.; Ghanem, B. Context-aware correlation filter tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1396–1404.
48. Huang, B.; Xu, T.; Shen, Z.; Jiang, S.; Li, J. BSCF: Learning background suppressed correlation filter tracker for wireless multimedia sensor networks. Ad Hoc Netw. 2021, 111, 102340.
49. Zhang, F.; Ma, S.; Qiu, Z.; Qi, T. Learning target-aware background-suppressed correlation filters with dual regression for real-time UAV tracking. Signal Process. 2022, 191, 108352.
50. Zhang, H.; Li, Y.; Liu, H.; Yuan, D.; Yang, Y. Learning Response-Consistent and Background-Suppressed Correlation Filters for Real-Time UAV Tracking. Sensors 2023, 23, 2980.
51. Fu, C.; Xu, J.; Lin, F.; Guo, F.; Liu, T.; Zhang, Z. Object saliency-aware dual regularized correlation filter for real-time aerial tracking. IEEE Trans. Geosci. Remote Sens. 2020, 58, 8940–8951.
52. Wu, Y.; Lim, J.; Yang, M.H. Object Tracking Benchmark. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1834–1848.
53. Liang, P.; Blasch, E.; Ling, H. Encoding color information for visual tracking: Algorithms and benchmark. IEEE Trans. Image Process. 2015, 24, 5630–5644.
54. Du, D.; Qi, Y.; Yu, H.; Yang, Y.; Duan, K.; Li, G.; Zhang, W.; Huang, Q.; Tian, Q. The unmanned aerial vehicle benchmark: Object detection and tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 370–386.
| Trackers | STRCF | ECO_HC | LADCF_HC | BACF | ARCF | AutoTrack | SRDCF | Ours |
|---|---|---|---|---|---|---|---|---|
| Precision | 74.8 | 76.3 | 76.1 | 72.2 | 74.9 | 74.5 | 70.6 | 79.3 |
| Success rate | 64.2 | 63.0 | 65.7 | 62.0 | 63.5 | 62.4 | 59.1 | 68.5 |
| FPS | 25 | 51 | 20 | 39 | 17 | 47 | 7 | 22 |
| Attribute | STRCF | ECO_HC | BACF | ARCF | SRDCF | AutoTrack | LADCF_HC | Ours |
|---|---|---|---|---|---|---|---|---|
| LR | 69.7 | 73.0 | 69.5 | 67.9 | 69.0 | 71.4 | 69.2 | 63.3 |
| IPR | 74.5 | 68.5 | 71.6 | 68.3 | 64.4 | 67.1 | 74.1 | 76.1 |
| OPR | 76.8 | 72.3 | 70.9 | 68.3 | 65.2 | 67.3 | 77.8 | 77.8 |
| IV | 78.3 | 76.1 | 78.4 | 75.6 | 73.7 | 75.1 | 79.1 | 83.8 |
| OCC | 75.4 | 74.9 | 69.7 | 68.7 | 67.9 | 67.3 | 79.2 | 76.5 |
| DEF | 73.7 | 74.0 | 69.7 | 71.6 | 66.5 | 68.7 | 72.2 | 76.7 |
| BC | 79.6 | 76.2 | 75.9 | 74.2 | 67.8 | 70.9 | 79.7 | 86.5 |
| SV | 76.3 | 71.7 | 70.2 | 67.7 | 65.7 | 64.3 | 76.8 | 75.6 |
| MB | 79.4 | 75.3 | 73.5 | 74.2 | 71.1 | 70.0 | 79.3 | 79.9 |
| FM | 75.7 | 74.0 | 75.6 | 73.4 | 70.5 | 69.1 | 74.4 | 78.3 |
| OV | 69.3 | 65.7 | 68.9 | 62.5 | 55.3 | 63.6 | 71.5 | 75.9 |
| Attribute | STRCF | ECO_HC | BACF | ARCF | SRDCF | AutoTrack | LADCF_HC | Ours |
|---|---|---|---|---|---|---|---|---|
| LR | 71.1 | 77.3 | 71.0 | 72.6 | 69.8 | 79.7 | 70.1 | 64.5 |
| IPR | 80.9 | 78.2 | 78.6 | 78.1 | 70.9 | 78.6 | 80.8 | 82.2 |
| OPR | 85.0 | 81.1 | 77.9 | 76.9 | 72.7 | 73.2 | 83.8 | 83.2 |
| IV | 84.1 | 79.8 | 80.8 | 76.6 | 76.7 | 79.2 | 80.8 | 85.4 |
| OCC | 81.4 | 81.0 | 73.5 | 74.0 | 71.1 | 70.4 | 83.0 | 78.8 |
| DEF | 84.4 | 82.3 | 76.9 | 76.9 | 72.6 | 70.1 | 81.2 | 81.6 |
| BC | 86.6 | 81.3 | 79.4 | 74.9 | 72.1 | 79.2 | 83.4 | 89.9 |
| SV | 84.2 | 80.8 | 77.1 | 77.1 | 73.2 | 74.3 | 83.6 | 81.0 |
| MB | 82.6 | 78.0 | 74.1 | 75.7 | 73.3 | 75.3 | 80.7 | 81.8 |
| FM | 80.2 | 79.2 | 78.7 | 76.8 | 75.1 | 74.8 | 79.0 | 81.3 |
| OV | 76.6 | 73.7 | 74.8 | 67.1 | 58.2 | 70.9 | 81.5 | 80.5 |
| Trackers | OTB100 Precision | OTB100 Success | TC128 Precision | TC128 Success | UAVDT Precision | UAVDT Success | Average Precision | Average Success |
|---|---|---|---|---|---|---|---|---|
| Baseline | 81.6 | 76.8 | 64.6 | 61.3 | 70.6 | 53.4 | 72.2 | 63.8 |
| Baseline-W | 83.1 | 79.1 | 75.8 | 69.7 | 72.7 | 51.7 | 77.2 | 66.8 |
| Baseline-B | 84.7 | 80.6 | 75.4 | 70.7 | 73.6 | 50.8 | 77.9 | 67.3 |
| Ours | 85.2 | 81.7 | 76.8 | 70.4 | 76.0 | 53.4 | 79.3 | 68.5 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
He, J.; Ji, Y.; Sun, X.; Wu, S.; Wu, C.; Chen, Y. Learning Background-Suppressed Dual-Regression Correlation Filters for Visual Tracking. Sensors 2023, 23, 5972. https://doi.org/10.3390/s23135972