
CN117953245A - Infrared unmanned aerial vehicle tail wing detection and tracking method based on template matching and KCF algorithm

Info

Publication number: CN117953245A
Application number: CN202311790094.2A
Authority: CN (China)
Prior art keywords: unmanned aerial vehicle, image, template, tail wing
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 武春风, 向文鼎, 韩璇, 陈善球, 刘文劲, 王娟娟, 郭诗嘉, 刘子岳
Current and original assignee: CASIC Microelectronic System Research Institute Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Filing date: 2023-12-22
Publication date: 2024-04-30


Classifications

    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V10/757 Matching configurations of points or features
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/30 Noise filtering
    • G06T7/20 Analysis of motion
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10048 Infrared image
    • G06T2207/20024 Filtering details
    • G06T2207/20056 Discrete and fast Fourier transform [DFT, FFT]
    • G06T2207/30248 Vehicle exterior or interior
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an infrared unmanned aerial vehicle tail wing detection and tracking method based on template matching and the KCF algorithm, which comprises the following steps: acquiring video sequence data of an infrared unmanned aerial vehicle; denoising the infrared unmanned aerial vehicle image with median filtering; threshold-segmenting the image into a foreground part and a background part; computing the foreground region with a centroid algorithm to obtain the initial centroid coordinates of the unmanned aerial vehicle, acquiring its initial position information, and cropping an initial template; obtaining an initial detection frame of the unmanned aerial vehicle image by template matching; performing long/short-axis analysis via morphological analysis, carrying out tail wing feature analysis, and obtaining the tail wing area of the unmanned aerial vehicle; performing feature calculation on the tail wing region with a kernel correlation filtering algorithm, carrying out feature matching against the filter template during tracking, and selecting the best-matching image region so as to continuously track the flight track of the unmanned aerial vehicle. The invention can detect and track the tail wing of the unmanned aerial vehicle accurately and stably.

Description

Infrared unmanned aerial vehicle tail wing detection and tracking method based on template matching and KCF algorithm
Technical Field
The invention relates to the technical field of image processing, in particular to an infrared unmanned aerial vehicle tail wing detection and tracking method based on template matching and KCF algorithm.
Background
With the rapid development of modern science and technology, unmanned aerial vehicles have increasingly entered public view, and their detection and tracking have become a research hotspot. The tail wing is an essential condition for the balanced flight of an unmanned aerial vehicle, so in anti-unmanned-aerial-vehicle systems the detection and tracking of the tail wing is particularly important for ground defense systems.
Existing approaches mainly detect and track the unmanned aerial vehicle as a whole; few algorithms are dedicated to detecting and tracking its tail wing. For example, patent CN113963272 A, "a method for detecting an image target of an unmanned aerial vehicle based on yolov", takes the whole unmanned aerial vehicle as its target and does not detect the tail wing separately; for an anti-unmanned-aerial-vehicle system the resulting aiming range is too wide, and the core part of the vehicle is neither tracked nor struck. Likewise, patent CN113487517 A, "image-enhancement-based unmanned aerial vehicle target detection method, apparatus and device", still treats the unmanned aerial vehicle as a whole target and does not specifically detect and track its weak-point area. It follows that most researchers do not consider the weak points of the unmanned aerial vehicle when detecting and tracking it for targeted striking.
Disclosure of Invention
In view of the above, the invention provides an infrared unmanned aerial vehicle tail wing detection and tracking method based on a template matching and KCF algorithm.
The invention discloses an infrared unmanned aerial vehicle tail wing detection and tracking method based on template matching and the KCF algorithm, which comprises the following steps:
Step 1: acquiring an infrared unmanned aerial vehicle video sequence;
Step 2: image preprocessing: denoising the infrared unmanned aerial vehicle image with median filtering, enhancing foreground target features and weakening the influence of background noise and the camera's inherent noise; threshold-segmenting the image into a foreground part and a background part; obtaining the initial centroid coordinates of the unmanned aerial vehicle with a centroid algorithm, acquiring the initial position information of the unmanned aerial vehicle, and cropping an initial template;
Step 3: acquiring the tail wing area of the unmanned aerial vehicle based on the initial template and morphological analysis;
Step 4: performing feature calculation on the tail wing region of the unmanned aerial vehicle with a kernel correlation filtering algorithm, performing feature matching against the filter template during tracking, and selecting the best-matching image region so as to continuously track the flight track of the unmanned aerial vehicle.
Further, the background in the unmanned aerial vehicle infrared video sequence is a sky area, and the main noise sources are the sky background and camera noise.
Further, in step 2:
The filter kernel used for denoising is selected as a compromise between the target size in the image and the background noise level: when the target is relatively large, a larger filter kernel is selected; when the target is smaller, a smaller filter kernel is selected.
After filtering, threshold segmentation divides the image into a foreground part and a background part. Otsu's method is used to obtain a threshold and binarize the image, dividing it into background and foreground according to its gray-level characteristics so that the between-class variance of the foreground and background images is maximized. The foreground target area is the selected unmanned aerial vehicle area, and the centroid coordinates within the foreground area are calculated; the centroid position of the target is obtained by weighted averaging of the coordinates along the X-axis and Y-axis directions. A sketch of this preprocessing is given below.
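As a concrete illustration, the preprocessing of step 2 can be sketched with OpenCV as follows. This is a minimal sketch, assuming a single-channel 8-bit infrared frame; the kernel size ksize is an illustrative default, not a value prescribed by the invention:

```python
import cv2

def preprocess(frame_gray, ksize=3):
    """Median-filter an infrared frame, Otsu-threshold it into
    foreground/background, and return the mask plus the centroid
    of the foreground region."""
    # Median filtering suppresses sky-background and sensor noise
    # while preserving the edges of the small foreground target.
    denoised = cv2.medianBlur(frame_gray, ksize)
    # Otsu's method picks the threshold that maximises the
    # between-class variance of foreground and background.
    _, mask = cv2.threshold(denoised, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Centroid = weighted average of foreground pixel coordinates,
    # computed here from the image moments of the binary mask.
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return mask, None  # no foreground found
    return mask, (m["m10"] / m["m00"], m["m01"] / m["m00"])
```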
Further, in step 3:
After the initial unmanned aerial vehicle foreground area is obtained, long/short-axis (long and short diameter) analysis is performed by morphological analysis using the centroid coordinates of the unmanned aerial vehicle and the circumscribed rectangular bounding box of the segmented vehicle; tail wing feature analysis is then carried out according to the flight direction of the unmanned aerial vehicle to obtain the tail wing area, and the tail wing part is initialized as the template area. A sketch of this step is given below.
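The patent does not spell out the tail-localization geometry, so the following is only one plausible reading: it assumes the flight direction is supplied as a unit vector (e.g. estimated from centroid motion between frames) and that the tail occupies a fixed fraction of the long axis; both flight_dir and tail_fraction are illustrative assumptions:

```python
import cv2
import numpy as np

def tail_region(mask, flight_dir, tail_fraction=0.3):
    """Estimate the tail-wing box from the foreground mask via
    long/short-axis analysis of the largest connected component."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    body = max(contours, key=cv2.contourArea)
    # Oriented bounding box: the longer side approximates the
    # fuselage (long-diameter) axis.
    (cx, cy), (w, h), angle = cv2.minAreaRect(body)
    theta = np.deg2rad(angle if w >= h else angle + 90.0)
    axis = np.array([np.cos(theta), np.sin(theta)])
    long_d = max(w, h)
    # The tail sits at the end of the long axis pointing away from
    # the flight direction.
    if np.dot(axis, flight_dir) > 0:
        axis = -axis
    tx, ty = np.array([cx, cy]) + axis * long_d * (1 - tail_fraction) / 2
    half = max(1, int(long_d * tail_fraction / 2))
    return (int(tx - half), int(ty - half), 2 * half, 2 * half)  # x, y, w, h
```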
Further, the template matching method may employ the following formula:

R(x, y) = Σ_{x′,y′} T(x′, y′) · I(x + x′, y + y′)

wherein (x′, y′) is the current coordinate point of the image within the template area, T(x′, y′) is the intensity value of the current template at (x′, y′), (x, y) is the current coordinate point in the image, I(x, y) is the intensity value of the current image at (x, y), I(x + x′, y + y′) is the intensity value at point (x + x′, y + y′) in the image, and R(x, y) is the accumulation of the products of the current template and the image over the template size with (x, y) as the starting point, representing the matching degree between the image at the current point and the template. The region with the highest matching degree is locked as the target region of the current frame, and this region is updated as the latest template region.
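In OpenCV terms, the raw cross-correlation above corresponds to cv2.matchTemplate in TM_CCORR mode; the sketch below uses the normalized variant TM_CCORR_NORMED, a practical substitution (plain cross-correlation is biased toward bright regions) rather than something the patent specifies:

```python
import cv2

def match_template(image, template):
    """Slide the template over the image, score each position by
    cross-correlation R(x, y), and return the best-matching box."""
    R = cv2.matchTemplate(image, template, cv2.TM_CCORR_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(R)
    x, y = max_loc
    h, w = template.shape[:2]
    # The highest-scoring region becomes the current detection and
    # the new template region (per the update rule above).
    return (x, y, w, h), max_val
```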
Further, the step 4 includes:
The response output of the filter of the kernel correlation filtering algorithm is set to be a Gaussian response output; the model for solving the filter can then be described as:

g = f ⊗ h

where g denotes the response output, f denotes the input image, h denotes the filter template, and ⊗ denotes the correlation operation.

According to the convolution theorem, the Fourier transform of a cross-correlation equals the element-wise product of the Fourier transforms, i.e.:

F(g) = F(f) ⊙ F(h)*

where F denotes the Fourier transform, ⊙ denotes the element-wise (dot) product, and * denotes the complex conjugate.

Writing F(f) = F, F(h)* = H* and F(g) = G, we get:

H* = G ⊘ F

where ⊘ denotes element-wise division. Taking m images of the target as references, the target loss function Loss can be expressed as:

Loss = Σ_{i=1}^{m} |F_i ⊙ H* - G_i|²

where i is the index of the i-th image among the m images, G_i is the Fourier transform of the response output of the i-th image, and F_i is the Fourier transform of the i-th input image. Differentiating the above formula with respect to H* and setting the derivative to 0 yields:

H* = (Σ_{i=1}^{m} G_i ⊙ F_i*) / (Σ_{i=1}^{m} F_i ⊙ F_i*)

In the tracking process, the above template only needs to be correlated with the image of the current frame, and the coordinate corresponding to the maximum point of the resulting response is taken as the position of the target in the current frame.
Further, the template is updated as follows:

H_t = (1 - η)·H_{t-1} + η·H^(t)

where H^(t) denotes the filter template obtained from frame t, η is an empirical constant (the learning rate), and H_t denotes the updated filter template for the current frame.
Further, in the current image the coordinate corresponding to the maximum point of the response result is taken as the target position in the current frame; the area at the current position is the best-matching image area. As the video frames are updated, the kernel correlation filtering algorithm continuously updates the template so as to continuously track the flight track of the unmanned aerial vehicle. A sketch of the resulting tracker is given below.
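The equations above are the linear (MOSSE-style) formulation of a correlation filter; the patent names KCF, but the closed-form solution and update rule it states are the un-kernelised ones, so the sketch below implements exactly those. The eta and sigma values are illustrative defaults, and a practical tracker would also apply a cosine window and a padded search window:

```python
import numpy as np

class CorrelationFilterTracker:
    """Minimal correlation-filter tracker implementing the closed-form
    solution H* = sum(G_i F_i*) / sum(F_i F_i*) and the running update
    H_t = (1 - eta) H_{t-1} + eta H^(t) described above."""

    def __init__(self, eta=0.125, sigma=2.0):
        self.eta = eta        # empirical update rate
        self.sigma = sigma    # width of the desired Gaussian response peak
        self.num = None       # running numerator   sum_i G_i . conj(F_i)
        self.den = None       # running denominator sum_i F_i . conj(F_i)

    def _gaussian_response_fft(self, shape):
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        g = np.exp(-((xs - w // 2) ** 2 + (ys - h // 2) ** 2)
                   / (2.0 * self.sigma ** 2))
        return np.fft.fft2(g)

    def init(self, patch):
        """Initialise the filter from the tail-wing template patch."""
        F = np.fft.fft2(patch.astype(np.float64))
        self.G = self._gaussian_response_fft(patch.shape)
        self.num = self.G * np.conj(F)
        self.den = F * np.conj(F) + 1e-5  # epsilon avoids division by zero

    def track(self, patch):
        """Correlate the current patch with the filter; the response
        peak gives the target offset inside the window, then the
        filter is updated with the running average."""
        F = np.fft.fft2(patch.astype(np.float64))
        H_conj = self.num / self.den
        response = np.real(np.fft.ifft2(H_conj * F))
        dy, dx = np.unravel_index(np.argmax(response), response.shape)
        self.num = (1 - self.eta) * self.num + self.eta * (self.G * np.conj(F))
        self.den = (1 - self.eta) * self.den + self.eta * (F * np.conj(F) + 1e-5)
        return dx, dy, response
```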
Due to the adoption of the above technical scheme, the invention has the following advantages: the tail wing of the infrared unmanned aerial vehicle can be detected and tracked in a targeted manner, and the tail wing forms of different unmanned aerial vehicles can be tracked stably.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present invention; other drawings may be obtained from them by those skilled in the art without inventive effort.
Fig. 1 is a schematic flow diagram of the unmanned aerial vehicle detection method based on template matching and the centroid algorithm in one embodiment;
Fig. 2 is a schematic diagram of the output of each stage of unmanned aerial vehicle detection based on template matching and the centroid algorithm in one embodiment;
Fig. 3 is a schematic diagram of the unmanned aerial vehicle tail wing tracking output based on the KCF algorithm in one embodiment.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and examples; the embodiments described below are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art on this basis are intended to fall within the scope of protection of the present invention.
Referring to fig. 1 to 3, the invention provides an embodiment of an infrared unmanned aerial vehicle tail wing detection and tracking method based on a template matching and KCF algorithm, which comprises the following steps:
S1: acquire an infrared unmanned aerial vehicle video sequence; an infrared unmanned aerial vehicle image is shown in fig. 2 (a);
S2: image preprocessing: denoise the infrared unmanned aerial vehicle image with median filtering, enhancing foreground target features and weakening the influence of background noise and the camera's inherent noise; the filtered image is shown in fig. 2 (b); threshold-segment the image into a foreground part and a background part; obtain the initial centroid coordinates of the unmanned aerial vehicle with a centroid algorithm, acquire the initial position information of the unmanned aerial vehicle, and crop an initial template;
Specifically, the segmentation algorithm may be Otsu's method (OTSU): the optimal segmentation threshold of the image is found with Otsu's method and the image is segmented into foreground and background; the output obtained is shown in fig. 2 (c). The target area of the foreground part is the selected unmanned aerial vehicle area; the centroid coordinates within the foreground area are calculated, and the initial unmanned aerial vehicle position information obtained is shown in fig. 2 (d);
S3: acquire the unmanned aerial vehicle tail wing region based on the initial template and morphological analysis; a schematic diagram of the long/short-axis morphological analysis result is shown in fig. 2 (e), and the initial template of the tail wing region is shown in fig. 2 (f);
S4: perform feature calculation on the tail wing region of the unmanned aerial vehicle with a kernel correlation filtering algorithm, carry out feature matching against the filter template during tracking, and select the best-matching image region so as to continuously track the flight track of the unmanned aerial vehicle.
In this embodiment, the background in the infrared video sequence of the unmanned aerial vehicle is a sky area, and the main noise sources are the sky background and camera noise.
In the present embodiment, in S2:
The filter kernel used for denoising is selected as a compromise between the target size in the image and the background noise level: when the target is relatively large, a larger filter kernel is selected; when the target is smaller, a smaller filter kernel is selected.
After filtering, threshold segmentation divides the image into a foreground part and a background part. Otsu's method is used to obtain a threshold and binarize the image, dividing it into background and foreground according to its gray-level characteristics so that the between-class variance of the foreground and background images is maximized. The foreground target area is the selected unmanned aerial vehicle area, and the centroid coordinates within the foreground area are calculated; the centroid position of the target is obtained by weighted averaging of the coordinates along the X-axis and Y-axis directions.
In the present embodiment, in S3:
After the initial unmanned aerial vehicle foreground area is obtained, long/short-axis analysis is performed by morphological analysis using the centroid coordinates of the unmanned aerial vehicle and the circumscribed rectangular bounding box of the segmented vehicle; tail wing feature analysis is then carried out according to the flight direction of the unmanned aerial vehicle to obtain the tail wing area, and the tail wing part is initialized as the template area.
In this embodiment, the template matching method may use the following formula:

R(x, y) = Σ_{x′,y′} T(x′, y′) · I(x + x′, y + y′)

wherein (x′, y′) is the current coordinate point of the image within the template area, T(x′, y′) is the intensity value of the current template at (x′, y′), (x, y) is the current coordinate point in the image, I(x, y) is the intensity value of the current image at (x, y), I(x + x′, y + y′) is the intensity value at point (x + x′, y + y′) in the image, and R(x, y) is the accumulation of the products of the current template and the image over the template size with (x, y) as the starting point, representing the matching degree between the image at the current point and the template. The region with the highest matching degree is locked as the target region of the current frame, and this region is updated as the latest template region.
In this embodiment, S4 includes:
The response output of the filter of the kernel correlation filtering algorithm is set to be a Gaussian response output; the model for solving the filter can then be described as:

g = f ⊗ h

where g denotes the response output, f denotes the input image, h denotes the filter template, and ⊗ denotes the correlation operation.

According to the convolution theorem, the Fourier transform of a cross-correlation equals the element-wise product of the Fourier transforms, i.e.:

F(g) = F(f) ⊙ F(h)*

where F denotes the Fourier transform, ⊙ denotes the element-wise (dot) product, and * denotes the complex conjugate.

Writing F(f) = F, F(h)* = H* and F(g) = G, we get:

H* = G ⊘ F

where ⊘ denotes element-wise division. Taking m images of the target as references, the target loss function Loss can be expressed as:

Loss = Σ_{i=1}^{m} |F_i ⊙ H* - G_i|²

where i is the index of the i-th image among the m images, G_i is the Fourier transform of the response output of the i-th image, and F_i is the Fourier transform of the i-th input image. Differentiating the above formula with respect to H* and setting the derivative to 0 yields:

H* = (Σ_{i=1}^{m} G_i ⊙ F_i*) / (Σ_{i=1}^{m} F_i ⊙ F_i*)

In the tracking process, the above template only needs to be correlated with the image of the current frame, and the coordinate corresponding to the maximum point of the resulting response is taken as the position of the target in the current frame.
In this embodiment, the template is updated as follows:

H_t = (1 - η)·H_{t-1} + η·H^(t)

where H^(t) denotes the filter template obtained from frame t, η is an empirical constant (the learning rate), and H_t denotes the updated filter template for the current frame.
In this embodiment, in the current image the coordinate corresponding to the maximum point of the response result is taken as the target position in the current frame; the area at the current position is the best-matching image area. As the video frames are updated, the kernel correlation filtering algorithm continuously updates the template so as to continuously track the flight track of the unmanned aerial vehicle; an end-to-end sketch of this loop is given below.
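Tying the embodiment together, a hypothetical end-to-end loop might look as follows. This is a sketch only: it reuses the illustrative preprocess, tail_region and CorrelationFilterTracker sketches defined earlier, the video path is a placeholder, and the flight direction of the first frame is assumed known:

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("ir_drone.mp4")  # placeholder path
tracker = CorrelationFilterTracker()
box = None                              # (x, y, w, h) of the tail wing

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if box is None:
        # Detection phase (S1-S3): segment, locate centroid, crop tail.
        mask, centroid = preprocess(gray)
        if centroid is None:
            continue
        box = tail_region(mask, flight_dir=np.array([1.0, 0.0]))  # assumed direction
        if box is None:
            continue
        x, y, w, h = box
        tracker.init(gray[y:y + h, x:x + w])
    else:
        # Tracking phase (S4): correlate, shift the box by the peak offset.
        x, y, w, h = box
        dx, dy, _ = tracker.track(gray[y:y + h, x:x + w])
        box = (x + dx - w // 2, y + dy - h // 2, w, h)
```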
FIG. 3 shows the tracking effect on the dynamic unmanned aerial vehicle tail target in this embodiment, with the tracking result sampled once every 5 frames. The upper-left corner of each frame marks the current frame index of the test image and the processing frame rate: the first image (upper left) of FIG. 3 is frame 5 of the video, processed at a current frame rate of 96.3 FPS (frames per second, i.e. the number of images processed per second), and the square marked area in the image is the tracked tail area. The tail area of the current frame is used as the target template to recompute and update the unmanned aerial vehicle tail area in the next frame, so that the tail area remains continuously tracked; the subsequent images follow in the same way. On the test video data, this embodiment maintained stable tracking over 125 consecutive frames, with a tracking speed of up to 100 FPS. With the technical scheme of this embodiment, the tail wing of the infrared unmanned aerial vehicle can be detected and tracked in a targeted manner, and the flight track of the unmanned aerial vehicle can be tracked stably and rapidly.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that modifications and equivalents may be made to the specific embodiments of the invention without departing from its spirit and scope, all of which are intended to be covered by the claims.

Claims (8)

1. An infrared unmanned aerial vehicle tail wing detection and tracking method based on template matching and the KCF algorithm, characterized by comprising the following steps:
Step 1: acquiring an infrared unmanned aerial vehicle video sequence;
Step 2: image preprocessing: denoising the infrared unmanned aerial vehicle image with median filtering, enhancing foreground target features and weakening the influence of background noise and the camera's inherent noise; threshold-segmenting the image into a foreground part and a background part; obtaining the initial centroid coordinates of the unmanned aerial vehicle with a centroid algorithm, acquiring the initial position information of the unmanned aerial vehicle, and cropping an initial template;
Step 3: acquiring the tail wing area of the unmanned aerial vehicle based on the initial template and morphological analysis;
Step 4: performing feature calculation on the tail wing region of the unmanned aerial vehicle with a kernel correlation filtering algorithm, performing feature matching against the filter template during tracking, and selecting the best-matching image region so as to continuously track the flight track of the unmanned aerial vehicle.
2. The method of claim 1, wherein the background in the unmanned aerial vehicle infrared video sequence is a sky region and the main noise sources are the sky background and camera noise.
3. The method according to claim 1, characterized in that in said step 2:
the filter kernel used for denoising is selected as a compromise between the target size in the image and the background noise level: when the target is relatively large, a larger filter kernel is selected; when the target is smaller, a smaller filter kernel is selected;
after filtering, threshold segmentation divides the image into a foreground part and a background part; Otsu's method is used to obtain a threshold and binarize the image, dividing it into background and foreground according to its gray-level characteristics so that the between-class variance of the foreground and background images is maximized; the foreground target area is the selected unmanned aerial vehicle area, and the centroid coordinates within the foreground area are calculated; the centroid position of the target is obtained by weighted averaging of the coordinates along the X-axis and Y-axis directions.
4. The method according to claim 1, characterized in that in said step 3:
after the initial unmanned aerial vehicle foreground area is obtained, long/short-axis analysis is performed by morphological analysis using the centroid coordinates of the unmanned aerial vehicle and the circumscribed rectangular bounding box of the segmented vehicle; tail wing feature analysis is then carried out according to the flight direction of the unmanned aerial vehicle to obtain the tail wing area, and the tail wing part is initialized as the template area.
5. The method of claim 4, wherein the template matching method uses the following formula:

R(x, y) = Σ_{x′,y′} T(x′, y′) · I(x + x′, y + y′)

wherein (x′, y′) is the current coordinate point of the image within the template area, T(x′, y′) is the intensity value of the current template at (x′, y′), (x, y) is the current coordinate point in the image, I(x, y) is the intensity value of the current image at (x, y), I(x + x′, y + y′) is the intensity value at point (x + x′, y + y′) in the image, and R(x, y) is the accumulation of the products of the current template and the image over the template size with (x, y) as the starting point, representing the matching degree between the image at the current point and the template; the region with the highest matching degree is locked as the target region of the current frame, and this region is updated as the latest template region.
6. The method according to claim 1, wherein the step 4 comprises:
the response output of the filter of the kernel correlation filtering algorithm is set to be a Gaussian response output; the model for solving the filter can then be described as:

g = f ⊗ h

where g denotes the response output, f denotes the input image, h denotes the filter template, and ⊗ denotes the correlation operation;

according to the convolution theorem, the Fourier transform of a cross-correlation equals the element-wise product of the Fourier transforms, i.e.:

F(g) = F(f) ⊙ F(h)*

where F denotes the Fourier transform, ⊙ denotes the element-wise (dot) product, and * denotes the complex conjugate;

writing F(f) = F, F(h)* = H* and F(g) = G, we get:

H* = G ⊘ F

where ⊘ denotes element-wise division; taking m images of the target as references, the target loss function Loss can be expressed as:

Loss = Σ_{i=1}^{m} |F_i ⊙ H* - G_i|²

where i is the index of the i-th image among the m images, G_i is the Fourier transform of the response output of the i-th image, and F_i is the Fourier transform of the i-th input image; differentiating the above formula with respect to H* and setting the derivative to 0 yields:

H* = (Σ_{i=1}^{m} G_i ⊙ F_i*) / (Σ_{i=1}^{m} F_i ⊙ F_i*)

in the tracking process, the above template only needs to be correlated with the image of the current frame, and the coordinate corresponding to the maximum point of the resulting response is taken as the position of the target in the current frame.
7. The method of claim 6, wherein the template updating is performed as follows:
H_t = (1 - η)·H_{t-1} + η·H^(t)

where H^(t) denotes the filter template obtained from frame t, η is an empirical constant (the learning rate), and H_t denotes the updated filter template for the current frame.
8. The method according to claim 1, wherein in the current image the coordinate corresponding to the maximum point of the response result is taken as the target position in the current frame; the area at the current position is the best-matching image area; and as the video frames are updated, the kernel correlation filtering algorithm continuously updates the template so as to continuously track the flight track of the unmanned aerial vehicle.

Priority Applications (1)

Application number: CN202311790094.2A · Priority date: 2023-12-22 · Filing date: 2023-12-22 · Title: Infrared unmanned aerial vehicle tail wing detection and tracking method based on template matching and KCF algorithm

Applications Claiming Priority (1)

Application number: CN202311790094.2A · Priority date: 2023-12-22 · Filing date: 2023-12-22 · Title: Infrared unmanned aerial vehicle tail wing detection and tracking method based on template matching and KCF algorithm

Publications (1)

Publication number: CN117953245A · Publication date: 2024-04-30

Family

Family ID: 90795649

Family Applications (1)

Application number: CN202311790094.2A · Status: Pending · Priority date: 2023-12-22 · Filing date: 2023-12-22 · Title: Infrared unmanned aerial vehicle tail wing detection and tracking method based on template matching and KCF algorithm

Country Status (1)

Country: CN · Publication: CN117953245A

Cited By (1)

* Cited by examiner, † Cited by third party
• CN118334063A * (priority date 2024-06-13, published 2024-07-12): A method for image stabilization of ground-based telescopes based on kernel correlation filtering



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination